Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and diagrams for information systems analysis and design. In this paper we present a bibliographic survey of algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread across a broad spectrum of computer science. This bibliography constitutes a first attempt to encompass both theoretical and application-oriented papers from disparate areas.
27 pages, 5 figures, 16 tables.-- This paper and its associated computer program are available via the Computer Physics Communications homepage on ScienceDirect (http://www.sciencedirect.com/science/journal/00104655).-- The PDF of the article is the pre-print version: arXiv:1102.1898v1
Reducing quantum overhead

A quantum computer is expected to outperform its classical counterpart on certain tasks. One such task is the factorization of large integers, a problem whose presumed hardness underpins the security of bank cards and online privacy. Using a small-scale quantum computer comprising five trapped calcium ions, Monz et al. implement a scalable version of Shor's factorization algorithm. Because the ions' functions are recycled and the architecture is scalable, the process is more efficient than previous implementations. The approach thus offers the potential for building a powerful quantum computer with fewer resources. Science, this issue p. 1068

Integer factorization is implemented in a scalable trapped-ion-based quantum computer. Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor devised a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, the hardware, the quantum error correction, and the algorithmic realization itself all need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four "cache qubits" and by implementing generalized arithmetic operations, known as modular multipliers. The algorithm was realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%.
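The number-theoretic skeleton behind the experiment is easy to state classically. The sketch below is a toy illustration, not the trapped-ion implementation: it brute-forces the order finding that the quantum hardware performs via period finding, and its inner step x -> a*x mod N is the "modular multiplier" the abstract refers to. The base a = 7 is one standard choice for N = 15.

```python
# Toy classical sketch of Shor's algorithm for N = 15 (not the experiment).
# The quantum computer's job is the order finding done here by brute force.
from math import gcd

def find_order(a: int, N: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod N); found quantumly in Shor's algorithm."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N   # one modular-multiplier step: x -> a*x mod N
        r += 1
    return r

def shor_factor(N: int, a: int) -> tuple[int, int]:
    """Classical post-processing: recover factors from the order r of a mod N."""
    r = find_order(a, N)
    # Need r even and a**(r/2) != -1 (mod N); otherwise retry with another base.
    assert r % 2 == 0 and pow(a, r // 2, N) != N - 1, "unlucky base a; retry"
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

print(shor_factor(15, 7))  # (3, 5): order of 7 mod 15 is 4; gcd(48,15)=3, gcd(50,15)=5
```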
RESEARCH POSITIONS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Associate Professor of Computer Science (without tenure), July 2017–present
NBX Career Development Chair, July 2015–present
Assistant Professor of Computer Science, February 2015–June 2017
Principal Investigator, Computer Science and Artificial Intelligence Laboratory (CSAIL), February 2015–present

ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE
Assistant Professor of Computer Science, July 2012–January 2015

MICROSOFT RESEARCH NEW ENGLAND
Postdoctoral Researcher, July 2011–June 2012
Sverrir Thorgeirsson, Theo B. Weidmann, Zhendong Su
Many software development platforms now support LLM-driven programming, or "vibe coding", a technique that allows one to specify programs in natural language and iterate from observed behavior, all without directly editing source code. While its adoption is accelerating, little is known about which skills best predict success in this workflow. We report a preregistered cross-sectional study with tertiary-level students (N = 100) who completed measures of computer-science achievement, domain-general cognitive skills, written-communication proficiency, and a vibe-coding assessment. Tasks were curated via an eight-expert consensus process and executed in a purpose-built, vibe-coding environment that mirrors commercial tools while enabling controlled evaluation. We find that both writing skill and CS achievement are significant predictors of vibe-coding performance, and that CS achievement remains a significant predictor after controlling for domain-general cognitive skills. The results may inform tool and curriculum design, including when to emphasize prompt-writing versus CS fundamentals to support future software creators.
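As a rough illustration of the analysis the abstract describes, the sketch below fits the kind of regression in which writing skill and CS achievement predict vibe-coding performance while controlling for domain-general cognitive skill. It uses synthetic data and illustrative variable names; it is not the study's preregistered analysis code.

```python
# Hypothetical sketch of a "predictor X still significant after controlling
# for Z" analysis, on synthetic data (not the study's data or code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
cognitive = rng.normal(size=n)                       # domain-general skill
cs = 0.5 * cognitive + rng.normal(size=n)            # CS achievement
writing = rng.normal(size=n)                         # written communication
vibe = 0.4 * cs + 0.3 * writing + 0.2 * cognitive + rng.normal(size=n)
df = pd.DataFrame(dict(vibe=vibe, cs=cs, writing=writing, cognitive=cognitive))

base = smf.ols("vibe ~ cognitive", data=df).fit()             # covariate only
full = smf.ols("vibe ~ cognitive + cs + writing", data=df).fit()
print(full.summary().tables[1])  # cs and writing coefficients with p-values
```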
The pervasiveness of Computer Science (CS) in today’s digital society and the extensive use of computational methods in other sciences call for its introduction in the school curriculum. Hence, Computer Science Education is becoming more and more relevant. In CS K-12 education, computational thinking (CT) is one of the most abused buzzwords: different stakeholders (media, educators, politicians) give it different meanings, some more oriented to CS, others more linked to its interdisciplinary value. The expression was introduced by two leading researchers, Jeannette Wing (in 2006) and Seymour Papert (much earlier, in 1980), each of them stressing different aspects of a common theme. This paper will use a historical approach to review, discuss, and put in context these first two educational and epistemological approaches to CT. We will relate them to today’s context and evaluate which aspects are still relevant for CS K-12 education. Of the two, we devote particular attention to “Papert’s CT,” which is the lesser known and the lesser studied. We will conclude that “Wing’s CT” and “Papert’s CT,” when correctly understood, are both relevant to today’s computer science education. From Wing, we should retain computer science’s centrality, CT being the (scientific and cultural) substratum of the technical competencies. Under this interpretation, CT is a lens and a set of categories for understanding the algorithmic fabric of today’s world. From Papert, we should retain the constructionist idea that only a social and affective involvement of students with the technical content will make programming an interdisciplinary tool for learning (also) other disciplines. We will also discuss the often quoted (and often unverified) claim that CT automatically “transfers” to other broad 21st-century skills. Our analysis will be relevant for educators and scholars to recognize and avoid misconceptions and build on the two core roots of CT.
Feudjio Ghislain, Saha Tchinda Beaudelaire, Romain Atangana
et al.
Context: Early detection of ophthalmic diseases, such as drusen and glaucoma, can be facilitated by analyzing changes in the retinal microvascular structure. The implementation of algorithms based on convolutional neural networks (CNNs) has seen significant growth in the automation of disease identification. However, the complexity of these algorithms increases with the diversity of pathologies to be classified. In this study, we introduce a new lightweight algorithm based on CNNs for the classification of multiple categories of eye diseases, using discrete wavelet transforms to enhance feature extraction.

Methods: The proposed approach integrates a simple CNN architecture optimized for multi-class and multi-label classification, with an emphasis on maintaining a compact model size. We improved the feature extraction phase by implementing multi-scale decomposition techniques, such as biorthogonal wavelet transforms, allowing us to capture both fine and coarse features. The developed model was evaluated using a dataset of retinal images categorized into four classes, including a composite class for less common pathologies.

Results: The feature extraction based on biorthogonal wavelets enabled our model to achieve perfect values of precision, recall, and F1-score for half of the targeted classes. The overall average accuracy of the model reached 0.9621.

Conclusion: The integration of biorthogonal wavelet transforms into our CNN model has proven effective, surpassing the performance of several similar algorithms reported in the literature. This advancement not only enhances the accuracy of real-time diagnoses but also supports the development of sophisticated tools for the detection of a wide range of retinal pathologies, thereby improving clinical decision-making processes.
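A minimal sketch of the kind of pipeline described, assuming PyWavelets for the biorthogonal DWT and PyTorch for a compact CNN. The specific wavelet ("bior2.2"), architecture, and shapes are illustrative assumptions, not the authors' exact model.

```python
# Hypothetical wavelet-feature + compact-CNN pipeline (illustrative only).
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_channels(img: np.ndarray, wavelet: str = "bior2.2") -> np.ndarray:
    """Single-level 2-D DWT: stack the approximation and detail sub-bands
    (LL, LH, HL, HH) as input channels, capturing coarse and fine structure."""
    LL, (LH, HL, HH) = pywt.dwt2(img, wavelet)
    return np.stack([LL, LH, HL, HH]).astype(np.float32)

class TinyRetinaCNN(nn.Module):
    """Compact classifier over the 4 wavelet sub-band channels; four output
    classes as in the abstract (use a per-class sigmoid for multi-label)."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: one grayscale 256x256 fundus image -> class logits.
img = np.random.rand(256, 256)
x = torch.from_numpy(wavelet_channels(img)).unsqueeze(0)  # (1, 4, ~128, ~128)
logits = TinyRetinaCNN()(x)
print(logits.shape)  # torch.Size([1, 4])
```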
Eric Burns, Christopher L. Fryer, Ivan Agullo
et al.
Astrophysical observations of the cosmos allow us to probe extreme physics and answer foundational questions about our universe. Modern astronomy increasingly operates under a holistic approach, probing the same question with multiple diagnostics: how sources vary over time, how they appear across the electromagnetic spectrum, and their other signatures, including gravitational waves, neutrinos, cosmic rays, and dust on Earth. Astrophysical observations are now reaching the point where approximate physics models are insufficient. Key sources of interest are explosive transients, whose understanding requires multidisciplinary studies at the intersection of astrophysics, gravity, nuclear science, plasma physics, fluid dynamics and turbulence, computation, particle physics, atomic, molecular, and optical science, condensed matter and materials science, radiation transport, and high energy density physics. This white paper provides an overview of the major scientific advances that lie at the intersection of physics and astronomy and are best probed through time-domain and multimessenger astrophysics, an exploration of how multidisciplinary science can be fostered, and introductory descriptions of the relevant scientific disciplines and key astrophysical sources of interest.
Lukas Hörmann, Wojciech G. Stark, Reinhard J. Maurer
Nanoscale design of surfaces and interfaces is essential for modern technologies like organic LEDs, batteries, fuel cells, superlubricating surfaces, and heterogeneous catalysis. However, these systems often exhibit complex surface reconstructions and polymorphism, with properties influenced by kinetic processes and dynamic behavior. A lack of accurate and scalable simulation tools has limited computational modeling of surfaces and interfaces. Recently, machine learning and data-driven methods have expanded the capabilities of theoretical modeling, enabling, for example, the routine use of machine-learned interatomic potentials to predict energies and forces across numerous structures. Despite these advances, significant challenges remain, including the scarcity of large, consistent datasets and the need for computational and data-efficient machine learning methods. Additionally, a major challenge lies in the lack of accurate reference data and electronic structure methods for interfaces. Density Functional Theory, while effective for bulk materials, is less reliable for surfaces, and too few accurate experimental studies on interface structure and stability exist. Here, we will sketch the current state of data-driven methods and machine learning in computational surface science and provide a perspective on how these methods will shape the field in the future.
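As a concrete illustration of how machine-learned interatomic potentials enter such workflows, the sketch below uses the Atomic Simulation Environment (ASE) with its built-in EMT potential as a stand-in for an MLIP calculator; the slab, settings, and workflow are illustrative assumptions, not tied to this perspective's specific methods.

```python
# Minimal ASE sketch: energies, forces, and relaxation for a metal surface.
# EMT is a classical stand-in; in practice one would attach an MLIP
# calculator trained on ab initio reference data.
from ase.build import fcc111
from ase.calculators.emt import EMT
from ase.optimize import BFGS

# Cu(111) slab with vacuum on both sides, a typical surface model.
slab = fcc111("Cu", size=(3, 3, 4), vacuum=10.0)
slab.calc = EMT()  # swap in an MLIP calculator here

print("E =", slab.get_potential_energy(), "eV")
print("max |F| =", abs(slab.get_forces()).max(), "eV/Angstrom")

# Relax the surface: the kind of task repeated across thousands of candidate
# structures when screening reconstructions and polymorphs.
BFGS(slab, logfile=None).run(fmax=0.05)
```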