In the span of four decades, quantum computation has evolved from an intellectual curiosity to a potentially realizable technology. Today, small-scale demonstrations of quantum algorithmic primitives on hundreds of physical qubits have become possible. Nevertheless, there are significant outstanding challenges in quantum hardware, fabrication, software architecture, and algorithms on the path towards a full-stack scalable quantum computing technology. Here, we provide a comprehensive review of these scaling challenges. We show how to facilitate scaling by adopting existing semiconductor technology to build much higher-quality qubits, employing systems engineering approaches, and performing distributed heterogeneous quantum-classical computing. We provide a detailed resource and sensitivity analysis for quantum applications on surface-code error-corrected quantum computers given current, target, and desired hardware specifications based on superconducting qubits, accounting for a realistic distribution of errors. We provide comprehensive resource estimates for several utility-scale applications including quantum chemistry calculations, catalyst design, NMR spectroscopy, and Fermi-Hubbard simulation. We show that orders of magnitude enhancement in performance could be obtained by a combination of hardware improvements and tight quantum-HPC integration. Furthermore, we introduce high-performance architectures for quantum-probabilistic computing with custom-designed accelerators to tackle today's industry-scale classical optimization, machine learning, and quantum simulation tasks in a cost-effective manner.
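As a hedged, back-of-the-envelope illustration of why better physical qubits translate into orders-of-magnitude resource savings, the sketch below applies the standard surface-code scaling heuristic p_L ≈ A (p/p_th)^((d+1)/2); the constants A = 0.1 and p_th = 1% and the ~2d² physical qubits per logical qubit figure are common rules of thumb, not numbers taken from this review.

```python
# Hedged sketch: rough surface-code overhead estimate, not the paper's methodology.
# Uses the standard heuristic p_L ~ A * (p/p_th)**((d+1)/2) with assumed constants
# A = 0.1 and p_th = 1e-2; real estimates depend on the error model, decoder,
# and lattice-surgery layout.
def required_distance(p_phys: float, p_logical_target: float,
                      A: float = 0.1, p_th: float = 1e-2) -> int:
    """Smallest odd code distance d with A*(p_phys/p_th)**((d+1)/2) <= target."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

if __name__ == "__main__":
    for p in (1e-3, 1e-4):                   # current vs. improved physical error rate
        d = required_distance(p, 1e-12)      # per-logical-operation error budget
        print(f"p={p:.0e}: d={d}, ~{2 * d * d} physical qubits per logical qubit")
```

With these assumptions, reducing the physical error rate from 1e-3 to 1e-4 roughly halves the required code distance and shrinks the per-logical-qubit footprint by a factor of a few, which is the kind of sensitivity the resource analysis quantifies in detail.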
With the arrival of the era of artificial intelligence (AI) and big data, the explosive growth of data has placed higher demands on computer hardware and systems. Neuromorphic techniques inspired by biological nervous systems are expected to be one approach to breaking the von Neumann bottleneck. Piezotronic neuromorphic devices modulate electrical transport characteristics through the piezopotential and directly associate external mechanical motion with electrical output signals in an active manner, with the capability to sense, store, and process information about external stimuli. In this review, we first present piezotronic neuromorphic devices (classified by device structure into strain-gated piezotronic transistors and piezoelectric nanogenerator-gated field-effect transistors) and discuss their operating mechanisms and related manufacturing techniques. We then summarize the research progress on piezotronic neuromorphic devices in recent years and provide a detailed discussion of multifunctional applications, including bionic sensing, information storage, logic computing, and electrical/optical artificial synapses. Finally, looking ahead to future developments, challenges, and perspectives, we discuss how to modulate novel neuromorphic devices with piezotronic effects more effectively. We believe that piezotronic neuromorphic devices have great potential for the next generation of interactive sensation/memory/computation, facilitating the development of the Internet of Things, AI, biomedical engineering, and other fields.
Raphael Seidel, Sebastian Bock, René Zander et al.
While significant progress has been made on the hardware side of quantum computing, support for high-level quantum programming abstractions remains underdeveloped compared to classical programming languages. In this article, we introduce Qrisp, a framework designed to bridge several gaps between high-level programming paradigms in state-of-the-art software engineering and the physical reality of today's quantum hardware. The framework aims to provide a systematic approach to quantum algorithm development such that algorithms can be implemented, maintained and improved with little effort. We propose a number of programming abstractions that are inspired by classical paradigms, yet consistently focus on the particular needs of a quantum developer. Unlike many other high-level language approaches, Qrisp compiles programs all the way down to the circuit level, making them executable on most existing physical backends. The introduced abstractions enable the Qrisp compiler to leverage algorithm structure for increased compilation efficiency. Finally, we present a set of code examples, including an implementation of Shor's factoring algorithm. For the latter, the resulting circuit shows significantly reduced quantum resource requirements, strongly supporting the claim that systematic quantum algorithm development can give quantitative benefits.
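For illustration, a minimal sketch in the spirit of Qrisp's typed high-level interface is given below. It assumes the QuantumFloat type, the h gate function, slice-assignment encoding and the get_measurement method as described in Qrisp's documentation; exact names and signatures should be verified against the current release.

```python
# Hedged sketch of Qrisp-style high-level programming; illustrative only, and the
# API details (QuantumFloat, slice encoding, get_measurement) may differ by release.
from qrisp import QuantumFloat, h

a = QuantumFloat(3)          # 3-bit unsigned quantum integer
b = QuantumFloat(3)
a[:] = 5                     # encode classical values into the quantum variables
b[:] = 3

h(a[0])                      # put the least significant bit of a into superposition

c = a * b                    # arithmetic is expressed on typed variables,
                             # not on hand-written circuits
print(c.get_measurement())   # expected outcome distribution roughly {12: 0.5, 15: 0.5}
```

The point of the abstraction is that the developer reasons about typed variables and arithmetic, while the compiler handles qubit allocation, uncomputation and lowering to gate-level circuits for the chosen backend.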
Artificial compound eye technology has been a research hotspot due to its advantages, including a large field of view (FOV). However, the lack of adjustability limits its applications. Here, an adaptive compound eye (ACE) imaging device based on an electrowetting liquid aperture with adjustable shape is proposed to achieve both large FOV and adaptive imaging. A method to adjust the aperture based on the electrowetting effect is proposed, which dispenses with any mechanical moving components, enabling fast adjustment, a compact structure, and low power consumption. The liquid aperture can be flexibly adjusted to a roughly circular shape with a variable diameter between 0 and 5.07 mm or to a horizontally or vertically elongated shape with a maximum aspect ratio of 9.5. Experimental results demonstrate the feasibility of achieving both large FOV and adaptive imaging, including light intensity adaptability and transmittable information frequency adaptability. Therefore, the proposed ACE imaging device can operate under different lighting conditions and can be used to distinguish between target and background images. Its distributed control capability also ensures that it can adapt to locally changing imaging scenes. The proposed ACE imaging device is expected to be applied in many fields such as machine vision, detection, and measurement.
Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
Mariela de Jesús Zhigue Macas, Javier Fernando Jaramillo Larriva, Nayade Caridad Reyes Palau
Actions directed from educational institutions to contribute to the comprehensive formation of students' personalities enable the internalization of knowledge and value orientations that are gradually reflected in their behavior. The objective of this research was to design and validate a pedagogical strategy mediated by free-software Information and Communication Technologies for fostering positive qualities in sixth-year students at the Unidad Educativa “Enrique Malo Andrade”, Cuenca, Ecuador. The research was conceived from a mixed-methods approach, in which quantitative and qualitative data collection and analysis processes were carried out. The initial characterization of the process of forming positive qualities in students made it possible to infer the main causes of the difficulties identified, among which it was found that teachers do not delve deeply enough into the design of the formation process. The pedagogical strategy for forming positive qualities in students was designed around the stages of planning, implementation and control, mediated by Information and Communication Technologies that bring together an operating system, free-software office tools, image-editing tools and Internet browsers. The implementation of the pedagogical strategy made it possible to offer a significant contribution to the formation of positive qualities in students.
Understanding the inner workings of neural networks, including transformers, remains one of the most challenging puzzles in machine learning. This study introduces a novel approach by applying the principles of gauge symmetries, a key concept in physics, to neural network architectures. By regarding model functions as physical observables, we find that parametric redundancies of various machine learning models can be interpreted as gauge symmetries. We mathematically formulate the parametric redundancies in neural ODEs, and find that their gauge symmetries are given by spacetime diffeomorphisms, which play a fundamental role in Einstein’s theory of gravity. Viewing neural ODEs as a continuum version of feedforward neural networks, we show that the parametric redundancies in feedforward neural networks are indeed lifted to diffeomorphisms in neural ODEs. We further extend our analysis to transformer models, finding natural correspondences with neural ODEs and their gauge symmetries. The concept of gauge symmetries sheds light on the complex behavior of deep learning models through physics and provides us with a unifying perspective for analyzing various machine learning architectures.
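As a hedged, concrete instance of such a parametric redundancy (not taken from the paper), the snippet below shows the well-known positive-rescaling symmetry of ReLU networks: scaling one layer up and the next layer down leaves the model function, the "physical observable", unchanged.

```python
# Hedged illustration: one concrete parametric redundancy in a ReLU feedforward
# network. Because relu(a*x) = a*relu(x) for a > 0, rescaling the first layer by a
# and the second by 1/a leaves the model function unchanged -- the kind of "gauge"
# transformation that the gauge-symmetry viewpoint generalizes.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def f(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

x = rng.normal(size=4)
a = 2.7                                    # any positive per-neuron rescaling works
y_orig  = f(x, W1, b1, W2, b2)
y_gauge = f(x, a * W1, a * b1, W2 / a, b2)
print(np.allclose(y_orig, y_gauge))        # True: different parameters, same function
```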
Jean-Sébastien Coron, François Gérard, Matthias Trannoy et al.
The main protection against side-channel attacks consists of computing every function on multiple shares via the masking countermeasure. While the masking countermeasure was originally developed for securing block ciphers such as AES, protecting lattice-based cryptosystems is often more challenging because of the diversity of the underlying algorithms. In this paper, we introduce new gadgets for the high-order masking of the NTRU cryptosystem, with security proofs in the classical ISW probing model. We then describe the first fully masked implementation of the NTRU Key Encapsulation Mechanism submitted to NIST, including key generation. To assess the practicality of our countermeasures, we provide a concrete implementation on the ARM Cortex-M3 architecture, together with a t-test leakage evaluation.
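For readers unfamiliar with masking, the sketch below illustrates the basic idea of high-order arithmetic masking on a single coefficient modulo q. It is a toy example under assumed parameters, not one of the paper's NTRU gadgets, which additionally require secure share refreshing and multiplication gadgets proven in the ISW probing model.

```python
# Hedged sketch of high-order arithmetic masking. A secret coefficient x mod Q is
# split into t+1 random shares whose sum is x, so any t of them are statistically
# independent of the secret; linear operations can then be done share-wise.
import secrets

Q = 2048  # illustrative NTRU-like modulus (power of two), not a parameter from the paper

def share(x: int, t: int) -> list[int]:
    """Split x into t+1 additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(t)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unshare(shares: list[int]) -> int:
    return sum(shares) % Q

def masked_add(xs: list[int], ys: list[int]) -> list[int]:
    """Share-wise addition: linear operations need no extra randomness."""
    return [(a + b) % Q for a, b in zip(xs, ys)]

x, y, t = 1234, 567, 3                      # third-order masking
assert unshare(masked_add(share(x, t), share(y, t))) == (x + y) % Q
```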
Computer engineering. Computer hardware, Information technology
This work presents a methodology for integrating bio-energy supply chain networks with combined heat and power generation networks and the heat demand of industrial processes. The industrial heat demand profile investigated involves multiple periods of operation and multiple utilities in heat exchanger networks requiring retrofit. The approach adopted involves a three-layered superstructure, with the first layer comprising the bio-energy supply chain network that provides energy sources for steam generation. The second layer comprises the energy generation hub, where the bio-energy feedstocks of the first layer are converted to heat and power. A portion of the high-pressure steam generated in the energy hub layer, together with the intermediate steam levels exiting the turbines, is fed to the third layer as hot utilities to satisfy the hot utility demand of the multi-period heat exchanger network. The multi-period network, which is simultaneously retrofitted and optimised with the networks of the first and second layers of the integrated superstructure, is also fed with steam generated from fossil sources in the second layer. The overall superstructure is modelled as a multi-objective mixed-integer non-linear program with economics and environmental impact as objectives. The solution obtained for the integrated model involves the selection of certain quantities of feedstocks from all available feedstock supply locations. The retrofitted multi-period network competes favourably in terms of investment cost with solutions of existing methods, and it uses a mix of the available renewable and non-renewable energy sources.
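As a hedged miniature of the multi-objective idea (a linear toy, not the paper's superstructure MINLP), the sketch below poses a weighted-sum feedstock-selection problem in Pyomo; all cost, emission and supply figures are invented placeholders.

```python
# Hedged toy sketch: weighted-sum treatment of the economic/environmental trade-off
# for feedstock selection. All numbers are invented for illustration.
import pyomo.environ as pyo

cost = {"wood_chips": 40.0, "straw": 25.0, "coal": 15.0}     # $/MWh of heat (assumed)
co2 = {"wood_chips": 0.04, "straw": 0.06, "coal": 0.34}      # tCO2/MWh of heat (assumed)
supply = {"wood_chips": 600.0, "straw": 400.0, "coal": 1e6}  # MWh available (assumed)
demand, w = 800.0, 0.5                                       # heat demand and weight
feeds = list(cost)

m = pyo.ConcreteModel()
m.x = pyo.Var(feeds, domain=pyo.NonNegativeReals)            # MWh taken from each source
m.meet_demand = pyo.Constraint(expr=sum(m.x[i] for i in feeds) >= demand)
m.availability = pyo.Constraint(feeds, rule=lambda m, i: m.x[i] <= supply[i])
m.obj = pyo.Objective(                                       # weighted cost + emissions
    expr=sum((w * cost[i] + (1 - w) * 100.0 * co2[i]) * m.x[i] for i in feeds),
    sense=pyo.minimize,
)
pyo.SolverFactory("glpk").solve(m)                           # any LP/MILP solver works
print({i: pyo.value(m.x[i]) for i in feeds})
```

With the emissions term included, the toy model favours the renewable feedstocks over coal even though coal is cheapest, which is the trade-off the full superstructure explores at scale.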
Chemical engineering, Computer engineering. Computer hardware
The complexity of physical engineering objects requires new software development technologies capable of simulating real-life cases. The huge number of such cases can be covered by the object-oriented paradigm. This general idea and some advantages of using an object-oriented language (Smalltalk) are exemplified by the presentation of a system for earth dam control. The system is an expert-type program equipped with advanced monitoring and visualisation functions for existing dams. The software development process, starting from the requirements description, is presented. The structure of the dam model, of the inference engine and of the class hierarchy is shown as an example. The reusability of the system is demonstrated by its implementation for different earth dams.
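A hedged sketch of the class-hierarchy idea is shown below; the original system is written in Smalltalk, and the Python classes and attribute names here are invented purely to illustrate how a generic dam model and sensor hierarchy support reuse across different dams.

```python
# Hedged sketch (the original system is in Smalltalk): how an object-oriented class
# hierarchy supports reuse across earth dams. Names and values are illustrative only.
class Sensor:
    def __init__(self, name: str, threshold: float):
        self.name, self.threshold = name, threshold
    def read(self) -> float:
        raise NotImplementedError

class Piezometer(Sensor):
    def read(self) -> float:
        return 3.2          # stub: a real subclass would query the monitoring hardware

class EarthDam:
    """Generic dam model: a new dam reuses this class with its own sensor set."""
    def __init__(self, name: str, sensors: list[Sensor]):
        self.name, self.sensors = name, sensors
    def alarms(self) -> list[str]:
        # minimal rule-based "inference": compare readings against thresholds
        return [s.name for s in self.sensors if s.read() > s.threshold]

dam = EarthDam("ExampleDam", [Piezometer("P1", 3.0), Piezometer("P2", 4.0)])
print(dam.alarms())          # ['P1']
```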
Computer engineering. Computer hardware, Mechanics of engineering. Applied mechanics
Kernel models of potential energy surfaces (PESs) for polyatomic molecules are often restricted by a specific choice of the kernel function. This can be avoided by optimizing the complexity of the kernel function. For regression problems with very expensive data, the functional form of the model kernels can be optimized in the Gaussian process (GP) setting through compositional function search guided by the Bayesian information criterion. However, the compositional kernel search is computationally demanding and relies on greedy strategies, which may yield sub-optimal kernels. An alternative strategy for increasing the complexity of GP kernels treats a GP as a Bayesian neural network (NN) with a variable number of hidden layers, which yields NNGP models. Here, we present a direct comparison of GP models with composite kernels and NNGP models for applications aiming at the construction of global PESs for polyatomic molecules. We show that NNGP models of PESs can be trained much more efficiently and yield better generalization accuracy without relying on any specific form of the kernel function. We illustrate that NNGP models trained on distributions of energy points at low energies produce accurate predictions of the PES at high energies. We also illustrate that NNGP models can extrapolate in the input variable space by building the free energy surface of the Heisenberg model trained in the paramagnetic phase and validated in the ferromagnetic phase. By construction, composite kernels yield more accurate models than kernels with a fixed functional form. Therefore, by illustrating that NNGP models outperform GP models with composite kernels, our work suggests that NNGP models should be a preferred choice of kernel models for PESs.
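As a hedged baseline illustration (not the paper's code), the snippet below fits a GP with a hand-composed kernel to a toy one-dimensional potential using scikit-learn; this is the kind of fixed, composite-kernel model against which the NNGP approach is compared, and the toy data and hyperparameters are placeholders.

```python
# Hedged sketch: a GP regression model with a hand-composed (sum/product) kernel,
# fit to a Morse-like 1-D stand-in for a potential energy surface.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, ConstantKernel as C

def toy_pes(r):                               # Morse-like 1-D potential as toy data
    return (1.0 - np.exp(-1.5 * (r - 1.0))) ** 2

r_train = np.linspace(0.6, 2.5, 20).reshape(-1, 1)
y_train = toy_pes(r_train).ravel()

# a composite kernel: scaled RBF plus scaled Matern, hyperparameters refit by the GP
kernel = C(1.0) * RBF(length_scale=0.5) + C(0.1) * Matern(length_scale=1.0, nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(r_train, y_train)

r_test = np.linspace(0.6, 3.5, 5).reshape(-1, 1)   # includes mild extrapolation
mean, std = gp.predict(r_test, return_std=True)
print(np.c_[r_test.ravel(), mean, std])
```

The NNGP alternative replaces the hand-chosen kernel with one induced by a Bayesian NN of variable depth, which is the comparison the paper carries out on real molecular data.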
The quantum computing (QC) field is rapidly moving beyond the realm of pure science to become a commercially viable technology that may be able to overcome the drawbacks of traditional computing. Over the last few years, major technology companies have invested in building programming frameworks and hardware to create applications specifically designed for quantum computing. The development of QC hardware is accelerating; however, operationalizing QC raises the need for software-intensive methodologies, approaches, procedures, tools, and roles and responsibilities for creating industry-focused quantum software applications. This paper outlines the concept of a quantum software engineering (QSE) life cycle, which entails the engineering of quantum requirements and the design, implementation, testing and maintenance of quantum software. The paper notably advocates for collaborative efforts between the industrial community and software engineering researchers to propose practical solutions that support the complete set of activities for the development of quantum software. The proposed vision makes it easier for researchers and practitioners to suggest new procedures, reference designs, cutting-edge tools, and methods for utilizing quantum computers and creating the newest and most advanced quantum software.
Loris Belcastro, Riccardo Cantini, Fabrizio Marozzo et al.
In the age of the Internet of Things and social media platforms, huge amounts of digital data are generated by and collected from many sources, including sensors, mobile devices, wearable trackers and security cameras. This data, commonly referred to as Big Data, is challenging current storage, processing, and analysis capabilities. New models, languages, systems and algorithms continue to be developed to effectively collect, store, analyze and learn from Big Data. Most of the recent surveys provide a global analysis of the tools that are used in the main phases of Big Data management (generation, acquisition, storage, querying and visualization of data). Differently, this work analyzes and reviews parallel and distributed paradigms, languages and systems used today to analyze and learn from Big Data on scalable computers. In particular, we provide an in-depth analysis of the properties of the main parallel programming paradigms (MapReduce, workflow, BSP, message passing, and SQL-like) and, through programming examples, we describe the most used systems for Big Data analysis (e.g., Hadoop, Spark, and Storm). Furthermore, we discuss and compare the different systems by highlighting the main features of each of them, their diffusion (community of developers and users) and the main advantages and disadvantages of using them to implement Big Data analysis applications. The final goal of this work is to help designers and developers identify and select the most appropriate programming solution based on their skills, hardware availability, application domains and purposes, also considering the support provided by the developer community.
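As a hedged illustration of the MapReduce paradigm discussed above, the sketch below expresses the classic word count in Apache Spark's RDD API; the input path is a placeholder.

```python
# Hedged sketch: the MapReduce programming paradigm expressed in Spark's RDD API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()

counts = (
    spark.sparkContext.textFile("hdfs:///data/corpus/*.txt")  # placeholder input path
    .flatMap(lambda line: line.split())        # map: emit one record per word
    .map(lambda word: (word, 1))               # emit (key, value) pairs
    .reduceByKey(lambda a, b: a + b)           # reduce: sum the counts per key
)
print(counts.take(10))
spark.stop()
```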
Computer engineering. Computer hardware, Information technology
The rapidly increasing interest in the use of hydrogen has triggered many safety-related questions. Apart from non-trivial questions about failure frequencies and ignition probabilities, the consequence modelling of potential events also carries significant uncertainties. Notably, even when assuming the straightforward scenario of direct ignition of a hydrogen release, the resulting phenomena, a “jet fire” (continuous event) or a “fireball” (instantaneous event), are still modelled using traditional and potentially misleading methods. While a hydrogen jet fire is known to have a very small impact zone around the flame itself, the commonly applied “Chamberlain” approach would result in a flame Surface Emissive Power (SEP) that is highly unrealistic for hydrogen. Furthermore, for the instantaneous release of compressed hydrogen, the fireball phenomenon is often modelled using typical BLEVE models. The relations in these BLEVE models correlate a radiative fraction to a vapour pressure, which is irrelevant for situations where non-Pressurized Liquefied Gases (PLG) are being studied. Both approaches result in a very high flame emissive power, while the BLEVE fireball growth and rise behaviour is also based on experiments with flashing liquefied gas, which is a different phenomenon from the expansion of a compressed gas. Other methods, which correlate the fireball diameter to the expansion to the Upper Flammability Limit (UFL), are non-conservative for hydrogen due to its very high UFL.
Because of the large uncertainty and unrealistic approaches in current modelling, Gexcon started applying a dedicated gas fireball model in its consequence modelling tool EFFECTS. This model differs from the commonly applied BLEVE fireball approaches. While experimental data on compressed hydrogen fireballs is still scarce, the gas fireball model is based on relations from the available literature, focussing on non-PLG fireball data and available experiments providing hydrogen flame radiation fluxes. The selected relations for fireball diameter and lift-off are similar to those of the BLEVE fireball model, but the rising and growth velocity is different, because it does not include flashing-liquid behaviour. During the search for an appropriate model to simulate gas fireballs, it was found that there is very little information in the literature on how to correlate the SEP of the fireball to the chemical properties of the substance. This radiative behaviour is strongly influenced by the flame's temperature, the gas composition and potential soot formation. Because the use of a “soot fraction” would be unrealistic for substances like hydrogen, experimentally derived values have been applied for the fireball radiative flux. Apart from the heat radiation effect, the overpressure phenomenon (blast) is also derived using equations that differ from those for expanding-vapour explosions.
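For orientation, the sketch below works through the conventional BLEVE-style fireball correlations, D = 5.8 M^(1/3) and t = 0.45 M^(1/3), together with a simple solid-flame radiation estimate. These are the traditional liquefied-gas relations whose applicability to compressed hydrogen the text questions, and the SEP, transmissivity and distance values are assumptions for illustration only.

```python
# Hedged worked example (illustrative only): classic BLEVE-style fireball relations,
# D = 5.8*M**(1/3) [m], t = 0.45*M**(1/3) [s] for M in kg (M below ~30,000 kg),
# combined with a solid-flame estimate of received heat flux. SEP, atmospheric
# transmissivity and receiver distance are assumed placeholder values.
def fireball_conventional(mass_kg: float, sep_kw_m2: float = 350.0,
                          distance_m: float = 100.0, tau: float = 0.8):
    D = 5.8 * mass_kg ** (1 / 3)           # fireball diameter [m]
    t = 0.45 * mass_kg ** (1 / 3)          # fireball duration [s]
    view = (0.5 * D / distance_m) ** 2     # max view factor, sphere to facing element
    q = sep_kw_m2 * view * tau             # received heat flux [kW/m2]
    return D, t, q

print(fireball_conventional(100.0))        # 100 kg release: D ~ 27 m, t ~ 2.1 s, q ~ 5 kW/m2
```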
Chemical engineering, Computer engineering. Computer hardware
Hardware security is one of the most researched areas in the field of security. It focuses on discovering and understanding attacks and countermeasures for electronic hardware that provides the “root of trust” for modern computing systems, upon which the software stack is built. The increasing reliance on electronic devices in our everyday life has also escalated the risks of security threats against these technologies. Students today are exposed to these devices and thus require a hands-on learning experience to be aware of the threats, solutions, and future research challenges in hardware security. Currently, there are limited opportunities for students to learn and understand hardware security. A significant factor limiting exposure to these topics is the lack of an accessible, low-cost, flexible, and ready-made platform for training students on the innards of a computing system and the spectrum of security issues and solutions at the hardware level. In this paper, we introduce the motivation and efforts behind a course named “Hands-on Hardware Security.” The Department of Electrical and Computer Engineering at the University of Florida has been offering this course for the past three years, providing experiential learning of hardware security through a set of well-designed experiments performed on a custom hardware module. We also present, in detail, the idea of a custom-designed, easy-to-understand, flexible hardware module with fundamental building blocks that can emulate a computer system and create a network of connected devices. We refer to the module as “HaHa SEP” (Hardware Hacking Security Education Platform), and it encourages students to learn and exercise “ethical hacking,” a critical concept in the hardware security field. It is the first and only known lab course offered online in which students can perform ethical hacking of a computing system using a dedicated hardware module. This paper also provides a brief introduction to the experiments performed using this module, highlighting their significance in the field of hardware security. Finally, it concludes with a compilation of course evaluation survey results discussing the success of this course in achieving its educational goals.
Timotej Vidovic, Maja Colnik, Mojca Škerget et al.
Plastic waste presents a significant problem for the environment: large amounts of plastics are produced, of which the majority are still landfilled, contaminating soils, waterways and aquifers. Thermoset plastic materials pose a particular challenge, as they are more difficult to recycle than thermoplastic materials. One such thermoset material is melamine resin, noted for its heat resistance and stable structure but usually disposed of in landfills at the end of its life cycle. Hydrothermal processes are a promising method to tackle the reprocessing of thermoset materials, as they utilize water at high temperature and pressure to convert plastic waste into useful materials.
The hydrothermal decomposition of melamine-etherified resin (MER) fibres is studied in this work. The reaction occurs in a hydrothermal reactor with water at subcritical conditions. The aqueous phase extracted from the post-reaction mixture was analysed using tube tests for the content of formaldehyde, organic acids, total nitrogen and ammonium. Environmental footprints are further analysed based on the data obtained from the experimental work and compared for three different decomposition temperatures: 200, 300 and 350 °C. The footprint assessment is performed mainly using the OpenLCA software and various databases. The environmental comparison of the processes is evaluated with respect to greenhouse gas (GHG), nitrogen, phosphorus, energy and ecological footprints, as well as human toxicity potential. The results show that decomposition at 200 °C yielded the lowest environmental impacts; however, the highest amounts of secondary compounds were obtained when conducting the process at 300 °C.
Chemical engineering, Computer engineering. Computer hardware
Electronics, such as those used in the communication, aerospace and energy domains, often have high reliability requirements. To reduce the development and testing cost of electronics, reliability analysis needs to be incorporated into the design stage. Compared with traditional approaches, the physics-of-failure (PoF) methodology can better address cost reduction in the design stage. However, there are many difficulties in practical engineering applications, such as processing large amounts of engineering information simultaneously. Therefore, a flexible approach and a software system for assisting designers in performing reliability analysis based on the PoF method during electronic product design are proposed. This approach integrates the PoF method with computer-aided simulation methods such as CAD, FEM and CFD. The software system integrates functional modules such as product modeling, load-stress analysis and reliability analysis, which can help designers analyze the reliability of electronic products in actual engineering design. The system includes software and hardware that validate the simulation models. Finally, a case study is presented in which the software system is used to analyze the reliability of the filter module of an industrial communication system. The results of the analysis indicate that the system can effectively support reliability improvement and ensure the accuracy of the analysis with high computational efficiency.
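As a hedged illustration of the kind of physics-of-failure models such a system evaluates (not the paper's implementation), the sketch below computes an Arrhenius acceleration factor for steady-state temperature stress and a simplified Coffin-Manson factor for solder-joint thermal cycling; the activation energy, exponent and temperatures are placeholder values, not design data.

```python
# Hedged sketch: two textbook physics-of-failure acceleration models of the kind
# fed by load-stress analysis results. All parameter values are placeholders.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant [eV/K]

def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

def coffin_manson_af(dt_use: float, dt_test: float, n: float = 2.0) -> float:
    """Simplified Coffin-Manson acceleration factor for thermal cycling."""
    return (dt_test / dt_use) ** n

print(f"temperature AF: {arrhenius_af(55, 125):.1f}")      # 55 C field vs 125 C test
print(f"cycling AF    : {coffin_manson_af(30, 100):.1f}")  # 30 K field vs 100 K test swing
```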