Results for "Computer engineering. Computer hardware"

Showing 20 of ~8514443 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2019
1D Convolutional Neural Networks and Applications: A Survey

S. Kiranyaz, Onur Avcı, Osama Abdeljaber et al.

During the last decade, Convolutional Neural Networks (CNNs) have become the de facto standard for various Computer Vision and Machine Learning tasks. CNNs are feed-forward Artificial Neural Networks (ANNs) with alternating convolutional and subsampling layers. Deep 2D CNNs with many hidden layers and millions of parameters can learn complex objects and patterns, provided that they can be trained on a massive visual database with ground-truth labels. With proper training, this unique ability makes them the primary tool for various engineering applications involving 2D signals such as images and video frames. Yet this may not be a viable option for numerous applications over 1D signals, especially when the training data is scarce or application-specific. To address this issue, 1D CNNs have recently been proposed and have immediately achieved state-of-the-art performance in several applications such as personalized biomedical data classification and early diagnosis, structural health monitoring, anomaly detection and identification in power electronics, and motor-fault detection. Another major advantage is that real-time, low-cost hardware implementation is feasible thanks to the simple and compact configuration of 1D CNNs, which perform only 1D convolutions (scalar multiplications and additions). This paper presents a comprehensive review of the general architecture and principles of 1D CNNs along with their major engineering applications, with a focus on recent progress in this field. Their state-of-the-art performance is highlighted, concluding with their unique properties. The benchmark datasets and the principal 1D CNN software used in those applications are also publicly shared on a dedicated website.

2460 citations en Computer Science, Engineering
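The survey's point that 1D CNNs reduce to scalar multiplications and additions can be made concrete with a minimal sketch of one convolution-plus-subsampling stage in plain Python (function names and values are illustrative, not taken from the paper's shared software):

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution: slide the kernel over the signal,
    accumulating scalar multiply-adds at each position."""
    n, k = len(signal), len(kernel)
    out = []
    for start in range(0, n - k + 1, stride):
        acc = 0.0
        for j in range(k):
            acc += signal[start + j] * kernel[j]
        out.append(acc)
    return out

def relu(xs):
    """Elementwise rectified-linear activation."""
    return [x if x > 0.0 else 0.0 for x in xs]

def maxpool1d(xs, size=2):
    """Subsampling layer: keep the maximum of each non-overlapping window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# One convolution + subsampling stage, as in the alternating CNN layout.
signal = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, -1.0]
features = maxpool1d(relu(conv1d(signal, [1.0, 0.0, -1.0])))
```

In a full network, several such stages would feed a small dense classifier; the inner loop contains nothing beyond multiplies and adds, which is what makes the low-cost hardware implementation plausible.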
S2 Open Access 2019
Quantum Simulators: Architectures and Opportunities

E. Altman, K. Brown, Giuseppe Carleo et al.

Quantum simulators are a promising technology on the spectrum of quantum devices from specialized quantum experiments to universal quantum computers. These quantum devices utilize entanglement and many-particle behaviors to explore and solve hard scientific, engineering, and computational problems. Rapid development over the last two decades has produced more than 300 quantum simulators in operation worldwide using a wide variety of experimental platforms. Recent advances in several physical architectures promise a golden age of quantum simulators ranging from highly optimized special purpose simulators to flexible programmable devices. These developments have enabled a convergence of ideas drawn from fundamental physics, computer science, and device engineering. They have strong potential to address problems of societal importance, ranging from understanding vital chemical processes, to enabling the design of new materials with enhanced performance, to solving complex computational problems. It is the position of the community, as represented by participants of the NSF workshop on "Programmable Quantum Simulators," that investment in a national quantum simulator program is a high priority in order to accelerate the progress in this field and to result in the first practical applications of quantum machines. Such a program should address two areas of emphasis: (1) support for creating quantum simulator prototypes usable by the broader scientific community, complementary to the present universal quantum computer effort in industry; and (2) support for fundamental research carried out by a blend of multi-investigator, multi-disciplinary collaborations with resources for quantum simulator software, hardware, and education.

463 citations en Computer Science, Physics
S2 Open Access 2019
Commons

Myungsun Kim

Robotic motion control methods and Programmable Logic Controllers (PLCs) are critical in engineering automation and process control applications. In most manufacturing and automation processes, robots are used for moving parts and are controlled by industrial PLCs. Proper integration of external I/O devices, sensors, and actuating motors with PLC input and output cards is very important for running the process smoothly without faults or safety concerns. Most traditional electrical and computer engineering (ECE) programs offer extensive motion theory and controls but little hands-on exposure to PLCs, which are the main industrial controllers. This paper provides a framework for a hands-on project integrating PLCs into robot arm motion control, troubleshooting, and testing of real sensors and motors in PLC experiments, complementing the virtual calculations and theory. This PLC and robot arm motion control integration concept was introduced and tested in a 600-level graduate capstone project class. By the end of the semester-long class, the students used their PLC hardware and software skills to wire a robot arm's sensing elements and actuating motors to pick and place objects from one location to a bin. The assessment demonstrated that the course learning objectives were met.

S2 Open Access 2020
MLIR: A Compiler Infrastructure for the End of Moore's Law

Chris Lattner, J. Pienaar, M. Amini et al.

This work presents MLIR, a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and aid in connecting existing compilers together. MLIR facilitates the design and implementation of code generators, translators, and optimizers at different levels of abstraction and across application domains, hardware targets, and execution environments. The contributions of this work include (1) a discussion of MLIR as a research artifact, built for extension and evolution, identifying the challenges and opportunities posed by this novel design point in design, semantics, optimization specification, system, and engineering; and (2) an evaluation of MLIR as a generalized infrastructure that reduces the cost of building compilers, describing diverse use cases to show research and educational opportunities for future programming languages, compilers, execution environments, and computer architecture. The paper also presents the rationale for MLIR, its original design principles, structures, and semantics.

305 citations en Computer Science
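The multi-level lowering idea that MLIR generalizes can be illustrated with a toy rewrite pass. None of this is MLIR's actual API; the dialect and op names are invented for illustration:

```python
# Toy illustration of progressive lowering between abstraction levels.
# Ops are (dialect, name, args) tuples; a pass rewrites high-level ops
# into lower-level primitives while leaving already-low ops untouched.

def lower_highlevel(ops):
    """Rewrite 'hl' dialect ops into 'll' dialect primitives."""
    lowered = []
    for dialect, name, args in ops:
        if dialect == "hl" and name == "matmul":
            # One high-level op expands into a loop-nest marker plus
            # primitive multiply and accumulate ops.
            lowered.append(("ll", "loop_nest", args))
            lowered.append(("ll", "mul", args))
            lowered.append(("ll", "add", args))
        else:
            lowered.append((dialect, name, args))
    return lowered

program = [("hl", "matmul", ("A", "B")), ("ll", "store", ("C",))]
lowered = lower_highlevel(program)
```

Real MLIR keeps many such dialects coexisting in one IR and applies pattern-based rewrites between them; the sketch only conveys the shape of one lowering step.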
S2 Open Access 2021
Commercial applications of quantum computing

Francesco Bova, Avi Goldfarb, R. Melko

Despite the scientific and engineering challenges facing the development of quantum computers, considerable progress is being made toward applying the technology to commercial applications. In this article, we discuss the solutions that some companies are already building using quantum hardware. Framing these as examples of combinatorics problems, we illustrate their application in four industry verticals: cybersecurity, materials and pharmaceuticals, banking and finance, and advanced manufacturing. While quantum computers are not yet available at the scale needed to solve all of these combinatorics problems, we identify three types of near-term opportunities resulting from advances in quantum computing: quantum-safe encryption, material and drug discovery, and quantum-inspired algorithms.

177 citations en Medicine
DOAJ Open Access 2025
Leading Degree: A Metric for Model Performance Evaluation and Hyperparameter Tuning in Deep Learning-Based Side-Channel Analysis

Junfan Zhu, Jiqiang Lu

Side-channel analysis benefits greatly from deep learning techniques, which help attackers recover the secret key with fewer attack traces than before, but precisely measuring deep learning model performance, so as to obtain a high-performance model, remains a problem. Commonly used deep learning evaluation metrics such as accuracy and precision cannot meet this demand because they deviate in the side-channel setting, while classical side-channel metrics such as guessing entropy, success rate, and TGE1 are not generic, because they effectively evaluate model performance in only one of two situations (whether or not a model manages to recover the secret key with the given attack traces), and not efficient, because they must be run multiple times to counteract randomness. To obtain an effective, generic side-channel evaluation metric, we investigate the deterministic component of power consumption and find that the elements of the score vector under a key approximately follow a linearly transformed chi-square distribution, and that some wrong key hypotheses, usually those with top scores, greatly assist model performance evaluation. We then propose a new metric called Leading Degree (LD), together with a simplified version, LD-simplified, for measuring model performance. LD offers similar accuracy but much better generality and efficiency than the classical side-channel benchmark metric TGE1, and similar generality and efficiency but significantly better accuracy than recently proposed side-channel metrics such as Label Correlation and Cross Entropy Ratio. LD/LD-simplified can easily be deployed in early stopping to avoid overfitting, and we build a bridge between LD/LD-simplified and TGE1 by observing an exponential relationship between them, which significantly shortens the time needed to estimate TGE1.
Finally, we apply LD as a reward function to better solve the reward-function design problem in reinforcement learning-based hyperparameter tuning for side-channel analysis, and obtain better CNN model architectures than the state-of-the-art models produced by previous hyperparameter tuning methods.

Computer engineering. Computer hardware, Information technology
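The precise definition of LD is not reproduced in the abstract; as context, the classical key-rank and guessing-entropy computation it is benchmarked against can be sketched as follows (toy scores and function names are illustrative):

```python
def key_rank(scores, correct_key):
    """Rank of the correct key among all hypotheses (0 = key recovered).
    scores[k] is the accumulated score for key hypothesis k; higher is better."""
    ranking = sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)
    return ranking.index(correct_key)

def guessing_entropy(score_vectors, correct_key):
    """Average rank of the correct key over repeated attacks."""
    ranks = [key_rank(s, correct_key) for s in score_vectors]
    return sum(ranks) / len(ranks)

# Two toy attacks over 4 key hypotheses; key 2 is the correct one.
attacks = [[0.1, 0.3, 0.9, 0.2], [0.4, 0.5, 0.3, 0.1]]
ge = guessing_entropy(attacks, correct_key=2)
```

The repeated attacks needed to average out randomness are exactly the inefficiency the abstract attributes to these classical metrics.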
DOAJ Open Access 2024
Realization of Self‐Rectifying and Self‐Powered Resistive Random‐Access Memory Memristor Using [001]‐Oriented NaNbO3 Film Deposited on Sr2Nb3O10 Nanosheet at Low Temperatures

In-Su Kim, Bumjoo Kim, Seok-June Chae et al.

[001]‐oriented NaNbO3 films are deposited on Sr2Nb3O10/TiN/SiO2/Si substrates at 300 °C. The Sr2Nb3O10 nanosheets are used as a template to form crystalline NaNbO3 films at low temperature. The NaNbO3 films deposited on one Sr2Nb3O10 monolayer exhibit a bipolar switching curve due to the construction and destruction of oxygen vacancy filaments. Because the Sr2Nb3O10 monolayer does not act as an insulating layer, the film does not exhibit self‐rectifying properties. Self‐rectifying properties are observed in the NaNbO3 memristor, which forms on two Sr2Nb3O10 monolayers that act as tunnel barriers in the memristor. The memristor exhibits extensive rectification and on/off ratios of 48 and 15.7, respectively. Tunneling is the current conduction mechanism of the device in the low‐resistance state, and Schottky emission and tunneling are responsible for the conduction mechanism in the high‐resistance state at low and high voltages, respectively. The piezoelectric nanogenerator produced using the [001]‐oriented NaNbO3 film generates high voltage (1.8 V) and power (3.2 μW). Furthermore, endurance of the resistive random‐access memory and nonlinear transmission characteristics of the biological synapse are accomplished in the NaNbO3 memristor powered by the NaNbO3 nanogenerator. Therefore, the [001]‐oriented crystalline NaNbO3 film formed at 300 °C may be utilized for self‐rectifying and self‐powered artificial synapses.

Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
DOAJ Open Access 2024
Impact of chess on socioemotional skills: a systematic review

Guissella Alexandra Zambrano Mera, Indira Lorena Caiza Hervas, Flora Doraliza Foyaín Palma et al.

In a context that values objectivity and evidence, it is crucial to foster critical research competencies, especially in higher education. This educational stage is fundamental for cultivating critical thinking and systematic inquiry. In this regard, chess stands out as a valuable educational tool, capable of significantly impacting the development of cognitive and socioemotional skills. Far from being a mere game, chess stimulates competencies vital to both academic and personal success, including memory, attention, and problem solving, as well as socioemotional skills such as empathy, patience, and decision-making under pressure. The purpose of this systematic review is to compile and analyze the evidence on the effect of chess on students' socioemotional skills. Following PRISMA guidelines, an exhaustive study of empirical and theoretical research from the last two decades was carried out, offering a broad perspective on how chess can contribute to the comprehensive development of these students. Key aspects of current scientific production are also examined, including study characteristics, theoretical frameworks, methodologies, and findings, along with recommendations for future research and educational applications in this area.

Computer engineering. Computer hardware
arXiv Open Access 2024
Resource-Adaptive Successive Doubling for Hyperparameter Optimization with Large Datasets on High-Performance Computing Systems

Marcel Aach, Rakesh Sarma, Helmut Neukirchen et al.

On High-Performance Computing (HPC) systems, several hyperparameter configurations can be evaluated in parallel to speed up the Hyperparameter Optimization (HPO) process. State-of-the-art HPO methods follow a bandit-based approach built on successive halving, where the final performance of a configuration is estimated from a lower-fidelity (less than fully trained) performance metric, and more promising configurations are assigned more resources over time. Frequently, the number of epochs is treated as the resource, letting more promising configurations train longer. Another option is to use the number of workers as the resource and directly allocate more workers to more promising configurations via data-parallel training. This article proposes a novel Resource-Adaptive Successive Doubling Algorithm (RASDA), which combines a resource-adaptive successive doubling scheme with the plain Asynchronous Successive Halving Algorithm (ASHA). The scalability of this approach is shown on up to 1,024 Graphics Processing Units (GPUs) on modern HPC systems. It is applied to different types of Neural Networks (NNs) trained on large datasets from the Computer Vision (CV), Computational Fluid Dynamics (CFD), and Additive Manufacturing (AM) domains, where performing more than one full training run is usually infeasible. Empirical results show that RASDA outperforms ASHA by a factor of up to 1.9 with respect to runtime. At the same time, the solution quality of the final ASHA models is maintained or even surpassed by RASDA's implicit batch-size scheduling. With RASDA, systematic HPO is applied to a terabyte-scale scientific dataset for the first time in the literature, enabling efficient optimization of complex models on massive scientific data. The implementation of RASDA is available at https://github.com/olympiquemarcel/rasda

en cs.LG, cs.DC
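RASDA's resource-adaptive doubling scheme is not spelled out in the abstract; the plain successive halving idea that ASHA builds on can be sketched as follows (the objective function and config fields are invented for illustration):

```python
def successive_halving(configs, evaluate, budget=1, eta=2, rounds=3):
    """Plain successive halving: evaluate all configs at a small budget,
    keep the best 1/eta fraction, and grow the budget each round so that
    promising configurations receive more resources over time."""
    survivors = list(configs)
    for _ in range(rounds):
        if len(survivors) <= 1:
            break
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Toy objective: validation score improves with budget, scaled by a
# per-config quality value (higher is better).
def evaluate(config, budget):
    return config["quality"] * (1 - 0.5 ** budget)

configs = [{"id": i, "quality": q} for i, q in enumerate([0.2, 0.9, 0.5, 0.7])]
best = successive_halving(configs, evaluate)
```

ASHA relaxes the synchronous rounds shown here so workers never wait on stragglers, and RASDA additionally grows the number of data-parallel workers per surviving configuration; neither refinement changes the basic keep-the-top-fraction loop above.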
arXiv Open Access 2024
Robust and optimal loading of general classical data into quantum computers

Xiao-Ming Zhang

As standard data-loading processes, quantum state preparation and block-encoding are critical and necessary for quantum computing applications, including quantum machine learning, Hamiltonian simulation, and many others. Yet existing protocols suffer from poor robustness under device imperfection, limiting their practicality for real-world applications. Here, this limitation is overcome with a fan-in process designed in a tree-like bucket-brigade architecture. It suppresses error propagation between different branches, exponentially improving robustness compared to existing depth-optimal methods. Moreover, our approach simultaneously achieves state-of-the-art fault-tolerant circuit depth, gate count, and STA. As an example application, we show that for quantum simulation of geometrically local Hamiltonians, the code distance of each logical qubit can potentially be reduced exponentially using our technique. We believe that our technique can significantly enhance the power of quantum computing in both the near-term and fault-tolerant regimes.

en quant-ph, cs.CC
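The paper's fan-in bucket-brigade construction is not reproduced here; the underlying task of quantum state preparation, loading a classical vector as the amplitudes of a normalized state, can be sketched classically (the function name is illustrative):

```python
import math

def amplitude_encode(data):
    """Normalize a classical vector so it can serve as the amplitude
    vector of an n-qubit state (length must be a power of two)."""
    n = len(data)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    norm = math.sqrt(sum(x * x for x in data))
    return [x / norm for x in data]

# A length-4 vector becomes a 2-qubit state; squared amplitudes sum to 1.
state = amplitude_encode([3.0, 0.0, 4.0, 0.0])
```

A state-preparation circuit must realize this amplitude vector with physical gates, and it is the robustness of that circuit under device imperfection that the paper's tree-like construction targets.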
S2 Open Access 2023
Autonomous error correction of a single logical qubit using two transmons

Ziqian Li, Tanay Roy, David Rodríguez Pérez et al.

Large-scale quantum computers will inevitably need quantum error correction to protect information against decoherence. Traditional error correction typically requires many qubits, along with high-efficiency error syndrome measurement and real-time feedback. Autonomous quantum error correction instead uses steady-state bath engineering to perform the correction in a hardware-efficient manner. In this work, we develop a new autonomous quantum error correction scheme that actively corrects single-photon loss and passively suppresses low-frequency dephasing, and we demonstrate an important experimental step towards its full implementation with transmons. Compared to uncorrected encoding, improvements are experimentally witnessed for the logical zero, one, and superposition states. Our results show the potential of implementing hardware-efficient autonomous quantum error correction to enhance the reliability of a transmon-based quantum information processor. Autonomous quantum error correction protects quantum systems against decoherence through engineered dissipation. Here the authors introduce the Star code, which actively corrects single-photon loss and passively suppresses low-frequency dephasing and implement it in a two-transmon device.

33 citations en Medicine, Physics
S2 Open Access 2021
Hamiltonian Engineering with Multicolor Drives for Fast Entangling Gates and Quantum Crosstalk Cancellation.

K. X. Wei, E. Magesan, I. Lauer et al.

Quantum computers built with superconducting artificial atoms already stretch the limits of their classical counterparts. While the lowest energy states of these artificial atoms serve as the qubit basis, the higher levels are responsible for both a host of attractive gate schemes as well as generating undesired interactions. In particular, when coupling these atoms to generate entanglement, the higher levels cause shifts in the computational levels that lead to unwanted ZZ quantum crosstalk. Here, we present a novel technique to manipulate the energy levels and mitigate this crosstalk with simultaneous off-resonant drives on coupled qubits. This breaks a fundamental deadlock between qubit-qubit coupling and crosstalk. In a fixed-frequency transmon architecture with strong coupling and crosstalk cancellation, additional cross-resonance drives enable a 90 ns CNOT with a gate error of (0.19±0.02)%, while a second set of off-resonant drives enables a novel CZ gate. Furthermore, we show a definitive improvement in circuit performance with crosstalk cancellation over seven qubits, demonstrating the scalability of the technique. This Letter paves the way for superconducting hardware with faster gates and greatly improved multiqubit circuit fidelities.

92 citations en Medicine, Physics
DOAJ Open Access 2023
A Reinforcement Learning Approach for Scheduling Problems with Improved Generalization through Order Swapping

Deepak Vivekanandan, Samuel Wirth, Patrick Karlbauer et al.

The scheduling of production resources (such as assigning jobs to machines) plays a vital role in the manufacturing industry, not only for saving energy but also for increasing overall efficiency. Among the different job scheduling problems, the Job Shop Scheduling Problem (JSSP) is addressed in this work. JSSP falls into the category of NP-hard Combinatorial Optimization Problems (COPs), for which solving the problem through exhaustive search is infeasible. Simple heuristics such as First-In, First-Out and Largest Processing Time First, and metaheuristics such as tabu search, are often adopted to solve the problem by truncating the search space. These methods become inefficient for large problem sizes, as they are either far from the optimum or time-consuming. In recent years, research on using Deep Reinforcement Learning (DRL) to solve COPs has gained interest and shown promising results in terms of solution quality and computational efficiency. In this work, we provide a novel DRL approach to solving the JSSP, examining the objectives of generalization and solution effectiveness. In particular, we employ the Proximal Policy Optimization (PPO) algorithm, which adopts the policy-gradient paradigm and is found to perform well in the constrained dispatching of jobs. We incorporate a new method called the Order Swapping Mechanism (OSM) into the environment to achieve better generalized learning of the problem. The performance of the presented approach is analyzed in depth using a set of available benchmark instances and by comparing our results with the work of other groups.

Computer engineering. Computer hardware
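The baseline dispatching heuristics the paper names, First-In, First-Out and Largest Processing Time First, can be sketched on a toy single-machine instance (the job data and completion-time objective are illustrative, not from the paper's benchmarks):

```python
def fifo(jobs):
    """First-In, First-Out: process jobs in arrival order."""
    return sorted(jobs, key=lambda j: j["arrival"])

def lpt(jobs):
    """Largest Processing Time First: longest jobs go first."""
    return sorted(jobs, key=lambda j: j["duration"], reverse=True)

def total_completion_time(order):
    """Sum of job completion times on one machine (a common objective)."""
    t, total = 0, 0
    for job in order:
        t += job["duration"]
        total += t
    return total

jobs = [
    {"id": "A", "arrival": 0, "duration": 5},
    {"id": "B", "arrival": 1, "duration": 2},
    {"id": "C", "arrival": 2, "duration": 4},
]
fifo_cost = total_completion_time(fifo(jobs))
lpt_cost = total_completion_time(lpt(jobs))
```

Such rules truncate the search space by committing to one fixed ordering; a DRL dispatcher like the paper's PPO agent instead chooses the next job state by state, which is why it can outperform any single fixed rule.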

Page 39 of 425723