Results for "Computer engineering. Computer hardware"

Showing 20 of ~8,514,608 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

S2 Open Access 2015
Trends in extreme learning machines: A review

Gao Huang, G. Huang, Shiji Song et al.

Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
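The training procedure this review surveys is compact enough to sketch: the hidden-layer weights are drawn at random and never trained, and only the output weights are solved for in closed form. A minimal NumPy sketch (network size and the toy regression task are illustrative, not taken from the paper):

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Basic ELM: random hidden layer, closed-form least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # input weights: random, never trained
    b = rng.normal(size=n_hidden)                # hidden biases: random, never trained
    H = np.tanh(X @ W + b)                       # hidden-layer feature matrix
    beta = np.linalg.pinv(H) @ y                 # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a 1-D sine curve with a single random hidden layer
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

The single pseudoinverse solve is what makes ELM training fast relative to iterative backpropagation, which is the efficiency the abstract highlights.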

1692 citations en Medicine, Computer Science
S2 Open Access 2021
Emerging GaN technologies for power, RF, digital, and quantum computing applications: Recent advances and prospects

Koon Hoo Teo, Yuhao Zhang, N. Chowdhury et al.

GaN technology is not only gaining traction in power and RF electronics but is rapidly expanding into other application areas including digital and quantum computing electronics. This paper provides a glimpse of future GaN device technologies and advanced modeling approaches that can push the boundaries of these applications in terms of performance and reliability. While GaN power devices have recently been commercialized in the 15-900 V classes, new GaN devices are greatly desirable to explore both the higher-voltage and ultra-low-voltage power applications. Moving into the RF domain, ultra-high frequency GaN devices are being used to implement digitized power amplifier circuits, and further advances using hardware-software co-design approach can be expected. On the horizon is the GaN CMOS technology, a key missing piece to realize the full-GaN platform with integrated digital, power and RF electronics technologies. Although currently a challenge, high-performance p-type GaN technology will be crucial to realize high-performance GaN CMOS circuits. Due to its excellent transport characteristics and ability to generate free carriers via polarization doping, GaN is expected to be an important technology for ultra-low temperature and quantum computing electronics. Finally, given the increasing cost of hardware prototyping of new devices and circuits, the use of high-fidelity device models and data-driven modeling approaches for technology-circuit co-design are projected to be the trends of the future. In this regard, physically inspired, mathematically robust, less computationally taxing, and predictive modeling approaches are indispensable. With all these and future efforts, we envision GaN to become the next Si for electronics.

Journal of Applied Physics, 2021

257 citations en
S2 Open Access 2021
Commercial applications of quantum computing

Francesco Bova, Avi Goldfarb, R. Melko

Despite the scientific and engineering challenges facing the development of quantum computers, considerable progress is being made toward applying the technology to commercial applications. In this article, we discuss the solutions that some companies are already building using quantum hardware. Framing these as examples of combinatorics problems, we illustrate their application in four industry verticals: cybersecurity, materials and pharmaceuticals, banking and finance, and advanced manufacturing. While quantum computers are not yet available at the scale needed to solve all of these combinatorics problems, we identify three types of near-term opportunities resulting from advances in quantum computing: quantum-safe encryption, material and drug discovery, and quantum-inspired algorithms.

177 citations en Medicine
DOAJ Open Access 2026
Designing High‐Entropy Alloys With Low Stacking Fault Energy Through Interpretable Machine Learning

Shuai Nie, Yixuan He, Haoxiang Liu et al.

Low stacking fault energy (SFE) CoCrFeNiMn‐based high entropy alloys (HEAs) have garnered widespread attention due to their excellent mechanical properties. These outstanding mechanical properties result from multiple deformation mechanisms during tensile deformation, such as stacking faults, deformation twinning, and martensitic transformation. However, the vast and complex compositional space presents a significant challenge for the design of low SFE HEAs. To address this issue, this study developed an interpretable machine learning (ML) ensemble algorithm framework that integrates three high‐accuracy ML models (multilayer perceptron regressor, support vector regressor, extreme gradient boosting regressor, R2 > 0.9). In the alloy composition screening stage, the Valence Electron Concentration (VEC) and the proposed ML scoring parameter (Score = A*Mean + B*Std) were employed to constrain the phase composition and screen for low SFE alloy compositions. Ultimately, multiple No‐BCC phase CoCrFeNiMn‐based HEAs with twinning‐induced plasticity/transformation‐induced plasticity effects were successfully designed. To overcome the challenge of insufficient model accuracy in data‐driven design, correlation‐based and importance‐based feature selection methods were combined. This approach efficiently processed additional descriptors generated from atomic compositions, improving model accuracy by 13%. Furthermore, the Shapley additive explanation method revealed the influence of individual elements on the SFE, providing valuable guidance for designing low‐SFE HEAs.
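The screening score in the abstract combines the ensemble's mean prediction with its standard deviation (Score = A*Mean + B*Std). A toy NumPy illustration of how such a score ranks candidates; the prediction values, the weights A and B, and the lower-is-better reading are assumptions for the example, not values from the paper:

```python
import numpy as np

# Hypothetical SFE predictions (mJ/m^2) from three models for four candidate alloys,
# standing in for the MLP / SVR / XGBoost regressors of the paper's ensemble.
preds = np.array([
    [18.0, 20.0, 19.0],
    [35.0, 36.0, 34.0],
    [22.0, 30.0, 14.0],   # large inter-model disagreement -> high Std
    [19.5, 20.5, 20.0],
])

A, B = 1.0, 1.0               # illustrative weights
mean = preds.mean(axis=1)
std = preds.std(axis=1)
score = A * mean + B * std    # penalizes both high predicted SFE and model disagreement
order = np.argsort(score)     # candidates ranked best (lowest score) first
```

With these numbers, candidate 2 is demoted by its high Std even though its mean is moderate, which is the point of folding ensemble uncertainty into the score.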

Materials of engineering and construction. Mechanics of materials, Computer engineering. Computer hardware
DOAJ Open Access 2026
Area-Efficient Polynomial Multiplication Hardware Implementation for Lattice-based Cryptography

XIE Jiaxing, PU Jinwei, FANG Weitian, ZHENG Xin, XIONG Xiaoming

Lattice-based post-quantum cryptography algorithms demonstrate significant potential in public-key cryptography. A key performance bottleneck in hardware implementation is the computational complexity of polynomial multiplication. To address the problems of low area efficiency and memory mapping conflicts encountered in polynomial multiplication, this study proposes a polynomial multiplication structure based on Partial Number Theoretic Transform (PNTT) and a Coefficient Crossover Operation (CCO). First, the last round of the Number Theoretic Transform (NTT), coefficient multiplication, and the first round of the Inverse Number Theoretic Transform (INTT) are merged into a CCO, reducing two rounds of butterfly operations and 50% of the twiddle factor storage space; consequently, memory access overhead is lowered. Second, lightweight hardware is employed to implement modular addition, modular subtraction, division by two, and enhanced Barrett-based modular multiplication, effectively reducing the logical resource overhead. Simultaneously, the study designs a reconfigurable Processing Element (PE) array using pipeline and time-sharing multiplexing techniques, allowing each operation unit to be efficiently reconnected under different transformations. In addition, the study introduces coefficient grouping storage and special memory mapping methods in the memory mapping scheme. The efficient scheduling of data and twiddle factors is achieved by leveraging address-mapping rules, avoiding memory mapping conflicts, and achieving low-cost memory access. Finally, a First-In First-Out (FIFO) structure is employed for data reorganization, which enhances data access efficiency. Experimental results show that the proposed polynomial multiplication structure reduces the Area-Time Product (ATP) of Slices and Digital Signal Processor (DSP) by over 21.7% and 61.1%, respectively, compared to existing works, and achieves higher area efficiency.
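Barrett reduction, which the paper implements in an enhanced hardware-friendly form, replaces division by the modulus with a precomputed multiply and shift. A textbook software sketch of the idea (the paper's enhanced variant differs; the Kyber modulus 3329 is used only as a representative lattice-crypto parameter):

```python
def barrett_setup(q, k=None):
    """Precompute the Barrett constant m = floor(2^k / q)."""
    if k is None:
        k = 2 * q.bit_length()
    return k, (1 << k) // q

def barrett_reduce(x, q, k, m):
    """Reduce x (0 <= x < q^2) modulo q without a divider circuit."""
    t = (x * m) >> k          # approximate quotient: one multiply + one shift
    r = x - t * q             # remainder estimate, slightly too large at worst
    while r >= q:             # at most a couple of correction subtractions
        r -= q
    return r

q = 3329                      # Kyber's NTT-friendly modulus (illustrative choice)
k, m = barrett_setup(q)
a, b = 3001, 2718
assert barrett_reduce(a * b, q, k, m) == (a * b) % q
```

This is why the structure suits butterfly units: every modular product in an NTT round reduces to integer multiplies, shifts, and subtractions, all cheap in logic.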

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2026
QuantumX: an experience for the consolidation of Quantum Computing and Quantum Software Engineering as an emerging discipline

Juan M. Murillo, Ignacio García Rodríguez de Guzmán, Enrique Moguel et al.

The first edition of the QuantumX track, held within the XXIX Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2025), brought together leading Spanish research groups working at the intersection of Quantum Computing and Software Engineering. The event served as a pioneering forum to explore how principles of software quality, governance, testing, orchestration, and abstraction can be adapted to the quantum paradigm. The presented works spanned diverse areas (from quantum service engineering and hybrid architectures to quality models, circuit optimization, and quantum machine learning), reflecting the interdisciplinary nature and growing maturity of Quantum Computing and Quantum Software Engineering. The track also fostered community building and collaboration through the presentation of national and Ibero-American research networks such as RIPAISC and QSpain, and through dedicated networking sessions that encouraged joint initiatives. Beyond reporting on the event, this article provides a structured synthesis of the contributions presented at QuantumX, identifies common research themes and engineering concerns, and outlines a set of open challenges and future directions for the advancement of Quantum Software Engineering. This first QuantumX track established the foundation for a sustained research community and positioned Spain as an emerging contributor to the European and global quantum software ecosystem.

en cs.SE
S2 Open Access 2021
Prospects and applications of photonic neural networks

Chaoran Huang, V. Sorger, M. Miscuglio et al.

Neural networks have enabled applications in artificial intelligence through machine learning and neuromorphic computing. Software implementations of neural networks on conventional computers that have separate memory and processor (and that operate sequentially) are limited in speed and energy efficiency. Neuromorphic engineering aims to build processors in which hardware mimics neurons and synapses in the brain for distributed and parallel processing. Neuromorphic engineering enabled by photonics (optical physics) can offer sub-nanosecond latencies and high bandwidth with low energies to extend the domain of artificial intelligence and neuromorphic computing applications to machine learning acceleration, nonlinear programming, intelligent signal processing, etc. Photonic neural networks have been demonstrated on integrated platforms and in free-space optics depending on the class of applications being targeted. Here, we discuss the prospects and demonstrated applications of these photonic neural networks.

165 citations en Computer Science, Physics
S2 Open Access 2024
Awareness and level of digital literacy among students receiving health-based education

Alp Aydınlar, Arda Mavi, Ece Kütükçü et al.

Being digitally literate allows health-based science students to access reliable, up-to-date information efficiently and expands the capacity for continuous learning. Digital literacy facilitates effective communication and collaboration with other healthcare providers. It helps to navigate the ethical implications of using digital technologies and aids the use of digital tools in managing healthcare processes. Our aim in this study is to determine the digital literacy level and awareness of students receiving health-based education at our university and to pave the way for supporting the current curriculum with courses on digital literacy when necessary. Students from Acibadem University enrolled in undergraduate health-based programs of at least four years (School of Medicine, Nutrition and Dietetics, Nursing, Physiotherapy and Rehabilitation, Psychology, Biomedical Engineering, and Molecular Biology and Genetics) were included. The questionnaire consisted of 24 queries evaluating digital literacy in 7 fields: software and multimedia, hardware and technical problem solving, network and communication/collaboration, ethics, security, artificial intelligence (A.I.), and interest/knowledge. Two student groups representing all departments were invited for interviews according to the Delphi method. The survey was completed by 476 students. Female students reported less computer knowledge and less prior coding education. The Spearman correlation test showed weak positive correlations between years of study and the "software and multimedia," "ethics," and "interest and knowledge" domains, as well as the average score. Students from Nursing scored lowest in the query after those from the Nutrition and Dietetics department. The highest scores were obtained by Biomedical Engineering students, followed by the School of Medicine. Participants scored highest in the "network" and "A.I." domains and lowest in the "interest-knowledge" domain.
It is necessary to assess the computer skills of students who start health-based education and to shape the curriculum by determining which domains are weak. Creating an educational environment that fosters female students' digital knowledge is recommended. Elective courses across faculties may be offered to enable students to progress and discuss various digital literacy topics. The extent to which students benefit from the digital literacy-supported curriculum may then be evaluated. Thus, health-based university students are encouraged to acquire the computer skills required by today's clinical settings. This study was approved by the Acıbadem University and Acıbadem Healthcare Institutions Medical Research Ethics Committee (ATADEK) (11 November 2022, ATADEK registration: 2022-17-138). All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from the participants.

61 citations en Medicine
S2 Open Access 2020
Protecting a bosonic qubit with autonomous quantum error correction

Jeffrey M. Gertler, Brian Baker, Juliang Li et al.

To build a universal quantum computer from fragile physical qubits, effective implementation of quantum error correction (QEC) [1] is an essential requirement and a central challenge. Existing demonstrations of QEC are based on an active schedule of error-syndrome measurements and adaptive recovery operations [2-7] that are hardware intensive and prone to introducing and propagating errors. In principle, QEC can be realized autonomously and continuously by tailoring dissipation within the quantum system [1,8-14], but so far it has remained challenging to achieve the specific form of dissipation required to counter the most prominent errors in a physical platform. Here we encode a logical qubit in Schrödinger-cat-like multiphoton states [15] of a superconducting cavity, and demonstrate a corrective dissipation process that stabilizes an error-syndrome operator: the photon number parity. Implemented with continuous-wave control fields only, this passive protocol protects the quantum information by autonomously correcting single-photon-loss errors and boosts the coherence time of the bosonic qubit by over a factor of two. Notably, QEC is realized in a modest hardware setup with neither high-fidelity readout nor fast digital feedback, in contrast to the technological sophistication required for prior QEC demonstrations. Compatible with additional phase-stabilization and fault-tolerant techniques [16-18], our experiment suggests quantum dissipation engineering as a resource-efficient alternative or supplement to active QEC in future quantum computing architectures. A logical qubit encoded in multi-photon states of a superconducting cavity is protected with autonomous correction of certain quantum errors by tailoring the dissipation it is exposed to.

169 citations en Computer Science, Physics
DOAJ Open Access 2025
Exploring the future of privacy-preserving heart disease prediction: a fully homomorphic encryption-driven logistic regression approach

Vankamamidi S. Naresh, Sivaranjani Reddi

Homomorphic Encryption (HE) offers a revolutionary cryptographic approach to safeguarding privacy in machine learning (ML), especially in processing sensitive healthcare data. This study aims to address the critical issue of privacy-preserving heart disease prediction by developing a novel Homomorphic Encryption-Driven Logistic Regression (HELR) framework, leveraging the Cheon-Kim-Kim-Song (CKKS) encryption scheme. The framework was implemented using the TenSeal and Torch libraries and evaluated on encrypted heart disease datasets with varying polynomial degrees. The study’s design involved applying the HELR model to three healthcare datasets and comparing its performance with Support Vector Machines (SVM). The major findings revealed that the HELR model achieved high accuracy, within 1% to 3% of its non-HE counterpart, while maintaining competitive computational efficiency. Furthermore, the HELR framework demonstrated robust security against various privacy attacks, including poisoning, evasion, member inference, model inversion, and model extraction, at different ML stages. Notably, the HELR model outperformed SVM in terms of accuracy, showcasing its effectiveness for secure healthcare predictions. The results suggest that HE-enhanced models can offer secure, accurate predictions, paving the way for advancements in privacy-preserving healthcare analytics. However, the study identified limitations related to the computational overhead introduced by HE and the scalability of the model for large datasets. Future work will focus on optimizing encryption techniques and exploring parallel processing methods to address these challenges.
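A key constraint behind HELR-style designs is that CKKS ciphertexts support only additions and multiplications, so the sigmoid must be replaced by a low-degree polynomial. A plaintext sketch of that substitution (the weights and input are illustrative; the degree-3 coefficients are one widely used least-squares fit on roughly [-8, 8], not necessarily the paper's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_poly(x):
    # Degree-3 polynomial stand-in for the sigmoid: the only kind of
    # nonlinearity that can be evaluated on CKKS ciphertexts.
    return 0.5 + 0.197 * x - 0.004 * x ** 3

# Plaintext stand-in for encrypted inference with hypothetical model weights
w = np.array([0.8, -1.2, 0.5])
b = 0.1
x = np.array([1.0, 0.5, -0.3])
z = w @ x + b                 # inner product: additions and multiplications only
approx_error = abs(sigmoid(z) - sigmoid_poly(z))
```

The 1% to 3% accuracy gap the abstract reports relative to the non-HE model is largely the cost of approximations like this one, traded against end-to-end ciphertext processing.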

Computer engineering. Computer hardware, Information technology
arXiv Open Access 2025
Robustness Tokens: Towards Adversarial Robustness of Transformers

Brian Pulfer, Yury Belousov, Slava Voloshynovskiy

Recently, large pre-trained foundation models have become widely adopted by machine learning practitioners for a multitude of tasks. Given that such models are publicly available, relying on their use as backbone models for downstream tasks might result in high vulnerability to adversarial attacks crafted with the same public model. In this work, we propose Robustness Tokens, a novel approach specific to the transformer architecture that fine-tunes a few additional private tokens with low computational requirements instead of tuning model parameters as done in traditional adversarial training. We show that Robustness Tokens make Vision Transformer models significantly more robust to white-box adversarial attacks while also retaining the original downstream performances.

en cs.LG, cs.AI
arXiv Open Access 2025
A new metric for evaluating the performance and complexity of computer programs: A new approach to the traditional ways of measuring the complexity of algorithms and estimating running times

Rares Folea, Emil-Ioan Slusanschi

This paper presents a refined complexity calculus model: r-Complexity, a new asymptotic notation that offers better complexity feedback for similar programs than the traditional Bachmann-Landau notation, providing subtle insights even for algorithms that are part of the same conventional complexity class. The architecture-dependent metric represents an enhancement that provides better sensitivity with respect to discrete analysis.

arXiv Open Access 2025
A comprehensive review of sensor technologies, instrumentation, and signal processing solutions for low-power Internet of Things systems with mini-computing devices

Alexandros Gazis, Ioannis Papadongonas, Athanasios Andriopoulos et al.

This article provides a comprehensive overview of sensors commonly used in low-cost, low-power systems, focusing on key concepts such as Internet of Things (IoT), Big Data, and smart sensor technologies. It outlines the evolving roles of sensors, emphasizing their characteristics, technological advancements, and the transition toward "smart sensors" with integrated processing capabilities. The article also explores the growing importance of mini-computing devices in educational environments. These devices provide cost-effective and energy-efficient solutions for system monitoring, prototype validation, and real-world application development. By interfacing with wireless sensor networks and IoT systems, mini-computers enable students and researchers to design, test, and deploy sensor-based systems with minimal resource requirements. Furthermore, this article examines the most widely used sensors, detailing their properties and modes of operation to help readers understand how sensor systems function. The aim of this study is to provide an overview of the most suitable sensors for various applications by explaining their uses and operations in simple terms. This clarity will assist researchers in selecting the appropriate sensors for educational and research purposes or understanding why specific sensors were chosen, along with their capabilities and possible limitations. Ultimately, this research seeks to equip future engineers with the knowledge and tools needed to integrate cutting-edge sensor networks, IoT, and Big Data technologies into scalable, real-world solutions.

en eess.SP, cs.IT
arXiv Open Access 2025
Knowledge-Based Aerospace Engineering -- A Systematic Literature Review

Tim Wittenborg, Ildar Baimuratov, Ludvig Knöös Franzén et al.

The aerospace industry operates at the frontier of technological innovation while maintaining high standards regarding safety and reliability. In this environment, with an enormous potential for re-use and adaptation of existing solutions and methods, Knowledge-Based Engineering (KBE) has been applied for decades. The objective of this study is to identify and examine state-of-the-art knowledge management practices in the field of aerospace engineering. Our contributions include: 1) A SWARM-SLR of over 1,000 articles with qualitative analysis of 164 selected articles, supported by two aerospace engineering domain expert surveys. 2) A knowledge graph of over 700 knowledge-based aerospace engineering processes, software, and data, formalized in the interoperable Web Ontology Language (OWL) and mapped to Wikidata entries where possible. The knowledge graph is represented on the Open Research Knowledge Graph (ORKG), and an aerospace Wikibase, for reuse and continuation of structuring aerospace engineering knowledge exchange. 3) Our resulting intermediate and final artifacts of the knowledge synthesis, available as a Zenodo dataset. This review sets a precedent for structured, semantic-based approaches to managing aerospace engineering knowledge. By advancing these principles, research, and industry can achieve more efficient design processes, enhanced collaboration, and a stronger commitment to sustainable aviation.

en cs.CE
S2 Open Access 2017
A Leaky‐Integrate‐and‐Fire Neuron Analog Realized with a Mott Insulator

P. Stoliar, J. Tranchant, B. Corraze et al.

During the last half century, the tremendous development of computers based on the von Neumann architecture has led to the information technology revolution. However, von Neumann computers are outperformed by the mammal brain in numerous data‐processing applications such as pattern recognition and data mining. Neuromorphic engineering aims to mimic brain‐like behavior through the implementation of artificial neural networks based on the combination of a large number of artificial neurons massively interconnected by an even larger number of artificial synapses. In order to effectively implement artificial neural networks directly in hardware, it is mandatory to develop artificial neurons and synapses. A promising advance has been made in recent years with the introduction of the components called memristors that might implement synaptic functions. In contrast, the advances in artificial neurons have consisted of the implementation of silicon‐based circuits. However, so far, a single‐component artificial neuron that would bring an improvement comparable to what memristors have brought to synapses is still missing. Here, a simple two‐terminal device is introduced, which can implement the basic leaky‐integrate‐and‐fire functions of spiking neurons. Remarkably, this behavior is realized in strongly correlated narrow‐gap Mott insulators subjected to electric pulsing.
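The leaky-integrate-and-fire behavior the device reproduces is simple to state in code: the membrane potential leaks toward rest, integrates its input, and fires and resets on crossing a threshold. A minimal simulation sketch (time constant, threshold, and drive current are illustrative values, not device parameters):

```python
import numpy as np

def lif(i_in, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = (-v + i) / tau, spike and reset at threshold."""
    v, spikes, trace = v_reset, [], []
    for t, i in enumerate(i_in):
        v += dt * (-v + i) / tau      # leaky integration (forward Euler step)
        if v >= v_th:                 # fire ...
            spikes.append(t)
            v = v_reset               # ... and reset
        trace.append(v)
    return spikes, trace

# Constant drive above threshold produces regular, periodic spiking
spikes, trace = lif(np.full(200, 1.5))
```

In the Mott-insulator device, the same three ingredients (integration, leak, and a firing event with reset) emerge from the material's physics under pulsing rather than from a silicon circuit.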

243 citations en Materials Science
DOAJ Open Access 2024
Application‐Oriented Modeling of Soft Actuator Ionic Polymer–Metal Composites: A Review

Jingang Jiang, Chuan Lin, Shuainan Xu et al.

Compared to conventional actuators, the soft ionic polymer–metal composite (IPMC) actuator has significant advantages in specific applications, and the mathematical model of IPMC actuators is essential to comprehending and applying IPMCs. Due to the inherent characteristics of IPMCs and the impact of the manufacturing and measurement processes, it is challenging to develop a reliable model. This article provides a comprehensive overview of the developments in IPMC actuator modeling. In particular, three types of models are examined and contrasted: the nonphysical identification model, the partial‐physical model, and the physical‐based model. In order to comprehend the current state of numerous IPMC actuator models, the characteristics, evolution, and functions of each type of model are discussed. Afterward, the evolution of the IPMC actuators’ applications is discussed. Finally, promising research directions for IPMC actuator models are identified that can more effectively facilitate the development of IPMC‐based devices.

Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
arXiv Open Access 2024
A Survey on Hardware Accelerators for Large Language Models

Christoforos Kachris

Large Language Models (LLMs) have emerged as powerful tools for natural language processing tasks, revolutionizing the field with their ability to understand and generate human-like text. As the demand for more sophisticated LLMs continues to grow, there is a pressing need to address the computational challenges associated with their scale and complexity. This paper presents a comprehensive survey on hardware accelerators designed to enhance the performance and energy efficiency of Large Language Models. By examining a diverse range of accelerators, including GPUs, FPGAs, and custom-designed architectures, we explore the landscape of hardware solutions tailored to meet the unique computational demands of LLMs. The survey encompasses an in-depth analysis of architecture, performance metrics, and energy efficiency considerations, providing valuable insights for researchers, engineers, and decision-makers aiming to optimize the deployment of LLMs in real-world applications.

en cs.AR, cs.CL
S2 Open Access 2023
Towards quantum-enabled cell-centric therapeutics

S. Basu, Jannis Born, Aritra Bose et al.

In recent years, there has been tremendous progress in the development of quantum computing hardware, algorithms and services leading to the expectation that in the near future quantum computers will be capable of performing simulations for natural science applications, operations research, and machine learning at scales mostly inaccessible to classical computers. Whereas the impact of quantum computing has already started to be recognized in fields such as cryptanalysis, natural science simulations, and optimization among others, very little is known about the full potential of quantum computing simulations and machine learning in the realm of healthcare and life science (HCLS). Herein, we discuss the transformational changes we expect from the use of quantum computation for HCLS research, more specifically in the field of cell-centric therapeutics. Moreover, we identify and elaborate open problems in cell engineering, tissue modeling, perturbation modeling, and bio-topology while discussing candidate quantum algorithms for research on these topics and their potential advantages over classical computational approaches.

20 citations en Physics, Biology

Page 40 of 425,731