Results for "Computer engineering. Computer hardware"

Showing 20 of ~8,509,835 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

arXiv Open Access 2026
Quantum Computing and Visualization Research Challenges and Opportunities

E. Wes Bethel, Roel Van Beeumen, Talita Perciano

Quantum computing (QC) has experienced rapid growth in recent years with the advent of robust programming environments, readily accessible software simulators and cloud-based QC hardware platforms, and growing interest in learning how to design useful methods that leverage this emerging technology for practical applications. From the perspective of the field of visualization, this article examines research challenges and opportunities along the path from initial feasibility to practical use of QC platforms applied to meaningful problems.

en quant-ph
arXiv Open Access 2026
Hardware-Level Governance of AI Compute: A Feasibility Taxonomy for Regulatory Compliance and Treaty Verification

Samar Ansari

The governance of frontier AI increasingly relies on controlling access to computational resources, yet the hardware-level mechanisms invoked by policy proposals remain largely unexamined from an engineering perspective. This paper bridges the gap between AI governance and computer engineering by proposing a taxonomy of 20 hardware-level governance mechanisms, organised by function (monitoring, verification, enforcement) and assessed for technical feasibility on a four-point scale from currently deployable to speculative. For each mechanism, we provide a technical description, a feasibility rating, and an identification of adversarial vulnerabilities. We map the taxonomy onto four governance scenarios: domestic regulation, bilateral agreements, multilateral treaty verification, and industry self-regulation. Our analysis reveals a structural mismatch: the mechanisms most needed for treaty verification, including on-chip compute metering, cryptographic proof-of-training, and hardware-embedded enforcement, are also the least mature. We assess principal threats to compute-based governance, including algorithmic efficiency gains, distributed training methods, and sovereignty concerns. We identify a temporal constraint: the window during which semiconductor manufacturing concentration makes hardware-level governance implementable is narrowing, while R&D timelines for critical mechanisms span years. We present an adversary-tiered threat analysis distinguishing commercial, non-state, and nation-state actors, arguing the appropriate security standard is tamper-evident assurance analogous to IAEA verification rather than absolute tamper-proofing. The taxonomy, feasibility classification, and mechanism-to-scenario mapping provide a technical foundation for policymakers and identify the R&D investments required before hardware-level governance can support verifiable international agreements.

en cs.CR, cs.CY
DOAJ Open Access 2025
GSMT: An explainable semi-supervised multi-label method based on Gower distance

José Carlos Mondragón, Andres Eduardo Gutierrez-Rodríguez, Victor Adrián Sosa Hernández

The financial, health, and education sectors produce vast amounts of data daily. Labeling entries such as assets, patients, and students is both costly and complex due to the evolution of databases into multi-label settings. Handling real-world data requires automatic labeling to circumvent slow manual procedures, as well as explanations for compliance with regulations. In this work, we introduce GSMT, an inductive Explainable Semi-Supervised Multi-Label Random Forest Method based on Gower Distance, which uses supervised and unsupervised data to provide a non-linear solution for mainly tabular multi-label datasets with fully unknown label vectors. GSMT splits the dataset using multi-dimensional manifolds, completes missing label information, and inductively predicts new observations while achieving explainability. We demonstrate state-of-the-art performance across Micro F1 Score, AUPRC, AUROC, and Label Rank Average Precision in a study involving 20 numerical and 5 mostly categorical datasets with five missing data ratios. By leveraging unsupervised information on top of numerical and categorical data, GSMT outputs the pattern rules annotated with performance measures, explanations on the attribute and label spaces, and an inductive model capable of predicting multi-label observations.

Computer engineering. Computer hardware, Electronic computers. Computer science
DOAJ Open Access 2025
From ideal to practical: Heterogeneity of student-generated variant lists highlights hidden reproducibility gaps.

Rumeysa Aslıhan Ertürk, Abdullah Asım Emül, Büşra Nur Darendeli-Kiraz et al.

Next-generation sequencing (NGS) technologies offer detailed and inexpensive identification of the genetic structure of living organisms. The massive data volume necessitates the utilization of advanced computational resources for analyses. However, the rapid accumulation of data and the urgent need for analysis tools have led to the development of imperfect software solutions. Given their immense potential in clinical applications and the recent reproducibility crisis discussions in science and technology, these tools must be thoroughly examined. Typically, NGS data analysis tools are benchmarked under homogeneous conditions, with well-trained personnel and ideal hardware and data environments. In the real world, however, these analyses are performed under heterogeneous conditions in terms of computing environments and experience levels. This difference is mostly overlooked; therefore, studies that examine NGS workflows generated under various conditions would be highly valuable. Moreover, a detailed assessment of the difficulties faced by trainees would allow for improved educational programs and better NGS analysis training. Considering these needs, we designed an elective undergraduate bioinformatics course project for computer engineering students at Istanbul Technical University. Students were tasked to perform and compare 12 different somatic variant calling pipelines on the recently published SEQC2 dataset. Upon examining the results, we realized that, despite seeming correct, the final variant lists created by different student groups display a high level of heterogeneity. Notably, the operating systems and installation methods were the most influential factors in variant-calling performance. Here, we present detailed evaluations of our case study and provide insights for better bioinformatics training.

Biology (General)
DOAJ Open Access 2025
Probabilistic Infrastructure Failure Cost Analysis Integrating with Equity Cost Using the Reliability Analysis

Yasaman Norouzi, Seyed Hooman Ghasemi, Mohammad Jalayer

Quantifying infrastructure failure costs is pivotal for fostering target reliability and advancing equitable outcomes. Direct and indirect failure cost analyses must account for equity parameters such as fatalities, accessibility, and the fair distribution of benefits and burdens, particularly in transportation policies facing disruption. This paper critically examines existing methods for assessing the indirect costs of infrastructure failure through an equity lens. It synthesizes these approaches into a unified framework of probabilistic failure cost indices. Addressing notable gaps and disparities in the literature, we introduce a novel probabilistic-based metric designed to measure infrastructural costs comprehensively. Our main contribution is developing an inclusive framework that employs a detailed probabilistic formulation, capturing the interactions among key equity agents, including communities, hazards, infrastructure elements, and regulatory bodies. This formulation explicitly considers factors such as accessibility and exposure. Furthermore, we explore Agent-Based Functionality Modeling to achieve a multidimensional understanding of infrastructure failure costs. By analyzing past case studies and user data, we demonstrate the disproportionate impacts of infrastructure disruptions on marginalized communities. We also propose a new set of limit state functions (LSFs) tailored to operational variables, enabling the quantification of infrastructure operational levels by incorporating utility functionality assessments and failure-associated costs. Our framework dynamically integrates system functionality and utility uncertainties, providing a comprehensive and equitable tool for evaluating, predicting, and enhancing infrastructural resilience.

Computer engineering. Computer hardware
DOAJ Open Access 2025
Small Object Detection Algorithm for Aerial Photography Based on Improved YOLOv3

XI Qi, WANG Mingjie, WEI Jinghe, ZHAO Wei

This study presents an improved You Only Look Once version 3 (YOLOv3) algorithm for small object detection, to address problems such as low detection precision for small objects, missed detections, and false detections in the detection process. First, in terms of network structure, the feature extraction capability of the backbone network is improved by using DenseNet-121, a Densely Connected Network (DenseNet), to replace the original Darknet-53 as the basic network. Simultaneously, the convolution kernel size is modified to further reduce the loss of feature map information and enhance the robustness of the detection model against small objects. A fourth feature detection layer with a size of 104×104 pixels is added. Second, the bilinear interpolation method is used to replace the original nearest neighbor interpolation method for upsampling operations, to solve the serious feature loss problem in most detection algorithms. Finally, in terms of the loss function, Generalized Intersection over Union (GIoU) is used instead of Intersection over Union (IoU) to calculate the loss value of the bounding box, and the Focal Loss function is introduced as the confidence loss function of the bounding box. Experimental results show that the mean Average Precision (mAP) of the improved algorithm on the VisDrone2019 dataset is 63.3%, which is 13.2 percentage points higher than that of the original YOLOv3 detection model, with a detection speed of 52 frames/s on a GTX 1080 Ti device. The improved algorithm has good detection performance for small objects.
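For reference, the GIoU term mentioned in this abstract can be computed for a pair of axis-aligned boxes in a few lines. This is a minimal sketch, not the authors' implementation; the (x1, y1, x2, y2) box format is an assumption:

```python
def giou(box_a, box_b):
    """GIoU = IoU - |C minus (A union B)| / |C|, where C is the
    smallest box enclosing both A and B. Boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box C
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c - union) / c
```

Unlike plain IoU, which is 0 for any pair of disjoint boxes, GIoU decreases as disjoint boxes move apart, which is what makes it usable as a bounding-box regression loss.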

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2025
Computational Verification of the Buratti--Horak--Rosa Conjecture for Small Integers and Inductive Approaches

Ranjan N Naik

This paper presents a comprehensive computational approach to verify and inductively construct Hamiltonian paths for the Buratti--Horak--Rosa (BHR) Conjecture. The conjecture posits that for any multiset $L$ of $p-1$ positive integers not exceeding $\lfloor p/2 \rfloor$, there exists a Hamiltonian path in the complete graph $K_p$ with vertex-set $\{0, 1, \dots, p-1\}$ whose edge lengths (under the cyclic metric) match $L$, if and only if for every divisor $d$ of $p$, the number of multiples of $d$ appearing in $L$ is at most $p - d$. Building upon prior computational work by Mariusz Meszka, which verified the conjecture for all primes up to $p=23$, our Python program extends this verification significantly. We approach the problem by systematically generating frequency partitions (FPs) of edge lengths and employing a recursive backtracking algorithm. We report successful computational verification for all frequency partitions for integers $p < 32$, specifically presenting results for $p=31$ and a composite $p=26$. For the composite number $p=30$, the Python code took approximately 11 hours to verify on a Lenovo laptop. For $p=16$, $167,898$ valid multisets were processed, taking around 20 hours on Google Colab Pro+. Furthermore, we introduce and implement two constructive, inductive strategies for building Hamiltonian paths: (1) increasing the multiplicity of an existing edge length, and (2) adding a new edge length. These methods, supported by a reuse-insertion heuristic and backtracking search, demonstrate successful constructions for evolving FPs up to $p=40$. Through these empirical tests and performance metrics, we provide strong computational evidence for the validity of the BHR conjecture within the scope tested, and outline the scalability of our approach for higher integer values.
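The recursive backtracking search described in this abstract can be sketched at small scale. This is an illustrative sketch, not the authors' program; fixing the start vertex at 0 is safe because translating all vertices modulo p preserves cyclic edge lengths:

```python
from collections import Counter

def realize_path(p, lengths):
    """Backtracking search for a Hamiltonian path on vertices 0..p-1 of K_p
    whose multiset of cyclic edge lengths equals `lengths` (BHR realization).
    Returns a vertex sequence, or None if no realization exists."""
    need = Counter(lengths)   # remaining demand for each edge length
    path = [0]                # WLOG start at vertex 0 (translation invariance)
    used = [False] * p
    used[0] = True

    def extend():
        if len(path) == p:
            return True
        for v in range(p):
            if used[v]:
                continue
            d = abs(path[-1] - v)
            d = min(d, p - d)              # cyclic length of edge {path[-1], v}
            if need[d] > 0:
                used[v], need[d] = True, need[d] - 1
                path.append(v)
                if extend():
                    return True
                path.pop()                  # undo and try the next vertex
                used[v], need[d] = False, need[d] + 1
        return False

    return path if extend() else None
```

For example, `realize_path(5, [1, 1, 2, 2])` finds a path, while `realize_path(4, [2, 2, 2])` correctly fails: three multiples of the divisor d = 2 exceed p - d = 2, violating the conjectured necessary condition.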

en cs.DM, cs.DS
arXiv Open Access 2025
Prompt Engineering Guidelines for Using Large Language Models in Requirements Engineering

Krishna Ronanki, Simon Arvidsson, Johan Axell

The rapid emergence of generative AI models like Large Language Models (LLMs) has demonstrated its utility across various activities, including within Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. However, existing literature provides limited guidance on how prompt engineering can be leveraged, specifically for RE activities. The objective of this study is to explore the applicability of existing prompt engineering guidelines for the effective usage of LLMs within RE. To achieve this goal, we began by conducting a systematic review of primary literature to compile a non-exhaustive list of prompt engineering guidelines. Then, we conducted interviews with RE experts to present the extracted guidelines and gain insights on the advantages and limitations of their application within RE. Our literature review indicates a shortage of prompt engineering guidelines for domain-specific activities, specifically for RE. Our proposed mapping contributes to addressing this shortage. We conclude our study by identifying an important future line of research within this field.

en cs.SE
arXiv Open Access 2025
A Mixed User-Centered Approach to Enable Augmented Intelligence in Intelligent Tutoring Systems: The Case of MathAIde app

Guilherme Guerino, Luiz Rodrigues, Luana Bianchini et al.

This study explores the integration of Augmented Intelligence (AuI) in Intelligent Tutoring Systems (ITS) to address challenges in Artificial Intelligence in Education (AIED), including teacher involvement, AI reliability, and resource accessibility. We present MathAIde, an ITS that uses computer vision and AI to correct mathematics exercises from student work photos and provide feedback. The system was designed through a collaborative process involving brainstorming with teachers, high-fidelity prototyping, A/B testing, and a real-world case study. Findings emphasize the importance of a teacher-centered, user-driven approach, where AI suggests remediation alternatives while teachers retain decision-making. Results highlight efficiency, usability, and adoption potential in classroom contexts, particularly in resource-limited environments. The study contributes practical insights into designing ITSs that balanceuser needs and technological feasibility, while advancing AIED research by demonstrating the effectiveness of a mixed-methods, user-centered approach to implementing AuI in educational technologies.

en cs.HC, cs.AI
CrossRef Open Access 2024
Design and Modeling of Hardware Kit for QKD Education of Engineering Students and Communication Engineers

Vladimir Faerman, Alexander Olegovich Terekhin, Dmitriy Bragin et al.

The paper discusses the design and modeling of a simple and illustrative hardware kit for teaching the basics of quantum cryptography to engineering students. The novel solution differs from those already on the market in that it is focused on familiarising trainees with the physical principles of quantum key distribution, as well as with the basics of mathematical formalism in quantum mechanics. This is achieved by using a minimally sufficient set of optical elements with a simple mathematical description in the Jones formalism. This composition of the kit is targeted mostly at engineering students and does not require advanced training in physics as a prerequisite. The configurable architecture of the hardware educational kit contributes to the deeper involvement of students. By independently changing the modular configuration of the system, students can conduct experiments that were not directly provided by the developers. For instance, students can assess the impact of choosing the basic settings of phase retarders on the course of a man-in-the-middle attack. The proposed amount of mathematical and informational support, as well as ready-made formal models, is sufficient to reasonably put forward hypotheses for experimental verification and interpret the obtained empirical data. At this moment, the hardware kit is being replicated, distributed and successfully applied at universities and companies in the communications industry. The accumulated experience of educational use testifies to the high efficiency of the kit as a tool for basic QKD training for people without prior knowledge of quantum mechanics. A discrete event model simulating the operation of the hardware kit is implemented in the non-commercial modeling software CPN Tools and is openly distributed.
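The Jones formalism the kit is built around can be illustrated in a few lines. This sketch is our own illustration, not part of the kit's software; it applies a horizontal linear polarizer to 45°-polarized light and recovers Malus's law:

```python
import cmath

def apply(J, v):
    # act with a 2x2 Jones matrix J on a Jones vector v = [Ex, Ey]
    return [J[0][0] * v[0] + J[0][1] * v[1],
            J[1][0] * v[0] + J[1][1] * v[1]]

def intensity(v):
    # optical intensity is the squared norm of the Jones vector
    return abs(v[0]) ** 2 + abs(v[1]) ** 2

H_POL = [[1, 0], [0, 0]]                 # linear polarizer, horizontal axis

def retarder(delta):
    # phase retarder with fast axis horizontal and retardance delta
    return [[1, 0], [0, cmath.exp(1j * delta)]]

diag = [1 / 2 ** 0.5, 1 / 2 ** 0.5]      # light linearly polarized at 45 degrees
# Malus's law: a horizontal polarizer passes cos^2(45°) = 1/2 of the intensity
half = intensity(apply(H_POL, diag))
```

A quarter-wave retarder (`retarder(cmath.pi / 2)`) applied to the same diagonal state yields circular polarization, the kind of configurable experiment the kit's modular architecture is meant to support.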

arXiv Open Access 2024
Lessons on Datasets and Paradigms in Machine Learning for Symbolic Computation: A Case Study on CAD

Tereso del Río, Matthew England

Symbolic Computation algorithms and their implementation in computer algebra systems often contain choices which do not affect the correctness of the output but can significantly impact the resources required: such choices can benefit from having them made separately for each problem via a machine learning model. This study reports lessons on such use of machine learning in symbolic computation, in particular on the importance of analysing datasets prior to machine learning and on the different machine learning paradigms that may be utilised. We present results for a particular case study, the selection of variable ordering for cylindrical algebraic decomposition, but expect that the lessons learned are applicable to other decisions in symbolic computation. We utilise an existing dataset of examples derived from applications which was found to be imbalanced with respect to the variable ordering decision. We introduce an augmentation technique for polynomial systems problems that allows us to balance and further augment the dataset, improving the machine learning results by 28% and 38% on average, respectively. We then demonstrate how the existing machine learning methodology used for the problem (classification) might be recast into the regression paradigm. While this does not radically change performance, it does widen the scope in which the methodology can be applied to make choices.

en cs.SC, cs.LG
arXiv Open Access 2024
Computing the QRPA Level Density with the Finite Amplitude Method

Antonio Bjelčić, Nicolas Schunck

We describe a new algorithm to calculate the vibrational nuclear level density of an atomic nucleus. Fictitious perturbation operators that probe the response of the system are generated by drawing their matrix elements from some probability distribution function. We use the Finite Amplitude Method to explicitly compute the response for each such sample. With the help of the Kernel Polynomial Method, we build an estimator of the vibrational level density and provide the upper bound of the relative error in the limit of infinitely many random samples. The new algorithm can give accurate estimates of the vibrational level density. Since it is based on drawing multiple samples of perturbation operators, its computational implementation is naturally parallel and scales like the number of available processing units.

en nucl-th
arXiv Open Access 2024
Incremental computation of the set of period sets

Eric Rivals

Overlaps between words are crucial in many areas of computer science, such as code design, stringology, and bioinformatics. A self-overlapping word is characterized by its periods and borders. A period of a word $u$ is the starting position of a suffix of $u$ that is also a prefix of $u$, and such a suffix is called a border. Each word of length $n>0$ has a set of periods, but not all combinations of integers are sets of periods. Computing the period set of a word $u$ takes linear time in the length of $u$. We address the question of computing the set, denoted $Γ_n$, of all period sets of words of length $n$. Although period sets have been characterized, there is no formula to compute the cardinality of $Γ_n$ (which is exponential in $n$), and the known dynamic programming algorithm to enumerate $Γ_n$ suffers from its space complexity. We present an incremental approach to compute $Γ_n$ from $Γ_{n-1}$, which reduces the space complexity, and then a constructive certification algorithm useful for verification purposes. The incremental approach defines a parental relation between sets in $Γ_{n-1}$ and $Γ_n$, enabling one to investigate the dynamics of period sets and their intriguing statistical properties. Moreover, the period set of a word $u$ is the key to computing the absence probability of $u$ in random texts. Thus, knowing $Γ_n$ is useful to assess the significance of word statistics, such as the number of missing words in a random text.
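The linear-time computation of a single word's period set mentioned in this abstract can be sketched via the KMP failure (border) function; each border of length b corresponds to the period n - b. This is a minimal illustration of that one step, not the paper's algorithms for Γ_n:

```python
def period_set(u):
    """Nontrivial periods of u, computed from the KMP failure function.
    fail[i] = length of the longest proper border of u[:i+1]; runs in O(|u|)."""
    n = len(u)
    fail = [0] * n
    k = 0
    for i in range(1, n):
        while k and u[i] != u[k]:
            k = fail[k - 1]
        if u[i] == u[k]:
            k += 1
        fail[i] = k
    periods = set()
    b = fail[n - 1] if n else 0
    while b > 0:                 # walk the failure links to list all borders
        periods.add(n - b)       # a border of length b gives period n - b
        b = fail[b - 1]
    return periods
```

For instance, "abaaba" has borders "a" and "aba", hence period set {3, 5}, while a square-free word like "abcd" has no nontrivial period at all.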

DOAJ Open Access 2023
The use of new technologies in the English course

Vanessa Mariuxi García Macías, Monica Annabella Mejia Avellan, Letty Jacqueline Saltos Rodríguez et al.

Learning English, given its cultural importance and its usefulness in the most diverse relationships established worldwide, is a priority. This article therefore aims to examine, at a conceptual level, the use of new technologies and online classes in order to develop a proposal that contributes, in an engaging, motivating, and up-to-date way, to communication in the English language. Different methods were used, such as analysis, synthesis, document review, observation, and interviews, which led to conclusions such as the need to structure a process that addresses English language learning from an integral perspective, supported by new technologies and online classes, so as to achieve a coherent logic in the teaching and learning of oral expression in context, ideal for receiving and offering help and for moving gradually toward linguistic and communicative independence.

Computer engineering. Computer hardware
arXiv Open Access 2023
KyberMat: Efficient Accelerator for Matrix-Vector Polynomial Multiplication in CRYSTALS-Kyber Scheme via NTT and Polyphase Decomposition

Weihang Tan, Yingjie Lao, Keshab K. Parhi

CRYSTAL-Kyber (Kyber) is one of the post-quantum cryptography (PQC) key-encapsulation mechanism (KEM) schemes selected during the standardization process. This paper addresses optimization for Kyber architecture with respect to latency and throughput constraints. Specifically, matrix-vector multiplication and number theoretic transform (NTT)-based polynomial multiplication are critical operations and bottlenecks that require optimization. To address this challenge, we propose an algorithm and hardware co-design approach to systematically optimize matrix-vector multiplication and NTT-based polynomial multiplication by employing a novel sub-structure sharing technique in order to reduce computational complexity, i.e., the number of modular multiplications and modular additions/subtractions consumed. The sub-structure sharing approach is inspired by prior fast parallel approaches based on polyphase decomposition. The proposed efficient feed-forward architecture achieves high speed, low latency, and full utilization of all hardware components, which can significantly enhance the overall efficiency of the Kyber scheme. The FPGA implementation results show that our proposed design, using the fast two-parallel structure, leads to an approximate reduction of 90% in execution time, along with a 66 times improvement in throughput performance.
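The NTT-based polynomial multiplication at the heart of such designs can be sketched at toy scale. This illustration uses a naive O(n²) transform over Z_17 with n = 4, not Kyber's actual parameters (q = 3329, n = 256) or the paper's architecture:

```python
Q = 17    # toy NTT-friendly modulus: Q ≡ 1 (mod 2N) for N = 4
N = 4
PSI = 2   # primitive 2N-th (8th) root of unity mod 17

def ntt(a, root):
    # naive O(N^2) transform: out[i] = sum_j a[j] * root^(i*j) mod Q
    return [sum(a[j] * pow(root, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def negacyclic_mul(a, b):
    """Multiply a*b in Z_Q[x]/(x^N + 1) via the psi-twisted NTT."""
    w = pow(PSI, 2, Q)                     # primitive N-th root of unity
    twist = lambda v: [v[j] * pow(PSI, j, Q) % Q for j in range(N)]
    # pointwise product in the transform domain
    ct = [x * y % Q for x, y in
          zip(ntt(twist(a), w), ntt(twist(b), w))]
    c = ntt(ct, pow(w, Q - 2, Q))          # inverse transform (unscaled)
    n_inv = pow(N, Q - 2, Q)
    psi_inv = pow(PSI, Q - 2, Q)
    return [c[j] * n_inv % Q * pow(psi_inv, j, Q) % Q for j in range(N)]
```

The twist by powers of ψ turns cyclic convolution into the negacyclic convolution required modulo x^N + 1; for example, x³ · x reduces to -1 ≡ 16 (mod 17). The hardware optimizations in the paper target the butterfly-network O(n log n) version of this transform.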

en cs.CR, cs.AR
DOAJ Open Access 2022
High Purity Hydrogen from Liquid NH3 - Proposal and Evaluation of a Process Chain

Kouessan Aziaba, Barbara D. Weiss, Viktoria Illyes et al.

This study proposes a process chain to gain high purity hydrogen from liquid ammonia. The utilization of the stored hydrogen requires the endothermic decomposition of ammonia, 2 NH3 → N2 + 3 H2 (1), and the subsequent purification of H2. A process model from liquid NH3 to high purity hydrogen was developed. The process model includes the reaction kinetics for the catalytic decomposition of NH3 using a catalyst, such as Ni-Pt/Al2O3, and the necessary purification steps. Based on the simulation, a final process chain is proposed. Finally, heat integration calculations were performed to optimize the energy efficiency of the process. The application of a polyimide membrane system is proposed. The performed calculations show that using membrane separation, a H2 purity of around 97 wt% can be achieved. For a final NH3 content of < 1 ppm, the study found acidic or adsorptive removal of remaining NH3 necessary even for high decomposition conversion rates. To achieve even higher H2 purity, the application of an additional pressure swing adsorption separation is proposed. This application can ensure H2 purities of > 99 wt% suitable for PEM fuel cells.

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2022
Investigation on the Seismic Performance of Joints with Different Shapes in Bridge Columns

Saleh Salehi Fereidouni, Xiuli Du

This research aims to study the influence of different column angles on bridge columns and cap-beam joints. According to the analytical results, changing the angle from 0 to 10 degrees did not have a considerable impact on the displacement and acceleration of the bridge models, and the cap-beam joints did not play a key role under the different stresses and strains. However, critical stresses were observed around the cap-beam-to-column joints; the analytical data indicate that the lower corner of the joints is the significant location for strengthening. This finding can guide reductions in the consumption of reinforcing bars at the cap beam or columns, which otherwise require extensive material and additional cost. The software used in this study was Abaqus. The model was a two-column bridge bent with one pedestal. The bridge was first designed in CSI Bridge software based on the AASHTO 2007 code, then re-modeled in Abaqus, and finally the effects of 7 earthquake accelerograms for 3 pedestal-angle configurations at the cap-beam-column joint were analyzed.

Computer engineering. Computer hardware
arXiv Open Access 2022
Physical Computing for Materials Acceleration Platforms

Erik Peterson, Alexander Lavin

A "technology lottery" describes a research idea or technology succeeding over others because it is suited to the available software and hardware, not necessarily because it is superior to alternative directions; examples abound, from the synergies of deep learning and GPUs to the disconnect of urban design and autonomous vehicles. The nascent field of Self-Driving Laboratories (SDL), particularly those implemented as Materials Acceleration Platforms (MAPs), is at risk of an analogous pitfall: the next logical step for building MAPs is to take existing lab equipment and workflows and mix in some AI and automation. In this whitepaper, we argue that the same simulation and AI tools that will accelerate the search for new materials, as part of the MAPs research program, also make possible the design of fundamentally new computing mediums. We need not be constrained by existing biases in science, mechatronics, and general-purpose computing, but rather we can pursue new vectors of engineering physics with advances in cyber-physical learning and closed-loop, self-optimizing systems. Here we outline a simulation-based MAP program to design computers that use physics itself to solve optimization problems. Such systems mitigate the hardware-software-substrate-user information losses present in every other class of MAPs, and they perfect the alignment between computing problems and computing mediums, eliminating any technology lottery. We offer concrete steps toward early "Physical Computing (PC)-MAP" advances and the longer-term cyber-physical R&D which we expect to introduce a new era of innovative collaboration between materials researchers and computer scientists.

en cs.AI, cs.AR
arXiv Open Access 2022
Accelerating Simulation of Quantum Circuits under Noise via Computational Reuse

Meng Wang, Swamit Tannu, Prashant J. Nair

To realize the full potential of quantum computers, we must mitigate qubit errors by developing noise-aware algorithms, compilers, and architectures. Thus, simulating quantum programs on high-performance computing (HPC) systems with different noise models is a de facto tool researchers use. Unfortunately, noisy simulators iteratively execute a similar circuit for thousands of trials, thereby incurring significant performance overheads. To address this, we propose a noisy simulation technique called Tree-Based Quantum Circuit Simulation (TQSim). TQSim exploits the reusability of intermediate results during the noisy simulation, reducing computation. TQSim dynamically partitions a circuit into several subcircuits. It then reuses the intermediate results from these subcircuits during computation. Compared to a noisy Qulacs-based baseline simulator, TQSim achieves a speedup of up to 3.89x for noisy simulations. TQSim is designed to be efficient with multi-node setups while also maintaining tight fidelity bounds.

en quant-ph, cs.ET

Page 31 of 425,492