Results for "Computer engineering. Computer hardware"

Showing 20 of ~8,502,229 results · from DOAJ, CrossRef, Semantic Scholar, arXiv

DOAJ Open Access 2025
Sustainability and Energy Efficiency in Administrative Processes: A Control Theory Approach with P-graph Optimization

Boglárka Eisinger, László Buics

As institutions seek to reduce their environmental impact, administrative processes must be optimized for energy and resource efficiency. This study integrates control theory with P-graph methodology to develop a structured framework for sustainable administrative workflows, focusing on university enrollment systems. P-graph-based optimization identifies minimum-energy pathways and optimal resource configurations, while Model Predictive Control (MPC) and nonlinear control enable real-time process adaptation under dynamic conditions. A Life Cycle Analysis (LCA) compares the carbon footprint of digital and paper-based workflows, evaluating IT infrastructure energy use versus traditional operations. Simulated control strategies support energy-efficient decision-making, highlighting best practices for emission reduction and operational flexibility. The result is a decision-support framework that embeds P-graph into a dynamic control context, guiding control strategy selection to minimize energy use and emissions. This scalable approach supports sustainability-oriented process management across public service domains.

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2025
When Plans Meet Reality: the Tangle of Improvisation and Planning in Crisis Situations

Yassmine Rannene, Nelly Olivier-Maget, Samantha Lim et al.

Planning is crucial in crisis preparedness. Yet well-prepared plans often fail to provide an adequate response due to the unpredictability of crises. Consequently, responses often require improvisation, shaped by contingencies and time constraints. However, research on risks frequently puts planning and improvisation at odds. In this paper, we overcome the seemingly contradictory nature of planning and improvisation and explore how they intertwine in complex technological emergencies. Although organizations develop a wide range of plans, complete prediction of crises is out of reach. We therefore propose that improvisation and planning are complementary in strengthening the resilience of organizations in crisis management. To better understand improvisation, researchers have studied it at the individual, group, and organizational levels, focusing on its characteristics and dimensions. However, gaps remain in our understanding of the full impact of improvisation, how it interacts with existing crisis plans, and how it unfolds during crises. This article aims to fill these gaps by exploring the influence of improvised actions on the execution of existing crisis management plans.

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2025
Analysis of Access Log Files from Nginx and Apache Web Servers

Dariel González Robinson, Yohandra Echavarria Castillo, Madelín Haro Pérez et al.

This article focuses on the analysis of access logs to obtain valuable information about a website's traffic, user behavior, and potential security issues. A script was developed to clean and analyze the data generated by web systems that use Nginx or Apache. The script, written in Bash, supports the cleaning, analysis, and visualization of log traces, identifying significant events. It can analyze the access logs of both web servers, as well as those of other web technologies that use the same standard log-storage format. For the experiments, the script was run on a set of traces generated by the XABAL EXCRIBA management system. The results of the trace analysis show that the EXCRIBA website is used mainly for information lookup, given the large number of GET requests. The results also show that the service is reliable and efficient in delivering content and services, given the number of responses with success status codes. However, the presence of redirects and errors suggests room for improvement in the optimization of routes and links and in how the server handles requests.
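The core of this kind of log analysis is straightforward to sketch. Below is a minimal illustration in Python (the paper's actual script is written in Bash, and the log lines here are made-up examples in the Common Log Format used by both Nginx and Apache):

```python
import re
from collections import Counter

# Common Log Format, as emitted by default by Nginx and Apache (assumption:
# the paper's "standard format" refers to this or the combined variant).
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

def summarize(lines):
    """Count HTTP methods and status-code classes, the two signals the
    paper uses to characterize site usage and service reliability."""
    methods, statuses = Counter(), Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            methods[m.group("method")] += 1
            statuses[m.group("status")[0] + "xx"] += 1
    return methods, statuses

sample = [
    '192.0.2.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326',
    '192.0.2.2 - - [10/Oct/2023:13:55:40 +0000] "GET /docs HTTP/1.1" 301 0',
    '192.0.2.3 - - [10/Oct/2023:13:55:41 +0000] "POST /login HTTP/1.1" 500 512',
]
methods, statuses = summarize(sample)
```

A GET-dominated method count with mostly 2xx responses is exactly the pattern the paper reads as "mainly used for information lookup, reliably served".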

Computer engineering. Computer hardware
DOAJ Open Access 2025
Attention Distillation Contrastive Mutual Learning Model for COVID-19 Image Diagnosis

LÜ Jingqin, HU Lang, LIANG Weinan, LI Guangli, ZHANG Hongbin

COVID-19 is an illness caused by a strain of the novel coronavirus. Existing COVID-19 imaging diagnostic models face challenges such as the lack of high-quality samples and insufficient exploration of inter-sample relationships. This paper proposes a novel model called Attention Distillation Contrastive Mutual Learning (ADCML) for COVID-19 diagnosis to address these two issues. First, a progressive data augmentation strategy is constructed, which includes AutoAugment and sample filtering; the lack of quality samples is proactively addressed by expanding the number of images while ensuring their quality. Second, the ADCML framework is built, which employs attention distillation to motivate two heterogeneous networks to learn from each other the pathological knowledge highlighted by their attention. The implicit contrastive relationships among the diverse samples are then fully mined to improve the discriminative ability of the extracted features. Finally, a new adaptive model-fusion module is designed to fully exploit the complementarity between the heterogeneous networks and complete the COVID-19 image diagnosis. The proposed model is validated on three publicly available datasets, including Computed Tomography (CT) and X-ray images, with accuracies of 89.69%, 98.16%, and 98.91%; F1 values of 88.62%, 97.58%, and 98.47%; and Area Under the Curve (AUC) values of 88.95%, 97.77%, and 98.90%, respectively. These results show that the ADCML model outperforms the mainstream baselines and is robust, and that progressive data augmentation, attention distillation, and contrastive mutual learning work jointly to improve the final classification performance.
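The mutual-learning ingredient can be illustrated with the generic deep-mutual-learning objective, in which each network adds a KL-divergence term pulling it toward its peer's predictions. This is a hedged sketch of that one component only; ADCML's attention-distillation and contrastive terms, and every function name below, are illustrative rather than taken from the paper:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL(p || q) for two discrete distributions over the same classes."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_learning_losses(logits_a, logits_b, ce_a, ce_b):
    """Each network's loss = its own cross-entropy + a KL term toward the
    peer's prediction (the generic deep-mutual-learning form; ADCML layers
    attention distillation and contrastive terms on top of this)."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return ce_a + kl(pb, pa), ce_b + kl(pa, pb)

la, lb = mutual_learning_losses([2.0, 0.5, -1.0], [1.5, 1.0, -0.5],
                                ce_a=0.4, ce_b=0.6)
```

When the two networks' predictions agree exactly, both KL terms vanish and each loss reduces to its own cross-entropy.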

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2025
Programming with Pixels: Can Computer-Use Agents do Software Engineering?

Pranjal Aggarwal, Sean Welleck

Computer-use agents (CUAs) hold the promise of performing a wide variety of general tasks, but current evaluations have primarily focused on simple scenarios. It therefore remains unclear whether such generalist agents can automate more sophisticated and specialized work such as software engineering (SWE). To investigate this, we introduce Programming with Pixels (PwP), the first comprehensive computer-use environment for software engineering, where agents visually control an IDE to perform diverse software engineering tasks. To enable holistic evaluation, we also introduce PwP-Bench, a benchmark of 15 existing and new software-engineering tasks spanning multiple modalities, programming languages, and skillsets. We perform an extensive evaluation of state-of-the-art open-weight and closed-weight CUAs and find that when interacting purely visually, they perform significantly worse than specialized coding agents. However, when the same CUAs are given direct access to just two APIs (file editing and bash operations), performance jumps, often reaching the levels of specialized agents despite having a task-agnostic design. Furthermore, when given access to additional IDE tools via text APIs, all models show further gains. Our analysis shows that current CUAs fall short mainly due to limited visual grounding and the inability to take full advantage of the rich environment, leaving clear room for future improvements. PwP establishes software engineering as a natural domain for benchmarking whether generalist computer-use agents can reach specialist-level performance on sophisticated tasks. Code and data released at https://programmingwithpixels.com

en cs.SE, cs.LG
arXiv Open Access 2025
BIMgent: Towards Autonomous Building Modeling via Computer-use Agents

Zihan Deng, Changyu Du, Stavros Nousias et al.

Existing computer-use agents primarily focus on general-purpose desktop automation tasks, with limited exploration of their application in highly specialized domains. In particular, the 3D building modeling process in the Architecture, Engineering, and Construction (AEC) sector involves open-ended design tasks and complex interaction patterns within Building Information Modeling (BIM) authoring software, which has yet to be thoroughly addressed by current studies. In this paper, we propose BIMgent, an agentic framework powered by multimodal large language models (LLMs), designed to enable autonomous building model authoring via graphical user interface (GUI) operations. BIMgent automates the architectural building modeling process, including multimodal input for conceptual design, planning of software-specific workflows, and efficient execution of the authoring GUI actions. We evaluate BIMgent on real-world building modeling tasks, including both text-based conceptual design generation and reconstruction from existing building design. The design quality achieved by BIMgent was found to be reasonable. Its operations achieved a 32% success rate, whereas all baseline models failed to complete the tasks (0% success rate). Results demonstrate that BIMgent effectively reduces manual workload while preserving design intent, highlighting its potential for practical deployment in real-world architectural modeling scenarios. Project page: https://tumcms.github.io/BIMgent.github.io/

en cs.AI
arXiv Open Access 2025
Hardware Efficient Accelerator for Spiking Transformer With Reconfigurable Parallel Time Step Computing

Bo-Yu Chen, Tian-Sheuan Chang

This paper introduces the first low-power hardware accelerator for Spiking Transformers, an emerging alternative to traditional artificial neural networks. By modifying the base Spikformer model to use IAND instead of residual addition, the model exclusively utilizes spike computation. The hardware employs a fully parallel tick-batching dataflow and a time-step reconfigurable neuron architecture, addressing the delay and power challenges of multi-timestep processing in spiking neural networks. This approach processes outputs from all time steps in parallel, reducing computation delay and eliminating membrane memory, thereby lowering energy consumption. The accelerator supports 3x3 and 1x1 convolutions and matrix operations through vectorized processing, meeting model requirements. Implemented in TSMC's 28nm process, it achieves 3.456 TSOPS (tera spike operations per second) with a power efficiency of 38.334 TSOPS/W at 500MHz, using 198.46K logic gates and 139.25KB of SRAM.
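Replacing residual addition with IAND keeps every intermediate value binary. A minimal sketch, assuming the common definition IAND(a, b) = (NOT a) AND b from the spiking-network literature (the paper's exact variant may differ):

```python
def iand(residual, branch):
    """Element-wise IAND on binary spike trains: (NOT residual) AND branch.
    Unlike residual addition, the result stays in {0, 1}, so all downstream
    arithmetic remains pure spike computation.  (Definition assumed from
    common usage in the spiking-network literature, not from the paper.)"""
    return [(1 - r) & b for r, b in zip(residual, branch)]

residual = [1, 0, 1, 0]
branch   = [1, 1, 0, 0]
merged = iand(residual, branch)
# Residual *addition* would instead give [2, 1, 1, 0], producing non-binary
# values that would force multi-bit arithmetic in the accelerator datapath.
```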

en cs.AR
arXiv Open Access 2024
NeuroNAS: Enhancing Efficiency of Neuromorphic In-Memory Computing for Intelligent Mobile Agents through Hardware-Aware Spiking Neural Architecture Search

Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Intelligent mobile agents (e.g., UGVs and UAVs) typically demand low power/energy consumption when solving their machine learning (ML)-based tasks, since they are usually powered by portable batteries with limited capacity. A potential solution is employing neuromorphic computing with Spiking Neural Networks (SNNs), which leverages event-based computation to enable ultra-low power/energy ML algorithms. To maximize the performance efficiency of SNN inference, In-Memory Computing (IMC)-based hardware accelerators with emerging device technologies (e.g., RRAM) can be employed. However, SNN models are typically developed without considering constraints from the application and the underlying IMC hardware, thereby hindering SNNs from reaching their full potential in performance and efficiency. To address this, we propose NeuroNAS, a novel framework for developing energy-efficient neuromorphic IMC for intelligent mobile agents using hardware-aware spiking neural architecture search (NAS), i.e., by quickly finding an SNN architecture that offers high accuracy under the given constraints (e.g., memory, area, latency, and energy consumption). Its key steps include: optimizing SNN operations to enable efficient NAS, employing quantization to minimize the memory footprint, developing an SNN architecture that facilitates effective learning, and devising a systematic hardware-aware search algorithm to meet the constraints. Compared to state-of-the-art techniques, NeuroNAS quickly finds SNN architectures (with 8-bit weight precision) that maintain high accuracy, achieving up to 6.6x search-time speed-ups, up to 92% area savings, 1.2x latency improvements, and 84% energy savings across different datasets (i.e., CIFAR-10, CIFAR-100, and TinyImageNet-200), while the state of the art fails to meet all constraints at once.
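The essential shape of such a hardware-aware search, reduced to constraint filtering plus accuracy maximization, can be sketched as follows (the candidate architectures and their cost numbers are hypothetical, and NeuroNAS's actual search algorithm is considerably more elaborate):

```python
# Hypothetical candidate SNN architectures with estimated hardware costs.
candidates = [
    {"name": "snn_a", "accuracy": 0.91, "memory_kb": 420, "latency_ms": 9.0, "energy_mj": 2.1},
    {"name": "snn_b", "accuracy": 0.93, "memory_kb": 610, "latency_ms": 7.5, "energy_mj": 3.4},
    {"name": "snn_c", "accuracy": 0.89, "memory_kb": 300, "latency_ms": 6.0, "energy_mj": 1.5},
]

def hardware_aware_search(candidates, memory_kb, latency_ms, energy_mj):
    """Keep only architectures meeting every hardware constraint, then pick
    the most accurate one -- the essential logic of constraint-driven NAS."""
    feasible = [c for c in candidates
                if c["memory_kb"] <= memory_kb
                and c["latency_ms"] <= latency_ms
                and c["energy_mj"] <= energy_mj]
    return max(feasible, key=lambda c: c["accuracy"]) if feasible else None

best = hardware_aware_search(candidates, memory_kb=500, latency_ms=10.0, energy_mj=2.5)
```

Note how the most accurate candidate overall (snn_b) is rejected because it violates the memory and energy budgets; the search returns the best *feasible* architecture instead.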

en cs.NE, cs.AI
arXiv Open Access 2024
ReCon: Reconfiguring Analog Rydberg Atom Quantum Computers for Quantum Generative Adversarial Networks

Nicholas S. DiBrita, Daniel Leeds, Yuqian Huo et al.

Quantum computing has shown theoretical promise of speedup in several machine learning tasks, including generative tasks using generative adversarial networks (GANs). While quantum computers have been implemented with different types of technologies, recently, analog Rydberg atom quantum computers have been demonstrated to have desirable properties such as reconfigurable qubit (quantum bit) positions and multi-qubit operations. To leverage the properties of this technology, we propose ReCon, the first work to implement quantum GANs on analog Rydberg atom quantum computers. Our evaluation using simulations and real-computer executions shows 33% better quality (measured using Frechet Inception Distance (FID)) in generated images than the state-of-the-art technique implemented on superconducting-qubit technology.

en quant-ph, cs.CV
DOAJ Open Access 2023
The Optimization of γ-Al2O3 Production from Aluminium Foil Waste by Precipitation

Nessren M. Farrag, Yousef M. Ibrahim

The objective of this work encompasses the application of aluminium foil waste for preparing alumina (Al2O3) by precipitation. A response surface approach is developed for the produced γ-Al2O3 phase. Experiments were performed according to a 3² factorial design to evaluate the effects of hydrochloric acid (HCl) concentration, sodium hydroxide (NaOH), used as the precipitating agent, and the calcination temperature on purity and yield. The effect of the two independent variables on the response variables was studied through response surface plots and contour plots generated with the Design-Expert software. The desirability function was used to optimize the response variables. The purity and yield of the prepared alumina were further investigated by EDX analysis, which showed purities above 97% for the three samples calcined at the three different temperatures. The evolution of the crystalline phases of the obtained powders was followed through several complementary studies, and the morphology of the obtained alumina was examined by SEM. The observed responses agreed with the experimental values; the γ-Al2O3 phase was produced with fewer experimental trials, and a high yield was achieved under the formulation-by-design concept.
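A 3² full factorial design simply enumerates every combination of two factors at three levels each, giving nine experimental runs. A sketch with placeholder levels (hypothetical values; the abstract does not list the study's actual factor levels):

```python
from itertools import product

# Placeholder levels for two factors at three levels each (hypothetical
# numbers; the study varies HCl concentration, NaOH, and calcination
# temperature, but the actual levels are not given in the abstract).
hcl_molarity = [1.0, 2.0, 3.0]          # mol/L
calcination_temp_c = [900, 1000, 1100]  # degrees C

# The full 3x3 design: every combination of the two factor levels,
# each corresponding to one experimental run.
runs = list(product(hcl_molarity, calcination_temp_c))
```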

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2023
A viscoelastic-plastic model for skeletal structural systems with clearances

Mieczysław S. Kuczma

The paper is concerned with the mathematical modelling and numerical solution of unilateral problems for viscoelastic-plastic structural systems. A new material model is proposed in which the viscoelastic and plastic strains are governed by different constitutive laws. The model is restricted to isothermal quasistatic deformation processes under conditions of geometric linearity. The mechanical problem is posed in the format of piecewise linear plasticity and the unilateral contact conditions are described by means of the clearance function. The linear viscoelastic laws are integrated by a creep approach method, which allows for jump-discontinuities in the history of stress. For the evolution of plastic strains an implicit method is used. The problem is formulated and solved as a sequence of nested (mixed) linear complementarity problems. The question of existence and uniqueness of a solution to the problem is discussed. A numerical algorithm based on pivotal transformations is devised and its stability is shown numerically. Results of numerical experiments for several illustrative examples of a beam/foundation system subjected to nonproportional loading histories are presented. The results clearly demonstrate the impact of the history of loading and the unilateral constraints upon the current state of the structural system.

Computer engineering. Computer hardware, Mechanics of engineering. Applied mechanics
DOAJ Open Access 2023
Canonical Correlation Analysis to Biomass CHONS Prediction

Federico Moretta, Vincenzo Del Duca, Giulia Bozzano et al.

Fermentation biomasses can be defined as complex mixtures of different natural components and microbes, having biodegradable and organic waste as the primary source. Their correct characterization is crucial for proper processing in fermentation units. Firstly, proximate analysis is done to retrieve the content of specific compound classes in the mixture, such as fats, proteins, and carbohydrates. However, this is often not enough to achieve sufficient precision, since some low-concentration species are not easily found through this methodology (e.g., sulfate compounds, ethanol, caproic acid). Consequently, ultimate analysis is performed to evaluate the exact amount of every element in the mixture. For biomass-based compounds, the elemental content can be summarized as carbon, hydrogen, oxygen, nitrogen, and sulfur; the total content of these elements is also known as CHONS. From this, it is possible to derive the exact amount of the corresponding species in the biomass. However, the experimental procedure for its determination is rather time- and budget-consuming. On the other hand, the data collected in the literature, from both experimental and industrial analyses, can be exploited to build a numerical model, based on multivariate statistical analysis and machine learning principles, that predicts the CHONS content for every type of biomass. In this work, a data-driven model has been developed to achieve this aim, taking a set of relevant variables as input, and a dataset has been built to gather these data. The multivariate statistical technique of Canonical Correlation Analysis (CCA) is used to find 'hidden' correlations and predict the CHONS content of 27 different biomass types. In future research, machine learning techniques will be applied to compare the results obtained.

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2023
Improved U-Net Model for Road Crack Detection Based on Residual and Attention Mechanism

YU Haiyang, JING Peng, ZHANG Wentao, XIE Saifei, HUA Zhihua, SONG Caoyuan

Road cracks are an important part of road safety detection, and with the development of deep learning and computer vision, methods for extracting crack information from road images using deep learning are maturing. Existing deep learning road crack detection methods cannot extract small cracks and are affected by background factors, resulting in a decrease in detection accuracy. Based on the Convolutional Block Attention Module (CBAM) attention mechanism and the residual network, a deep learning network model for road crack detection incorporating the residual and attention mechanisms is established by improving the U-Net neural network model. The model embeds the channel attention mechanism and spatial attention mechanism in the up-sampling and down-sampling processes of the U-Net network, respectively. The CBAM attention mechanism performs both global average and global maximum mixed pooling on both channel and spatial dimensions, producing more effective global and local detail information. Meanwhile, integrating residual modules in the U-Net network effectively alleviates the problems of gradient vanishing, gradient explosion, and network degradation, further improving the detection of road cracks. The experimental results show that, compared with the original U-Net network, the F1 value of the U-Net network with the CBAM attention mechanism embedded in the up-sampling and down-sampling processes reaches 81.02%, an increase of 13.76 percentage points. Furthermore, compared with the network that only embeds the CBAM attention mechanism, the F1 value of the network that integrates residual modules and embeds the CBAM attention mechanism in the down-sampling processes reaches 85.82%, an increase of 4.8 percentage points.

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2022
Multi-Parameter Support with NTTs for NTRU and NTRU Prime on Cortex-M4

Erdem Alkim, Vincent Hwang, Bo-Yin Yang

We propose NTT implementations, each supporting at least one parameter set of NTRU and one of NTRU Prime. Our implementations are based on size-1440, size-1536, and size-1728 convolutions without algebraic assumptions on the target polynomial rings. We also propose several improvements to the NTT computation. Firstly, we introduce dedicated radix-(2, 3) butterflies combining Good–Thomas FFT and vector-radix FFT. In general, there are six dedicated radix-(2, 3) butterflies, and together they support implicit permutations. Secondly, for odd prime radices, we show that the multiplications for one output can be replaced with additions/subtractions. We demonstrate the idea for radix-3 and show how to extend it to any odd prime; this improvement also applies to radix-(2, 3) butterflies. Thirdly, we implement an incomplete version of Good–Thomas FFT to address potential code-size issues. For NTRU, our polynomial multiplications outperform the state of the art by 2.8%–10.3%. For NTRU Prime, our polynomial multiplications are slower than the state of the art; however, the state-of-the-art implementations exploit the specific structure of coefficient rings or polynomial moduli, while our NTT-based multiplications exploit neither and apply across different schemes. This reduces the engineering effort, including testing and verification.
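A plain radix-3 NTT butterfly over a small prime field illustrates the setting (P and OMEGA are chosen purely for illustration; note that the X_0 output needs no multiplications at all, which is the kind of saving the paper generalizes to any odd prime):

```python
P = 7       # small prime with 3 | P - 1 (illustrative; real NTTs use larger primes)
OMEGA = 2   # primitive cube root of unity mod 7, since 2**3 % 7 == 1

def radix3_butterfly(a, b, c):
    """Size-3 NTT butterfly: X_k = a + w^k * b + w^(2k) * c (mod P).
    X_0 = a + b + c involves no twiddle multiplications at all -- the kind
    of output that can be computed with additions/subtractions only."""
    w, w2 = OMEGA, OMEGA * OMEGA % P
    x0 = (a + b + c) % P
    x1 = (a + w * b + w2 * c) % P
    x2 = (a + w2 * b + w * c) % P
    return x0, x1, x2

def naive_dft3(a, b, c):
    """Reference size-3 DFT over GF(P) for checking the butterfly."""
    return tuple(sum(v * pow(OMEGA, k * i, P) for i, v in enumerate((a, b, c))) % P
                 for k in range(3))
```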

Computer engineering. Computer hardware, Information technology
DOAJ Open Access 2022
Refined Edge Detection Method Based on Semantic Information

HUANG Sheng, RAN Haoshan

Edge detection is the accurate extraction of visually significant edge pixels from an image to obtain its edge information. Traditional edge detection methods based on Fully Convolutional Networks (FCN) usually produce rough and fuzzy edge predictions. This paper proposes a refined edge detection method guided by semantic information. The learned image semantic information is transmitted to the edge detection subnetwork through the image segmentation subnetwork and is used to guide the edge detection subnetwork. A feature fusion module with an attention mechanism and residual structure is also introduced to generate fine image edges and enhance feature fusion at different scales. On this basis, the cost function of the image segmentation task is combined with the image edge detection task to define a new model cost function, which is further trained to improve the quality of edge detection. The experimental results on the BSDS500 dataset verify the effectiveness of the proposed method: the optimal dataset scale (ODS) and optimal image scale (OIS) scores attained by this method are 0.818 and 0.841, respectively. Compared with mainstream edge detection methods such as HED and RCF, the proposed method predicts finer edge images with improved robustness.

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2022
Summation Problem Revisited -- More Robust Computation

Vaclav Skala

Numerical data processing is a key task across many fields of computing. However, even the simple summation of values is not precise, due to the use of floating-point representation. This paper presents a practical summation algorithm convenient for medium and large data sets. The proposed algorithm is simple and easy to implement. Its computational complexity is O(N), in contrast to the Exact Sign Summation Algorithm (ESSA) approach with O(N^2) run-time complexity. The proposed algorithm is especially convenient for cases where exponents differ significantly and many small values are summed with larger ones.
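The paper's algorithm is not reproduced here, but the failure mode it targets, and one classic remedy, can be sketched with Neumaier's variant of compensated summation (shown purely as an illustration of the problem class, not as the paper's method):

```python
import math

def neumaier_sum(values):
    """Kahan-Babuska (Neumaier) compensated summation: accumulate the
    rounding error of each addition in c and fold it back at the end."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        t = s + v
        if abs(s) >= abs(v):
            c += (s - t) + v   # low-order digits of v were lost
        else:
            c += (v - t) + s   # low-order digits of s were lost
        s = t
    return s + c

# Tiny values mixed with huge ones of very different exponents -- exactly
# the case the paper singles out as problematic for naive summation.
data = [1e16, 1.0, -1e16] * 1000

naive = sum(data)              # every 1.0 is absorbed by 1e16 and lost
compensated = neumaier_sum(data)
exact = math.fsum(data)        # correctly rounded reference
```

Here naive summation returns 0.0 because each 1.0 vanishes against 1e16, while the compensated sum recovers the true total of 1000.0.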

en cs.DS, math.NA
arXiv Open Access 2022
MathPartner Computer Algebra

Gennadi Malaschonok

In this paper, we describe the general characteristics of the MathPartner computer algebra system (CAS) and its Mathpar programming language. MathPartner can be used for scientific and engineering calculations, as well as in high schools and universities. It allows one to carry out both simple calculations (acting as a scientific calculator) and complex calculations with large-scale mathematical objects. Mathpar is a procedural language; it supports a large number of elementary and special functions, as well as matrix and polynomial operators. The service allows one to plot graphs of functions and animate them. MathPartner also makes it possible to solve some symbolic computation problems on supercomputers with distributed memory. We highlight the main differences between MathPartner and other CASs and describe the Mathpar language along with the user service provided.

en cs.SC
arXiv Open Access 2022
Hardware/Software Co-Programmable Framework for Computational SSDs to Accelerate Deep Learning Service on Large-Scale Graphs

Miryeong Kwon, Donghyun Gouk, Sangwon Lee et al.

Graph neural networks (GNNs) process large-scale graphs consisting of a hundred billion edges. In contrast to traditional deep learning, these emerging GNNs operate on large sets of graph and embedding data kept on storage, which requires complex and irregular preprocessing. We propose a novel deep learning framework for large graphs, HolisticGNN, that provides an easy-to-use, near-storage inference infrastructure for fast, energy-efficient GNN processing. To achieve the best end-to-end latency and high energy efficiency, HolisticGNN allows users to implement various GNN algorithms and directly executes them where the actual data exist, in a holistic manner. It also enables RPC over PCIe, so that users can simply program GNNs through a graph semantic library without any knowledge of the underlying hardware or storage configurations. We fabricate HolisticGNN's hardware RTL and implement its software on an FPGA-based computational SSD (CSSD). Our empirical evaluations show that HolisticGNN's inference time outperforms GNN inference services using high-performance modern GPUs by 7.1x, while reducing energy consumption by 33.2x, on average.

en cs.AR, cs.LG

Page 21 of 425,112