Results for "Computer engineering. Computer hardware"

Showing 20 of ~8,224,782 results · from CrossRef, DOAJ, arXiv

arXiv Open Access 2026
Joint Hardware-Workload Co-Optimization for In-Memory Computing Accelerators

Olga Krestinskaya, Mohammed E. Fouda, Ahmed Eltawil et al.

Software-hardware co-design is essential for optimizing in-memory computing (IMC) hardware accelerators for neural networks. However, most existing optimization frameworks target a single workload, leading to highly specialized hardware designs that do not generalize well across models and applications. In contrast, practical deployment scenarios require a single IMC platform that can efficiently support multiple neural network workloads. This work presents a joint hardware-workload co-optimization framework based on an optimized evolutionary algorithm for designing generalized IMC accelerator architectures. By explicitly capturing cross-workload trade-offs rather than optimizing for a single model, the proposed approach significantly reduces the performance gap between workload-specific and generalized IMC designs. The framework is evaluated on both RRAM- and SRAM-based IMC architectures, demonstrating strong robustness and adaptability across diverse design scenarios. Compared to baseline methods, the optimized designs achieve energy-delay-area product (EDAP) reductions of up to 76.2% and 95.5% when optimizing across a small set (4 workloads) and a large set (9 workloads), respectively. The source code of the framework is available at https://github.com/OlgaKrestinskaya/JointHardwareWorkloadOptimizationIMC.

en cs.AR, cs.AI
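The EDAP figure of merit quoted in the abstract above is simply the product of energy, delay, and area, so a quoted reduction can be checked with a few lines of arithmetic. A minimal sketch (the numbers below are made-up placeholders, not values from the paper):

```python
def edap(energy_j: float, delay_s: float, area_mm2: float) -> float:
    """Energy-delay-area product: lower is better."""
    return energy_j * delay_s * area_mm2

def edap_reduction(baseline: float, optimized: float) -> float:
    """Fractional EDAP reduction of an optimized design over a baseline."""
    return 1.0 - optimized / baseline

# Placeholder numbers chosen for illustration only.
baseline = edap(2.0e-3, 5.0e-6, 12.0)    # J, s, mm^2
optimized = edap(1.0e-3, 2.5e-6, 11.4)
print(f"EDAP reduction: {edap_reduction(baseline, optimized):.1%}")
```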
DOAJ Open Access 2025
Effective Cybersecurity Risk Assessment Approach for Integrating in Process Safety Management

Masayuki Tanabe, Atsumi Miyake

Cyberattacks targeting the process industry have become increasingly prevalent in recent years. The ISA TR84.00.09 standard and the CCPS guidelines propose methodologies for conducting process risk assessments against cyberattacks on process facilities, such as attacks on the Basic Process Control System (BPCS) and the Safety Instrumented System (SIS), to ensure robust functional requirement management throughout the plant lifecycle. However, hazard identification and risk assessment techniques addressing process incidents triggered by cyberattacks remain largely unstandardized. Contemporary cybersecurity (CS) risk assessments predominantly focus on general Information Technology (IT) risks within business contexts. A notable contributing factor is the persistent misalignment between IT and Operational Technology (OT), including Process Safety (PS). OT professionals often regard CS as the responsibility of IT personnel, while IT teams typically lack familiarity with OT systems. Consequently, integrated IT-OT risk assessments are not widely implemented. This study explores an effective framework and methodology for conducting CS risk assessments specific to process incidents. The research utilizes a typical LNG plant model as the basis for a detailed CS risk assessment. The findings reveal several potential pathways for cyberattacks that could lead to major process incidents, underscoring the criticality of inherent safety measures and effective coordination between CS and PS disciplines. The CS risk assessment framework and procedural guidance detailed in this study are anticipated to significantly enhance the effectiveness of CS risk evaluations and the precise definition of functional requirements to mitigate cybersecurity risks.

Chemical engineering, Computer engineering. Computer hardware
arXiv Open Access 2025
A CMOS Probabilistic Computing Chip With In-situ Hardware-Aware Learning

Jinesh Jhonsa, William Whitehead, David McCarthy et al.

This paper demonstrates a probabilistic-bit, physics-inspired solver with 440 spins configured in a Chimera graph, occupying an area of 0.44 mm^2. Area efficiency is maximized through a current-mode implementation of the neuron update circuit, standard-cell design for analog blocks pitch-matched to digital blocks, and a shared power supply for both digital and analog components. Mismatches introduced into this approach by process variation are effectively mitigated using a hardware-aware contrastive divergence algorithm during training. We validate the chip's ability to perform probabilistic computing tasks such as modeling logic gates and full adders, as well as optimization tasks such as MaxCut, demonstrating its potential for AI and machine learning applications.

en cs.AR, cs.AI
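The spin-update rule that p-bit solvers like the chip above implement can be emulated in software. The sketch below uses the generic Gibbs-style p-bit update from the Ising-machine literature, not the paper's current-mode circuit, and the coupling values are invented for illustration:

```python
import math
import random

def pbit_sweep(state, J, h, beta, rng):
    """One asynchronous sweep over all spins (each spin is +1 or -1)."""
    n = len(state)
    for i in range(n):
        # Local input: weighted sum of the other spins plus a bias term.
        I_i = sum(J[i][j] * state[j] for j in range(n)) + h[i]
        # Stochastic update: tanh sets the probability of settling at +1.
        state[i] = 1 if rng.random() < (1 + math.tanh(beta * I_i)) / 2 else -1
    return state

# Two ferromagnetically coupled spins tend to align after a few sweeps.
rng = random.Random(0)
s = [1, -1]
J = [[0.0, 2.0], [2.0, 0.0]]
h = [0.0, 0.0]
for _ in range(100):
    pbit_sweep(s, J, h, beta=2.0, rng=rng)
print(s)
```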
DOAJ Open Access 2024
Design and Analysis of a Novel Sidewalk Following Visual Controller for an Autonomous Wheelchair

UGUR, E., KARA, T., ABDULHAFEZ, A. et al.

This paper presents a study of the sidewalk following problem for an autonomous wheelchair. The main goal is to propose a solution to the urban mobility problem of people with walking disabilities. The study offers an efficient control system design for this task. A linearized wheelchair model is constructed, and image-based visual servoing is introduced to evaluate the performance of tracking yellow tactile pavement on the sidewalk with optimal control. Reference trajectory sets for sidewalk following are created from a robust vanishing point estimated with the Hough Lines method. These reference paths are tested with two control methods: Linear Quadratic Regulator (LQR) control and Pole Placement (PP) control. Both methods are applied in simulation on the autonomous wheelchair model, and their efficacy for sidewalk following is discussed comparatively. Disturbance attenuation results of the given optimal control methods and the simulation outputs prove the efficacy of the model and the designed control systems. The LQR method shows better system response than the PP method.

Electrical engineering. Electronics. Nuclear engineering, Computer engineering. Computer hardware
DOAJ Open Access 2024
Faster Complete Addition Laws for Montgomery Curves

Reza Rezaeian Farashahi, Mojtaba Fadavi, Soheila Sabbaghian

An addition law for an elliptic curve is complete if it is defined for all possible pairs of input points on the curve. In Elliptic Curve Cryptography (ECC), a complete addition law provides a natural protection against side-channel attacks based on Simple Power Analysis (SPA). Montgomery curves are a specific family of elliptic curves that play a crucial role in ECC because of the well-known Montgomery ladder, particularly in the Elliptic Curve Diffie-Hellman Key Exchange (ECDHKE) protocol and the Elliptic Curve factorization Method (ECM). However, the complete addition law for Montgomery curves, as stated in the literature, has a computational cost of 14M + 2D, where M and D denote the costs of a field multiplication and a field multiplication by a constant, respectively. The lack of a competitive complete addition law has led implementers towards twisted Edwards curves, which offer a complete addition law at a lower cost of 8M + 1D for appropriately chosen curve constants. In this paper, we introduce extended Montgomery coordinates as a novel representation for points on Montgomery curves. This coordinate system enables us to define birational multiplication-free maps between extended twisted Edwards coordinates and extended Montgomery coordinates. Using these maps, we can transfer the complete addition laws from twisted Edwards curves to Montgomery curves without incurring additional multiplications or squarings. In addition, we employ a technique known as scaling to refine the addition laws for twisted Edwards curves, which results in i) complete addition laws with costs varying between 8M + 1D and 9M + 1D for a broader range of twisted Edwards curves, and ii) incomplete addition laws for twisted Edwards curves with a cost of 8M. Consequently, by leveraging our birational multiplication-free maps, we present complete addition laws for Montgomery curves with a cost of 8M + 1D.
This is a significant improvement to the complete addition law for Montgomery curves, reducing the computational cost by 6M + 1D. It makes Montgomery curves a more attractive option for applications where an efficient complete addition law is essential.

Computer engineering. Computer hardware, Information technology
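The twisted Edwards addition law whose cost the paper transfers is easy to state concretely. Below is a toy affine version over a tiny prime field; the paper works in projective/extended coordinates for efficiency, and the curve constants here are chosen only for illustration (completeness requires a to be a square and d a nonsquare in the field):

```python
P = 13  # tiny prime, illustration only

def inv(x):
    """Modular inverse via Fermat's little theorem."""
    return pow(x, P - 2, P)

def on_curve(pt, a, d):
    """Check a*x^2 + y^2 = 1 + d*x^2*y^2 (mod P)."""
    x, y = pt
    return (a * x * x + y * y - 1 - d * x * x * y * y) % P == 0

def ed_add(p1, p2, a, d):
    """Unified (complete) twisted Edwards addition in affine coordinates."""
    x1, y1 = p1
    x2, y2 = p2
    t = d * x1 * x2 * y1 * y2 % P
    x3 = (x1 * y2 + y1 * x2) * inv((1 + t) % P) % P
    y3 = (y1 * y2 - a * x1 * x2) * inv((1 - t) % P) % P
    return (x3, y3)

# a = 1 is a square and d = 2 a nonsquare mod 13, so the law is complete.
assert on_curve((1, 0), 1, 2)
assert ed_add((1, 0), (0, 1), 1, 2) == (1, 0)   # (0, 1) is the identity
```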
arXiv Open Access 2024
In-Memory Computing Architecture for Efficient Hardware Security

Hala Ajmi, Fakhreddine Zayer, Hamdi Belgacem

This paper presents an innovative approach utilizing in-memory computing (IMC) for the development and integration of the AES (Advanced Encryption Standard) cipher. Our research aims to enhance cybersecurity measures for a wide range of IoT applications, such as self-driving robots and other use contexts. A memristor (MR) design optimized for in-memory processing is introduced. Our work highlights the development of a 4-bit state memristor device tailored for a wide range of arithmetic functions in a hardware prototype of the AES system. Additionally, we propose a pipelined AES design aimed at harnessing extensive parallelism and ensuring compatibility with MR devices. This approach enhances hardware performance by managing larger amounts of data, accelerating computation, and meeting greater precision demands. Compared to traditional AES hardware, AES-IMC demonstrates an approximately 30% improvement in power with a comparable throughput rate. Compared with the latest AES-based NVM engines, AES-IMC achieves an impressive 62% improvement in throughput at similar power dissipation levels. The IMC-based design will protect against unintentional incidents involving unmanned devices, reducing the risks associated with hostile assaults such as hijacking and illegal control of robots. This helps to reduce the possible economic and financial losses caused by such incidents.

en cs.AR
arXiv Open Access 2024
Dynamic Neural Communication: Convergence of Computer Vision and Brain-Computer Interface

Ji-Ha Park, Seo-Hyun Lee, Soowon Kim et al.

Interpreting human neural signals to decode static speech intentions such as text or images and dynamic speech intentions such as audio or video shows great potential as an innovative communication tool. Human communication involves various features, such as articulatory movements, facial expressions, and internal speech, all of which are reflected in neural signals. However, most studies generate only short or fragmented outputs, and providing informative communication by leveraging the various features in neural signals remains challenging. In this study, we introduce a dynamic neural communication method that leverages current computer vision and brain-computer interface technologies. Our approach captures the user's intentions from neural signals and decodes visemes in short time steps to produce dynamic visual outputs. The results demonstrate the potential to rapidly capture and reconstruct lip movements during natural speech attempts from human neural signals, enabling dynamic neural communication through the convergence of computer vision and brain-computer interfaces.

en cs.AI
arXiv Open Access 2024
CiMNet: Towards Joint Optimization for DNN Architecture and Configuration for Compute-In-Memory Hardware

Souvik Kundu, Anthony Sarah, Vinay Joshi et al.

With the recent growth in demand for large-scale deep neural networks, compute-in-memory (CiM) has emerged as a prominent solution to alleviate the bandwidth and on-chip interconnect bottlenecks that constrain von Neumann architectures. However, the construction of CiM hardware poses a challenge, as any specific memory hierarchy, in terms of cache sizes and memory bandwidth at different interfaces, may not be ideally matched to a given neural network's attributes, such as tensor dimensions and arithmetic intensity, leading to suboptimal and under-performing systems. Despite the success of neural architecture search (NAS) techniques in yielding efficient sub-networks for a given hardware metric budget (e.g., DNN execution time or latency), they assume the hardware configuration to be frozen, often yielding sub-optimal sub-networks for that budget. In this paper, we present CiMNet, a framework that jointly searches for optimal sub-networks and hardware configurations for CiM architectures, creating a Pareto-optimal frontier of downstream task accuracy and execution metrics (e.g., latency). The proposed framework can comprehend the complex interplay between a sub-network's performance and the CiM hardware configuration choices, including bandwidth, processing element size, and memory size. Exhaustive experiments on different model architectures from both the CNN and Transformer families demonstrate the efficacy of CiMNet in finding co-optimized sub-networks and CiM hardware configurations. Specifically, for ImageNet classification accuracy similar to the baseline ViT-B, optimizing only the model architecture increases performance (i.e., reduces workload execution time) by 1.7x, while optimizing both the model architecture and the hardware configuration increases it by 3.1x.

en cs.AR, cs.AI
DOAJ Open Access 2023
Extended Analysis of Non-Isolated Bidirectional High Gain Converter

ANJANA, E., RAMAPRABHA, R.

This paper focuses on developing a non-isolated bidirectional high gain converter suited for EV charging. The converter has two bidirectional ports and functions in both buck and boost modes. It is shown to attain high voltage gain with high efficiency while using fewer components. To verify the stability of the bidirectional high gain converter, state space averaging is performed and the stability curves are plotted for both boost and buck operation. The buck and boost characteristics of the converter are observed through simulation in MATLAB for a 1.2 kW system, and the results are presented for analysis. Based on this analysis, the hardware of the bidirectional converter is developed, and the results are obtained and compared.

Electrical engineering. Electronics. Nuclear engineering, Computer engineering. Computer hardware
DOAJ Open Access 2023
Modelling of Carbon Capture Process for Coal-Fired Power Plants in Indonesia

Sanggono Adisasmito, Anggit Raksajati, Alfin Ali et al.

The high consumption of coal in the power generation sector results in high greenhouse gas (GHG) emissions in Indonesia. The Indonesian government still needs to reduce its GHG emissions to below 662 MtCO2e in order to meet the Intergovernmental Panel on Climate Change (IPCC) scenario. This condition encourages the government to develop a decarbonization strategy, as stated in the Long-term Strategy on Low Carbon and Climate Resilience 2050 document. The retrofitting potential of Indonesian coal power plants was evaluated. Several factors, such as the Levelized Cost of Electricity (LCoE), CO2 emission intensity prior to capture, energy penalty, and the presence of an installed flue gas desulfurizer (FGD), were used as determining parameters in selecting priority power plants to be retrofitted. The mass and energy balance of the CCS process was modelled using Aspen HYSYS V12. Based on the simulation and techno-economic calculation results, it can be concluded that the LCoE values of CCS-retrofitted coal-fired power plants are influenced by the plant's capacity and the existence of FGD units. The implementation of CCS technology through retrofitting in Indonesia should be prioritized for 1,000 MW ultra-supercritical power plants that already have existing seawater FGD technology. The increase in costs, together with a decrease in power production, results in an increase in LCoE values of up to USD 0.11/kWh for 1,000 MW power plants. This result is expected to serve as a consideration for the Indonesian government in mapping out a decarbonization strategy in the power generation sector.

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2023
<span style="font-variant: small-caps">FairCaipi</span>: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction

Louisa Heidrich, Emanuel Slany, Stephan Scheele et al.

The rise of machine-learning applications in domains with critical end-user impact has led to a growing concern about the fairness of learned models, with the goal of avoiding biases that negatively impact specific demographic groups. Most existing bias-mitigation strategies adapt the importance of data instances during pre-processing. Since fairness is a contextual concept, we advocate for an interactive machine-learning approach that enables users to provide iterative feedback for model adaptation. Specifically, we propose to adapt the explanatory interactive machine-learning approach <span style="font-variant: small-caps;">Caipi</span> for fair machine learning. <span style="font-variant: small-caps;">FairCaipi</span> incorporates human feedback in the loop on predictions and explanations to improve the fairness of the model. Experimental results demonstrate that <span style="font-variant: small-caps;">FairCaipi</span> outperforms a state-of-the-art pre-processing bias mitigation strategy in terms of the fairness and the predictive performance of the resulting machine-learning model. We show that <span style="font-variant: small-caps;">FairCaipi</span> can both uncover and reduce bias in machine-learning models and allows us to detect human bias.

Computer engineering. Computer hardware
arXiv Open Access 2023
Structure and computability of preimages in the Game of Life

Ville Salo, Ilkka Törmä

Conway's Game of Life is a two-dimensional cellular automaton. As a dynamical system, it is well known to be computationally universal, i.e., capable of simulating an arbitrary Turing machine. We show that, in a sense, taking a single backwards step of the Game of Life is a computationally universal process, by constructing patterns whose preimage computation encodes an arbitrary circuit-satisfaction problem or, equivalently, any tiling problem. As a corollary, we obtain for example that the set of orphans is coNP-complete, exhibit a $6210 \times 37800$-periodic configuration whose preimage is nonempty but contains no periodic configurations, and prove that the existence of a preimage for a periodic point is undecidable. Our constructions were obtained by a combination of computer searches and manual design.

en cs.FL, cs.DM
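For contrast with the hard backward direction studied in the paper, the forward step of Life is a simple local rule. A minimal sketch over a sparse set of live cells (the helper name is ours):

```python
from collections import Counter

def life_step(live):
    """One Game of Life step; `live` is a set of (x, y) live cells."""
    # Count how many live neighbours every cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A blinker oscillates with period 2, so stepping twice is the identity.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```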
arXiv Open Access 2023
Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System

Hao Lv, Bing Li, Lei Zhang et al.

RRAM-based neuromorphic computing systems (NCS) have attracted explosive interest for data processing capability and energy efficiency superior to those of traditional architectures, and are thus widely used in many data-centric applications. The reliability and security of the NCS therefore become an essential problem. In this paper, we systematically investigate the adversarial threats to the RRAM-based NCS and observe that RRAM hardware features can be leveraged to strengthen the attack effect, which has not been given sufficient attention by previous algorithmic attack methods. Thus, we propose two types of hardware-aware attack methods for different attack scenarios and objectives. The first is an adversarial attack, VADER, which perturbs the input samples to mislead the prediction of neural networks. The second is a fault injection attack, EFI, which perturbs the network parameter space so that a specified sample is classified to a target label while the prediction accuracy on other samples is maintained. Both attack methods leverage RRAM properties to improve performance compared with conventional attack methods. Experimental results show that our hardware-aware attack methods can achieve nearly 100% attack success rates at extremely low operational cost, while maintaining attack stealthiness.

en cs.CR, cs.AI
arXiv Open Access 2023
Quantum Computing and Visualization: A Disruptive Technological Change Ahead

E. Wes Bethel, Mercy G. Amankwah, Jan Balewski et al.

The focus of this Visualization Viewpoints article is to provide some background on Quantum Computing (QC), to explore ideas related to how visualization helps in understanding QC, and to examine how QC might be useful for visualization as both technologies grow and mature in the future. In a quickly evolving technology landscape, QC is emerging as a promising pathway to overcome the growth limits of classical computing. In some cases, QC platforms offer the potential to vastly outperform the familiar classical computer, either by solving problems more quickly or by solving problems that may be intractable on any known classical platform. As further performance gains for classical computing platforms are limited by diminishing Moore's Law scaling, QC platforms might be viewed as a potential successor to the current field of exascale-class platforms. While present-day QC hardware platforms are still limited in scale, the field of quantum computing is robust and rapidly advancing in terms of hardware capabilities, software environments for developing quantum algorithms, and educational programs for training the next generation of scientists and engineers. After a brief introduction to QC concepts, the focus of this article is to explore the interplay between the fields of visualization and QC. First, visualization has played a role in QC by providing the means to show representations of the quantum state of single qubits in superposition states and multiple qubits in entangled states. Second, there are a number of ways in which the field of visual data exploration and analysis may potentially benefit from this disruptive new technology, though there are challenges going forward.

en quant-ph, cs.ET
arXiv Open Access 2023
Stella Nera: A Differentiable Maddness-Based Hardware Accelerator for Efficient Approximate Matrix Multiplication

Jannis Schönleber, Lukas Cavigelli, Matteo Perotti et al.

Artificial intelligence has surged in recent years, with advancements in machine learning rapidly impacting nearly every area of life. However, the growing complexity of these models has far outpaced advancements in available hardware accelerators, leading to significant computational and energy demands, primarily due to matrix multiplications, which dominate the compute workload. Maddness (i.e., Multiply-ADDitioN-lESS) presents a hash-based version of product quantization, which renders matrix multiplications into lookups and additions, eliminating the need for multipliers entirely. We present Stella Nera, the first Maddness-based accelerator achieving an energy efficiency of 161 TOp/s/W@0.55V, 25x better than conventional MatMul accelerators due to its small components and reduced computational complexity. We further enhance Maddness with a differentiable approximation, allowing for gradient-based fine-tuning and achieving an end-to-end performance of 92.5% Top-1 accuracy on CIFAR-10.

en cs.AR, cs.CV
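The lookup-and-add idea behind Maddness-style approximate matmul can be sketched with plain product quantization: rows of A are snapped to prototype ids, and prototype-times-B products are precomputed into tables, so inference needs only indexing and additions. This is the generic PQ scheme with a crude quantizer, not the paper's learned hash function or hardware datapath:

```python
import numpy as np

def pq_matmul(A, B, n_proto, seed=0):
    """Approximate A @ B by quantizing rows of A to prototypes."""
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    # Crude codebook: sample rows of A as prototypes (k-means in practice).
    protos = A[rng.choice(n, size=n_proto, replace=False)]
    # Encode: nearest-prototype id per row (the "hash" step in Maddness).
    dists = ((A[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    ids = dists.argmin(axis=1)
    # Precompute lookup tables: every prototype times every column of B.
    tables = protos @ B
    # Inference: a pure table lookup per row, no multiplications.
    return tables[ids]

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.2]])
B = np.array([[2.0], [3.0]])
approx = pq_matmul(A, B, n_proto=3)
```

With as many prototypes as distinct rows, every row is its own nearest prototype and the lookup reproduces A @ B exactly; the interesting regime is n_proto much smaller than the number of rows, trading accuracy for multiplier-free inference.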
DOAJ Open Access 2022
Intelligent Monitoring Model for Aggregated Infection Risk Against the Background of COVID-19 Epidemic

CHUN Yutong, HAN Feiteng, HE Mingke

The Corona Virus Disease 2019 (COVID-19) epidemic is a serious threat to people's lives. Supervision of the density of clustered people and the wearing of masks is key to controlling the virus. Public places are characterized by a dense flow of people and high mobility. Manual monitoring can easily increase the risk of infection, and existing mask detection algorithms based on deep learning suffer from the limitation of having a single function and being applicable to only a single type of scene; as such, they cannot achieve multi-category detection across multiple scenes. Furthermore, their accuracy needs to be improved. The Cascade-Attention R-CNN target detection algorithm is proposed for realizing the automatic detection of aggregations in areas, pedestrians, and face masks. To address the problem that the target scale changes significantly during the task, the high-precision two-stage Cascade R-CNN target detection algorithm is selected as the basic detection framework. By designing multiple cascaded candidate classification-regression networks and adding a spatial attention mechanism, we highlight the important features of the candidate regions and suppress noise features to improve detection accuracy. On this basis, an intelligent monitoring model for aggregated infection risk is constructed, and the infection risk level is determined by combining the outputs of the proposed algorithm. The experimental results show that the model has high accuracy and robustness for multi-category target images with different scenes and perspectives. The average accuracy of the Cascade-Attention R-CNN algorithm reaches 89.4%, which is 2.6 percentage points higher than that of the original Cascade R-CNN algorithm, and 10.1 and 8.4 percentage points higher than those of the classic two-stage target detection algorithm Faster R-CNN and the single-stage target detection framework RetinaNet, respectively.

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2022
Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing

Mohammad Malekzadeh, Deniz Gunduz

Privacy-preserving inference in edge computing paradigms encourages the users of machine-learning services to locally run a model on their private input and only share the model's outputs for a target task with the server. We study how a vicious server can reconstruct the input data by observing only the model's outputs, while keeping the target accuracy very close to that of an honest server, by jointly training a target model (to run at the user's side) and an attack model for data reconstruction (to use secretly at the server's side). We present a new measure to assess the inference-time reconstruction risk. Evaluations on six benchmark datasets show that the model's input can be approximately reconstructed from the outputs of a single inference. We propose a primary defense mechanism to distinguish vicious from honest classifiers at inference time. By studying such a risk associated with emerging ML services, our work has implications for enhancing privacy in edge computing. We discuss open challenges and directions for future studies and release our code as a benchmark for the community at https://github.com/mmalekzadeh/vicious-classifiers.

en cs.LG, cs.CR
arXiv Open Access 2022
Text and Team: What Article Metadata Characteristics Drive Citations in Software Engineering?

Lorenz Graf-Vlachy, Daniel Graziotin, Stefan Wagner

Context: Citations are a key measure of scientific performance in most fields, including software engineering. However, there is limited research that studies which characteristics of articles' metadata (title, abstract, keywords, and author list) are driving citations in this field. Objective: In this study, we propose a simple theoretical model for how citations come to be with respect to article metadata, we hypothesize theoretical linkages between metadata characteristics and citations of articles, and we empirically test these hypotheses. Method: We use multiple regression analyses to examine a data set comprising the titles, abstracts, keywords, and authors of 16,131 software engineering articles published between 1990 and 2020 in 20 highly influential software engineering venues. Results: We find that number of authors, number of keywords, number of question marks and dividers in the title, number of acronyms, abstract length, abstract propositional idea density, and corresponding authors in the core Anglosphere are significantly related to citations. Conclusion: Various characteristics of articles' metadata are linked to the frequency with which the corresponding articles are cited. These results partially confirm and partially go counter to prior findings in software engineering and other disciplines.

DOAJ Open Access 2021
An Improved Real-Time Semi-Global Stereo Matching Algorithm and Its Hardware Implementation

ZHAO Chenyuan, LI Wenxin, ZHANG Qingxi

When applied to real-time stereo matching systems based on Field Programmable Gate Arrays (FPGAs), the Census Transform (CT) algorithm has a high false matching rate in specific areas. In order to improve the matching accuracy, a real-time Semi-Global Matching (SGM) algorithm with a highly parallel pipeline structure is proposed. The algorithm takes the combination of an improved Tanimoto distance and the weighted 4-direction Absolute Differences of Gradient (ADG) as the initial matching cost. In the cost aggregation stage, a 4-path SGM algorithm is used. In the disparity computation stage, the winner-takes-all strategy is chosen. In the parallax correction stage, a threshold detection algorithm is used to replace the traditional left-right check algorithm. Experimental results show that the proposed algorithm can effectively improve the discrimination between weak-texture and edge regions, and reduce the dependence on the center point as well as resource consumption. The average mismatch rate of the algorithm on the Middlebury platform is 7.52%, and its matching rate on the Xilinx Zynq-7000 platform is 98 frames/s.

Computer engineering. Computer hardware, Computer software
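For readers unfamiliar with the baseline the paper improves on, a minimal Census Transform and its Hamming-distance matching cost look like this (generic 3x3 CT, not the paper's Tanimoto/ADG cost; the function names are ours):

```python
import numpy as np

def census_3x3(img):
    """3x3 Census Transform; returns an 8-bit code per pixel (borders 0)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy, dx) == (0, 0):
                        continue
                    # Set the bit where the neighbour is darker than the centre.
                    bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            codes[y, x] = bits
    return codes

def census_cost(a, b):
    """Matching cost between two census codes: their Hamming distance."""
    return bin(int(a) ^ int(b)).count("1")
```

In a stereo pipeline, `census_cost` is evaluated between a left-image pixel's code and right-image codes at candidate disparities; the aggregation and winner-takes-all stages described above then operate on these costs.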
arXiv Open Access 2021
NAX: Co-Designing Neural Network and Hardware Architecture for Memristive Xbar based Computing Systems

Shubham Negi, Indranil Chakraborty, Aayush Ankit et al.

In-Memory Computing (IMC) hardware using Memristive Crossbar Arrays (MCAs) is gaining popularity for accelerating Deep Neural Networks (DNNs), since it alleviates the "memory wall" problem associated with the von Neumann architecture. The hardware efficiency (energy, latency, and area) as well as the application accuracy (considering device and circuit non-idealities) of DNNs mapped to such hardware are co-dependent on network parameters, such as kernel size and depth, and on hardware architecture parameters, such as crossbar size. However, co-optimization of both network and hardware parameters presents a challenging search space comprising different kernel sizes mapped to varying crossbar sizes. To that effect, we propose NAX -- an efficient neural architecture search engine that co-designs the neural network and the IMC-based hardware architecture. NAX explores the aforementioned search space to determine the kernel size and corresponding crossbar size for each DNN layer to achieve optimal tradeoffs between hardware efficiency and application accuracy. Our results show that the networks found by NAX have heterogeneous crossbar sizes across layers and achieve optimal hardware efficiency and accuracy considering the non-idealities in crossbars. On CIFAR-10 and Tiny ImageNet, our models achieve 0.8% and 0.2% higher accuracy, and 17% and 4% lower EDAP (energy-delay-area product), compared to baseline ResNet-20 and ResNet-18 models, respectively.

en cs.ET, cs.AI

Page 13 of 411,240