Habtamu Demeke Mihertie, Zhengqiang Wang
Results for "Computer engineering. Computer hardware"
Showing 20 of ~8,510,068 results · from CrossRef, DOAJ, Semantic Scholar, arXiv
Brylle Reovince Rosales, Daniela Bag-ao, John Ogad et al.
This study utilized an unsteady multiphase Computational Fluid Dynamics model in ANSYS FLUENT 2024 R2 to investigate the impact of semicircular baffles on the hydrodynamics and NOx removal efficiency in a fluidized bed reactor. Both standard and fast Selective Catalytic Reduction mechanisms were considered in the reactor employing CuO/γ-Al2O3 as the catalyst. The baffle-free reactor (FFB) model was validated against experimental data through mesh optimization and kinetic parameter calibration. Subsequently, systematic simulations of 27 baffled configurations (single, double, and triple baffles) demonstrated significant improvements in NO reduction. At 300 °C, configurations DB3, TB6, and TB14 generated the highest conversions of 93.20 %, 93.33 %, and 93.12 %, respectively. Notably, at 250 °C, these configurations maintained high efficiencies, suggesting that the addition of baffles could replicate the performance of the FFB even at a lower temperature. The study revealed that the semicircular baffles increased the solid holdup, radial gas velocity, and granular temperature, thereby enhancing gas–solid interactions.
Jui-Sheng Chou, Nguyen-Ngan-Hanh Pham
Abstract Effective risk management is crucial in the construction industry, which has a substantial economic impact but is vulnerable to high financial risks due to volatile material costs and complex project-based financial structures. This study presents a new hybrid model to improve the prediction of financial distress for Taiwanese-listed construction companies. The research compares four boosting-based ensemble learning models, advanced deep learning models, and improved ensemble models that incorporate a novel approach using the Multi-Criteria Decision-Making (MCDM) technique, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to enhance feature selection. Experimental results show that while TOPSIS-eXtreme Gradient Boosting (TOPSIS-XGBoost) is highly effective at managing imbalanced financial datasets, Light Gradient Boosting Machine (LightGBM) performs better in balanced environments. Both models exhibit substantial performance gains when integrated with the Forensic-Based Investigation (FBI) optimization algorithm, resulting in the optimized hybrids—FBI-TOPSIS-XGBoost and FBI-LightGBM—which achieve marked improvements in predictive accuracy. These optimized models consistently outperform benchmark approaches, including the Altman Z-score, Zmijewski X-score, Logistic Regression, and Random Forest, across multiple evaluation metrics. To enhance transparency and interpretability, a global SHapley Additive exPlanations (SHAP) analysis was conducted, revealing that profitability and per-share index indicators are the primary determinants driving model predictions. Additionally, an expert system interface has been developed to enhance the practical usability of these models. These findings strengthen the methodological foundation for predicting financial distress and provide stakeholders with valuable tools for mitigating risk in Taiwan’s construction industry.
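The TOPSIS step the hybrid models use for feature selection ranks candidates by closeness to an ideal solution. A minimal sketch of that scoring follows; the function name and toy criteria are illustrative assumptions, not the paper's implementation:

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives (rows) by TOPSIS closeness to the ideal solution.

    matrix:  list of rows, one per alternative; columns are criteria.
    weights: criterion weights (summing to 1).
    benefit: True where higher values are better, False for cost criteria.
    """
    n_rows, n_cols = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cols)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_cols)]
         for i in range(n_rows)]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness in [0, 1]
    return scores
```

An alternative dominated on every criterion scores 0, while one matching the ideal point scores 1; ranking features by these scores is one way to drive the selection step described above.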
Martin Obaidi, Marc Herrmann, Elisa Schmid et al.
Sentiment analysis is an essential technique for investigating the emotional climate within developer teams, contributing to both team productivity and project success. Existing sentiment analysis tools in software engineering primarily rely on English or non-German gold-standard datasets. To address this gap, our work introduces a German dataset of 5,949 unique developer statements, extracted from the German developer forum Android-Hilfe.de. Each statement was annotated with one of six basic emotions, based on the emotion model by Shaver et al., by four German-speaking computer science students. Evaluation of the annotation process showed high interrater agreement and reliability. These results indicate that the dataset is sufficiently valid and robust to support sentiment analysis in the German-speaking software engineering community. Evaluation with existing German sentiment analysis tools confirms the lack of domain-specific solutions for software engineering. We also discuss approaches to optimize annotation and present further use cases for the dataset.
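The reported interrater agreement across the four annotators can be quantified with a chance-corrected statistic such as Fleiss' kappa; the abstract does not name the exact measure used, so this is a hedged stdlib sketch:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for multi-rater agreement.

    ratings: rows = items, columns = categories; each entry is the
    number of raters who assigned that category to that item.
    Assumes every item received the same number of ratings.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Mean observed agreement per item.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings) / n_items
    # Chance agreement from the category marginals.
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect unanimity yields kappa = 1, while agreement at chance level yields 0 or below, which is what makes the statistic a stronger validity signal than raw percent agreement.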
Grigore Mihai Timis, Alexandru Valachi
This paper presents an analysis of a sequentially triggered "Time Bomb" hardware Trojan (HT). Major security concerns have arisen because malicious modification of hardware during design or IC fabrication can lead to altered functional behavior, potentially with disastrous consequences in safety-critical applications. Due to the stealthy nature of hardware Trojans, conventional design-time verification and post-manufacturing testing cannot be readily extended to detect them. There is a large number of possible instances and operating modes for hardware Trojans in a digital system. Since hardware Trojan insertion can modify the functionality of the digital integrated circuit (IC), alter its behavior, or generate a denial of service (DoS), HT threats should be analyzed with maximum importance throughout the entire lifecycle of the IC.
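A counter-based "time bomb" trigger of the kind analyzed above can be sketched behaviorally. This Python model is an illustrative assumption (class name, threshold, and XOR payload are not from the paper); it shows why functional tests that never reach the trigger count observe a perfectly correct circuit:

```python
class TimeBombTrojan:
    """Behavioral sketch of a sequential 'time bomb' hardware Trojan:
    a hidden counter advances on a rare internal event, and the payload
    activates only once a threshold is reached. Until then the circuit
    is functionally identical to the Trojan-free design."""

    def __init__(self, threshold=1000):
        self.threshold = threshold
        self.count = 0

    def clock(self, rare_event: bool, data: int) -> int:
        """One clock cycle: advance the hidden counter, then either
        pass data through (dormant) or corrupt it (triggered)."""
        if rare_event:
            self.count += 1
        if self.count >= self.threshold:
            return data ^ 0x1  # payload: flip the output's low bit
        return data            # dormant: behaves correctly
```

Because the rare event may never occur during bounded test sequences, coverage-driven verification can pass while the dormant payload survives into deployment, which is the detection gap the abstract describes.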
Rui Gao, Wenliang Zhang
Zhicheng Liu, Chen Chen, John Hooker
Various data visualization applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.
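A minimal rendering of MSC's two parts, assuming a hypothetical Component type and a single scene-manipulation operation (neither is the paper's actual API), might look like:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Component:
    """One semantic component in a visualization scene tree."""
    role: str                    # e.g. "scene", "mark", "axis", "legend"
    encoding: Dict[str, str]     # data field -> visual channel mapping
    children: List["Component"] = field(default_factory=list)

def add_component(scene: Component, child: Component) -> Component:
    """One scene operation: attach a component to the scene tree."""
    scene.children.append(child)
    return scene
```

A unified structure like this is what lets the same scene object serve authoring (build the tree), deconstruction (walk the tree), and animation (mutate the tree over time).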
E. Praveen Kumar, S. Priyanka
Jie Song, Haifei Fu, Tianzhe Jiao et al.
Abstract This paper presents an interesting case study on Legacy Data Integration (LDI for short) for a Regional Cloud Arbitration Court. Due to their inconsistent structure and presentation, legacy arbitration cases can hardly be integrated into the Cloud Court unless processed manually. In this study, we propose an AI-enabled LDI method to replace the costly manual approach and ensure privacy protection during the process. We trained AI models to take over tasks such as reading and understanding legacy cases, removing privacy information, composing new case records, and inputting them through the system interfaces. Our approach employs Optical Character Recognition (OCR), text classification, and Named Entity Recognition (NER) to transform legacy data into the system format. We applied our method to a Cloud Arbitration Court in Liaoning Province, China, and achieved a privacy filtering effect comparable to manual work while retaining the maximum amount of information. Our method demonstrated effectiveness similar to that of manual LDI, but with greater efficiency, saving 90% of the workforce and achieving a 60%-70% information extraction rate compared to manual work. With the increasing informatization and intelligentization of judgment and arbitration, many courts are adopting ABC technologies, namely Artificial intelligence, Big data, and Cloud computing, to build their court systems. Our method provides a practical reference for integrating legal data into such systems.
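The privacy-removal step can be pictured as typed redaction of recognized entities. The paper trains NER models for this; the regex patterns below are a simplified, hypothetical stand-in for such a model, just to show the transform's shape:

```python
import re

# Hypothetical patterns standing in for a trained NER tagger: a real
# system would label PERSON/ID/PHONE spans with a learned model.
PATTERNS = {
    "ID": re.compile(r"\b\d{15,18}[\dXx]?\b"),  # ID-card-like numbers
    "PHONE": re.compile(r"\b1\d{10}\b"),        # 11-digit mobile numbers
}

def redact(text: str) -> str:
    """Replace privacy-bearing spans with typed placeholders so the
    case record keeps its structure while losing identifying data."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text
```

Keeping a typed placeholder rather than deleting the span is what preserves "the maximum amount of information": downstream steps still know an ID or phone number occurred there.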
Aakash Ahmad, Muhammad Waseem, Peng Liang et al.
Quantum systems have started to emerge as a disruptive technology and enabling platforms - exploiting the principles of quantum mechanics - to achieve quantum supremacy in computing. Academic research, industrial projects (e.g., Amazon Braket), and consortiums like 'Quantum Flagship' are striving to develop practically capable and commercially viable quantum computing (QC) systems and technologies. Quantum Computing as a Service (QCaaS) is viewed as a solution attuned to the philosophy of service-orientation that can offer QC resources and platforms, as utility computing, to individuals and organisations who do not own quantum computers. To understand the quantum service development life cycle and pinpoint emerging trends, we used evidence-based software engineering approach to conduct a systematic mapping study (SMS) of research that enables or enhances QCaaS. The SMS process retrieved a total of 55 studies, and based on their qualitative assessment we selected 9 of them to investigate (i) the functional aspects, design models, patterns, programming languages, deployment platforms, and (ii) trends of emerging research on QCaaS. The results indicate three modelling notations and a catalogue of five design patterns to architect QCaaS, whereas Python (native code or frameworks) and Amazon Braket are the predominant solutions to implement and deploy QCaaS solutions. From the quantum software engineering (QSE) perspective, this SMS provides empirically grounded findings that could help derive processes, patterns, and reference architectures to engineer software services for QC.
Louis Andreoli, Stéphane Chrétien, Xavier Porte et al.
Hardware implementations of neural networks are an essential step toward next-generation, efficient and powerful artificial intelligence solutions. Besides the realization of a parallel, efficient and scalable hardware architecture, optimizing the system's extremely large parameter space with sampling-efficient approaches is essential. Here, we analytically derive the scaling laws for highly efficient Coordinate Descent applied to optimizing the readout layer of a randomly and recurrently connected neural network, a reservoir. We demonstrate that the convergence is exponential and scales linearly with the network's number of neurons. Our results perfectly reproduce the convergence and scaling of a large-scale photonic reservoir implemented in a proof-of-concept experiment. Our work therefore provides a solid foundation for such optimization in hardware networks and identifies promising future directions for improving convergence speed during learning by leveraging measures of a neural network's amplitude statistics and the weight-update rule.
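Coordinate Descent on a linear readout, the scheme whose scaling the abstract analyzes, updates one weight at a time, which suits hardware where only a single parameter can be changed per measurement. A sketch with exact per-coordinate updates (function name and dense-list representation are illustrative):

```python
def coordinate_descent_readout(X, y, n_sweeps=200):
    """Train readout weights w minimizing ||Xw - y||^2 by exact
    single-coordinate updates, cycling over coordinates.

    X: list of samples (rows of reservoir states), y: list of targets.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    resid = list(y)  # invariant: resid = y - Xw
    for _ in range(n_sweeps):
        for j in range(d):
            col = [X[i][j] for i in range(n)]
            denom = sum(c * c for c in col)
            if denom == 0:
                continue  # dead neuron: nothing to update
            # Exact minimizer over coordinate j with the others fixed.
            delta = sum(c * r for c, r in zip(col, resid)) / denom
            w[j] += delta
            resid = [r - delta * c for r, c in zip(resid, col)]
    return w
```

For this convex quadratic objective, cyclic exact updates converge linearly (i.e., the error decays exponentially in the number of sweeps), consistent with the exponential convergence the paper derives.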
Lisvan Guevara Trujillo, Leandro Zambrano Méndez, Wenny Hojas Mazo et al.
A radar display is an electronic device used to present a continuous graphical image that makes it easy to understand the relative positions of detected objects. Modern displays build this representation from digital signals. Most of the available tracking-radar displays, however, are based on analog technology. They are heavy consumers of electrical power, have large volume and weight, their parts are obsolete, and spares cannot be acquired on the international market. These displays also require long maintenance periods because they exhibit instability during operation. The objective of this article is to design and implement a digital display for radar signal information that meets the update period of a tracking radar. The proposed solution employs the Raspberry Pi 4 single-board computer, and the software was developed in the Qt Creator cross-platform development environment. In addition, parallel computing was used to perform the reception, processing, and representation of the data within the update period of the tracking radar. This reduced the software's execution time and met the radar's update period, unlike the sequential version, which failed the timing constraint. The result is a technological replacement for the analog display system with low power consumption, cost, and size.
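The receive-process-display concurrency described above can be pictured as a staged pipeline. This thread-and-queue toy (the stage functions are placeholders, not the actual radar code) illustrates how stages overlap so each echo meets the update deadline:

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One pipeline stage: consume items, transform them, pass them on.
    A None sentinel shuts the stage down and propagates downstream."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        outbox.put(fn(item))

def run_pipeline(samples):
    """Run two concurrent stages (stand-ins for 'process' and 'render')
    over a stream of received samples, preserving order via queues."""
    q1, q2, out = queue.Queue(), queue.Queue(), queue.Queue()
    t1 = threading.Thread(target=stage, args=(lambda s: s * 2, q1, q2))
    t2 = threading.Thread(target=stage, args=(lambda s: s + 1, q2, out))
    t1.start(); t2.start()
    for s in samples:          # reception feeds the first queue
        q1.put(s)
    q1.put(None)
    t1.join(); t2.join()
    results = []
    while True:
        item = out.get()
        if item is None:
            break
        results.append(item)
    return results
```

The design point is that while one sample is being rendered, the next is already being processed, which is how the parallel version meets a period the sequential version missed.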
Ali Hamid Farea, Kerem Küçük
Once hardware becomes "intelligent", it is vulnerable to threats. IoT ecosystems are therefore susceptible to a variety of attacks and are considered challenging to secure due to their heterogeneity and dynamic nature. In this study, we propose a method for detecting IoT attacks based on ML approaches that produce the final detection decision. We implemented three sample attacks in the IoT via Contiki OS to generate a real dataset of IoT-based features, containing a mix of data from malicious and normal nodes in the IoT network, to be utilized in the ML-based models. The multiclass random decision forest model achieved 98.9% overall accuracy in detecting IoT attacks on this novel dataset, compared to the decision tree jungle, decision forest tree regression, and boosted decision tree regression, which achieved 87.7%, 93.2%, and 87.1%, respectively. Thus, the decision-tree-based approach efficiently manipulates and analyzes the KoÜ-6LoWPAN-IoT dataset, generated via the Cooja simulator, to detect inconsistent behavior and classify malicious activities.
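Decision-forest models are built from threshold splits on traffic features. As a hedged illustration (not the study's actual toolchain), fitting a single stump on a toy "packets per second" feature shows the basic split such trees learn:

```python
def fit_stump(x, y):
    """Fit a one-feature decision stump: predict 1 ('malicious') when
    the feature exceeds a threshold t, 0 ('normal') otherwise.

    x: feature values, y: 0/1 labels. Returns (t, training accuracy)
    for the threshold that maximizes accuracy.
    """
    best_t, best_acc = None, 0.0
    for t in sorted(set(x)):
        preds = [1 if v > t else 0 for v in x]
        acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

A forest repeats this kind of split over many randomly sampled features and trees, then takes a majority vote, which is what lets it separate malicious from normal node behavior in the simulated traces.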
Weina Wang
In order to improve the effect of future environmental landscape design, this study combines artificial intelligence technology and digital space technology to construct an environmental landscape design system. Moreover, this study uses polygons to model external landscape plants rather than modeling the micro-element structure of landscape plants and re-polygonizing it, which simplifies the plant models and improves computational efficiency. In addition, this study applies collision detection to the growth of landscape plants to make it more realistic and efficient, and analyzes the digital construction of landscape plants. Finally, after constructing the intelligent system, its effect is verified. Data analysis shows that the proposed environmental landscape design system based on artificial intelligence and digital space technology expresses digital spatial structure well and achieves a good design effect.
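Collision detection during simulated plant growth is commonly reduced to cheap bounding-volume tests before any exact geometry check. A minimal sphere-sphere overlap test (a generic sketch, not the study's algorithm) looks like:

```python
def spheres_collide(c1, r1, c2, r2):
    """Bounding-sphere overlap test: True when the distance between
    centers is at most the sum of the radii. Comparing squared
    distances avoids a square root per test."""
    dx = c1[0] - c2[0]
    dy = c1[1] - c2[1]
    dz = c1[2] - c2[2]
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2
```

Running this test between a growing branch's bounding sphere and nearby geometry is enough to reject most non-colliding pairs quickly, which is where the efficiency gain comes from.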
Luca Bertaccini, Gianna Paulin, Tim Fischer et al.
Low-precision formats have recently driven major breakthroughs in neural network (NN) training and inference by reducing the memory footprint of the NN models and improving the energy efficiency of the underlying hardware architectures. Narrow integer data types have been vastly investigated for NN inference and have successfully been pushed to the extreme of ternary and binary representations. In contrast, most training-oriented platforms use at least 16-bit floating-point (FP) formats. Lower-precision data types such as 8-bit FP formats and mixed-precision techniques have only recently been explored in hardware implementations. We present MiniFloat-NN, a RISC-V instruction set architecture extension for low-precision NN training, providing support for two 8-bit and two 16-bit FP formats and expanding operations. The extension includes sum-of-dot-product instructions that accumulate the result in a larger format and three-term additions in two variations: expanding and non-expanding. We implement an ExSdotp unit to efficiently support in hardware both instruction types. The fused nature of the ExSdotp module prevents precision losses generated by the non-associativity of two consecutive FP additions while saving around 30% of the area and critical path compared to a cascade of two expanding fused multiply-add units. We replicate the ExSdotp module in a SIMD wrapper and integrate it into an open-source floating-point unit, which, coupled to an open-source RISC-V core, lays the foundation for future scalable architectures targeting low-precision and mixed-precision NN training. A cluster containing eight extended cores sharing a scratchpad memory, implemented in 12 nm FinFET technology, achieves up to 575 GFLOPS/W when computing FP8-to-FP16 GEMMs at 0.8 V, 1.26 GHz.
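The precision argument for the fused ExSdotp unit can be illustrated by emulating rounding to a reduced significand. The bit widths below are illustrative simplifications, not the exact FP8/FP16 encodings (exponent range and subnormals are ignored):

```python
import math

def round_to_bits(x, mant_bits):
    """Round x to mant_bits significand bits after the leading 1
    (a simplified floating-point model)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (mant_bits - e)
    return round(x * scale) / scale

def exsdotp(a0, b0, a1, b1, acc, acc_bits=10):
    """Expanding sum-of-dot-product: the two narrow-format products and
    the three-term addition are fused, rounding only once into the
    wide accumulator -- the single-rounding behavior of the unit."""
    return round_to_bits(acc + a0 * b0 + a1 * b1, acc_bits)

def cascaded(a0, b0, a1, b1, acc, acc_bits=10):
    """Two chained expanding FMAs: each addition rounds separately, so
    small products can be swallowed by the accumulator one at a time."""
    t = round_to_bits(acc + a0 * b0, acc_bits)
    return round_to_bits(t + a1 * b1, acc_bits)
```

With two small products that each round away individually but survive when added together first, the fused and cascaded paths disagree, which is the non-associativity loss the fused module is designed to prevent.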
Siyuan Ji, Michael Wilkinson, Charles E. Dickerson
In this third decade of systems engineering in the twenty-first century, it is important to develop and demonstrate practical methods to exploit machine-readable models in the engineering of systems. Substantial investment has been made in languages and modelling tools for developing models. A key problem is that system architects and engineers work in a multidisciplinary environment in which models are not the product of any one individual. This paper provides preliminary results of a formal approach to specify models and structure preserving transformations between them that support model synchronization. This is an important area of research and practice in software engineering. However, it is limited to synchronization at the code level of systems. This paper leverages previous research of the authors to define a core fractal for interpretation of concepts into model specifications and transformation between models. This fractal is used to extend the concept of synchronization of models to the system level and is demonstrated through a practical engineering example for an advanced driver assistance system.
Hiroki Shigemune, Kittamet Pradidarcheep, Yu Kuwajima et al.
Autonomous soft robots require compact actuators generating large strokes and high forces. Electro-fluidic actuators are especially promising: they combine the advantages of electroactive polymers (low power consumption, fast response, and electrical powering) with the versatility of fluidic systems (force/stroke amplification). EHD (electrohydrodynamic) actuators are electro-fluidic actuators whose motion results from charges being induced and accelerated in a liquid. They are extremely compact, silent, and low power (≤10 mW). They have recently been demonstrated in stretchable pumps and for the wireless propulsion of simple floating robots. This study demonstrates simultaneous wireless propulsion (2.5 mm s⁻¹) and control of a 1 cm sized robot using a single DC signal. Voltage is applied between an electrode on the floating robot and a fixed one, both exposed to a dielectric liquid. Results support EHD as the underlying physical mechanism and characterize robot motion with different fluorocarbon liquids and voltages between 400 and 1800 V. Path following is demonstrated with a 3 × 3 array of electrodes. EHD actuators prove to be a simple, compact, low power alternative to magnetic and acoustic actuators for wireless powering and control of miniaturized robots, with applications in precision assembly at the micro/mesoscale, lab-on-chip, tactile displays, and active surfaces.
Lucia Baldino, Stefano Cardea
In recent years, biopolymeric porous structures have acquired increasing importance in different fields of engineering, ranging from chemical engineering to tissue engineering. Until now, various processes have been implemented for the generation of porous structures, but they are all characterized by several limitations, such as long processing times, traces of organic solvents in the final products, low versatility, etc. In this work, we tested a green process assisted by supercritical fluids for the generation of biopolymeric porous structures: the supercritical phase inversion process. We processed different polymers such as polysulfone, polymethylmethacrylate and polyvinyl alcohol, and analyzed the effect of process parameters (pressure, temperature, polymer concentration, kind of solvent) on the final morphology. The results confirmed the advantages of the supercritical-fluid-assisted process with respect to the traditional ones: indeed, dry porous structures were obtained in a few hours; moreover, by changing the parameters, it was possible to control the kind of structure obtained (from cellular to bicontinuous) as well as the pore size and porosity (from 70 to 90%); finally, the structures were characterized by residual solvent amounts lower than 5 ppm.
Léo Exibard, Emmanuel Filiot, Nathan Lhote et al.
In this paper, we investigate the problem of synthesizing computable functions of infinite words over an infinite alphabet (data $ω$-words). The notion of computability is defined through Turing machines with infinite inputs which can produce the corresponding infinite outputs in the limit. We use non-deterministic transducers equipped with registers, an extension of register automata with outputs, to describe specifications. Being non-deterministic, such transducers may not define functions but more generally relations of data $ω$-words. In order to increase the expressive power of these machines, we even allow guessing of arbitrary data values when updating their registers. For functions over data $ω$-words, we identify a sufficient condition (the possibility of determining the next letter to be outputted, which we call next letter problem) under which computability (resp. uniform computability) and continuity (resp. uniform continuity) coincide. We focus on two kinds of data domains: first, the general setting of oligomorphic data, which encompasses any data domain with equality, as well as the setting of rational numbers with linear order; and second, the set of natural numbers equipped with linear order. For both settings, we prove that functionality, i.e. determining whether the relation recognized by the transducer is actually a function, is decidable. We also show that the so-called next letter problem is decidable, yielding equivalence between (uniform) continuity and (uniform) computability. Last, we provide characterizations of (uniform) continuity, which allow us to prove that these notions, and thus also (uniform) computability, are decidable. We even show that all these decision problems are PSpace-complete for $(\mathbb{N},<)$ and for a large class of oligomorphic data domains, including for instance $(\mathbb{Q},<)$.
Andrey Brito, Christof Fetzer, Stefan Köpsell et al.
Abstract Cloud computing considerably reduces the costs of deploying applications through on-demand, automated and fine-granular allocation of resources. Even in private settings, cloud computing platforms enable agile and self-service management, which means that physical resources are shared more efficiently. Nevertheless, using shared infrastructures also creates more opportunities for attacks and data breaches. In this paper, we describe the SecureCloud approach. The SecureCloud project aims to enable confidentiality and integrity of data and applications running in potentially untrusted cloud environments. The project leverages technologies such as Intel SGX, OpenStack and Kubernetes to provide a cloud platform that supports secure applications. In addition, the project provides tools that help generate cloud-native, secure applications and services that can be deployed on potentially untrusted clouds. The results have been validated in a real-world smart grid scenario to enable a data workflow that is protected end-to-end: from the collection of data to the generation of high-level information such as fraud alerts.
Page 34 of 425504