Current multikey fully homomorphic encryption (MKFHE) must add exponentially large noise in the distributed decryption phase to ensure the simulatability of partial decryptions. This large noise causes the ciphertext modulus to grow exponentially compared with single-key fully homomorphic encryption (FHE), reducing the efficiency of the scheme and leaving the underlying lattice problem with a subexponential approximation factor $\tilde{O}(n)\cdot 2^{nL}$, which weakens the security of the scheme. To address this problem, this paper analyzes in detail the noise in the partial decryption of MKFHE based on the learning with errors (LWE) problem. It points out that this noise is composed of the private key and the noise in the initial ciphertext. Therefore, as long as the encryption scheme is leakage-resilient and the noise in the partial decryption is independent of the noise in the initial ciphertext, the semantic security of the ciphertext can be guaranteed. To make the noise in the initial ciphertext independent of the noise in the partial decryption, this paper proves a smudging lemma for the discrete Gaussian distribution and achieves this goal by multiplying the initial ciphertext by a "dummy" ciphertext encrypting the plaintext 1. Based on this method, this paper removes the exponential noise in the distributed decryption phase for the first time and reduces the ciphertext modulus of MKFHE from $2^{\omega(\lambda L \log \lambda)}$ to $2^{O(\lambda+L)}$, the same level as single-key FHE.
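The noise structure the abstract describes can be sketched as follows (the notation here is illustrative, not the paper's):

```latex
% Party i publishes a partial decryption masked by smudging noise:
p_i = \langle \mathbf{s}_i, \mathbf{c}_i \rangle + e_i^{\mathrm{sm}},
\qquad
\sum_i p_i \approx \mu \cdot \lceil q/2 \rfloor
  + e_{\mathrm{ct}} + \sum_i e_i^{\mathrm{sm}} .
% Classical simulatability forces |e_i^{sm}| / |e_ct| to be superpolynomial,
% hence q = 2^{\omega(\lambda L \log \lambda)}. If e_ct is instead made
% independent of the partially decrypted ciphertext, polynomial smudging
% noise suffices and q = 2^{O(\lambda + L)}.
```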
Laura Cirrincione, Gianluca Scaccianoce, Marco Vocciante
This work reviews and discusses experimental and simulation studies of nanofluid applications in building energy systems and envelope components, including photovoltaic thermal systems, Heating, Ventilation and Air Conditioning (HVAC) systems, thermal energy storage systems, and windows. An overall review of the current studies available in the literature has been conducted, providing an overview of the potential benefits of the main nanofluid-based techniques in the building sector, with reference to both energy and environmental considerations. The results show promising prospects for future developments, proposing the use of nanofluids in buildings as a viable and effective sustainable solution, in line with the most relevant energy and environmental initiatives, such as the Sustainable Development Goals (SDGs) and the EU Green Deal.
Chemical engineering, Computer engineering. Computer hardware
Missing data (MD) is a prevalent issue that researchers and data scientists frequently encounter. It can significantly impact the quality of the analyzed data, affecting the relevance of the interpreted results and the inferred conclusions. In response to this challenge, a novel multi-imputation technique that combines Multivariate Imputation by Chained Equations (MICE) with a Decision Tree (DT), termed MICE-DT, is proposed. The developed method was evaluated against several established imputation techniques, including K-Nearest Neighbors (KNN), K-Means clustering, Decision Tree (DT), and MICE, under the Missing at Random (MAR) assumption. The performance of the MICE-DT algorithm, along with a comparative analysis of the studied techniques, was demonstrated on a Wind Energy Conversion System (WEC), yielding satisfactory results.
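The MICE-DT idea can be sketched with scikit-learn, whose IterativeImputer is a MICE-style chained-equations imputer; plugging in a DecisionTreeRegressor as the per-column model approximates the combination described above. The data, tree depth, and missingness rate below are illustrative, not the paper's:

```python
# Sketch: MICE with a decision-tree per-column estimator (assumption: the
# paper's MICE-DT behaves like chained-equations imputation with a DT model).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)  # correlated column
mask = rng.random(X.shape) < 0.1                         # ~10% MAR-style holes
X_miss = X.copy()
X_miss[mask] = np.nan

imputer = IterativeImputer(estimator=DecisionTreeRegressor(max_depth=5),
                           max_iter=10, random_state=0)
X_imp = imputer.fit_transform(X_miss)   # all NaNs filled by chained DT fits
```

The correlated fourth column is what lets the tree recover plausible values from the other features.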
A key component of any robot is the interface between robotics middleware software and physical motors. New robots often use arbitrary, messy mixtures of closed and open motor drivers, and error-prone physical mountings, wiring, and connectors to interface them. There is a need for a standardizing OSH component to abstract this complexity, as Arduino did for interfacing to smaller components. We present an OSH printed circuit board to solve this problem once and for all. On the high-level side, it interfaces to the Arduino Giga, acting as an unusually large and robust shield, and thus to existing open source software stacks; a ROS2 interface is provided. On the lower-level side, it interfaces to emerging standard open hardware, including OSH motor drivers and relays, which can already be used to drive fully open hardware wheeled and arm robots. This enables the creation of a family of standardized, fully open hardware, fully reproducible research platforms.
The performance of current quantum hardware is severely limited. While expanding the quantum ISA with high-fidelity, expressive basis gates is a key path forward, it imposes significant gate calibration overhead and complicates compiler optimization. As a result, even though more powerful ISAs have been designed, their use remains largely conceptual rather than practical. To move beyond these hurdles, we introduce the concept of "reconfigurable quantum instruction set computers" (ReQISC), which incorporates: (1) a unified microarchitecture capable of directly implementing arbitrary 2Q gates up to equivalence, i.e., SU(4) modulo 1Q rotations, with theoretically optimal gate durations given any 2Q coupling Hamiltonian; (2) a compilation framework tailored to ReQISC primitives for end-to-end synthesis and optimization, comprising a program-aware pass that refines high-level representations, a program-agnostic pass for aggressive circuit-level optimization, and an SU(4)-aware routing pass that minimizes hardware mapping overhead. We detail the hardware implementation to demonstrate the feasibility of this gate scheme on realistic hardware, in terms of both pulse control and calibration. By leveraging the expressivity of SU(4) and the time minimality realized by the underlying microarchitecture, the SU(4)-based ISA achieves remarkable performance, with a 4.97-fold reduction in average pulse duration for arbitrary 2Q gates compared to the usual CNOT/CZ scheme on mainstream flux-tunable transmons. Supported by the end-to-end compiler, ReQISC significantly outperforms the conventional CNOT ISA, SOTA compiler, and pulse-implementation counterparts, reducing 2Q gate counts, circuit depth, pulse duration, qubit mapping overhead, and program fidelity losses. For the first time, ReQISC makes the theoretical benefits of continuous ISAs practically feasible.
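The statement that arbitrary 2Q gates reduce to SU(4) modulo 1Q rotations is the standard Cartan (KAK) decomposition, which for context reads:

```latex
% Cartan (KAK) decomposition: any U in SU(4) factors as
U = (A_1 \otimes A_2)\,
    \exp\!\big( i\,( c_x\, XX + c_y\, YY + c_z\, ZZ ) \big)\,
    (B_1 \otimes B_2),
% where A_j, B_j \in SU(2) are single-qubit rotations and the real triple
% (c_x, c_y, c_z) captures all of the genuinely two-qubit content of U.
```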
Rosario Patanè, Nadjib Achir, Andrea Araldo
et al.
Edge Computing (EC) is a computational paradigm that involves deploying resources such as CPUs and GPUs near end-users, enabling low-latency applications like augmented reality and real-time gaming. However, deploying and maintaining a vast network of EC nodes is costly, which can explain its limited deployment today. A new paradigm called Vehicular Cloud Computing (VCC) has emerged and attracted interest among researchers and industry. VCC opportunistically utilizes existing and idle vehicular computational resources for external task offloading. This work is the first to systematically address the following question: can VCC replace EC for low-latency applications? Answering this question is highly relevant for Network Operators (NOs), as VCC could eliminate the costs associated with EC, given that it requires no infrastructural investment. Despite its potential, no systematic study has yet explored the conditions under which VCC can effectively support low-latency applications without relying on EC. This work aims to fill that gap. Extensive simulations allow for assessing the crucial scenario factors that determine when this EC-to-VCC substitution is feasible. The considered factors are load, vehicle mobility and density, and availability. The potential for substitution is assessed against multiple criteria, such as latency, task completion success, and cost. Vehicle mobility is simulated in SUMO, and communication in NS3 5G-LENA. The findings show that VCC can effectively replace EC for low-latency applications, except in extreme cases (latency < 16 ms) where EC is still required.
Large Language Models (LLMs) are increasingly integrated into software applications, giving rise to a broad class of prompt-enabled systems, in which prompts serve as the primary 'programming' interface for guiding system behavior. Building on this trend, a new software paradigm, promptware, has emerged, which treats natural language prompts as first-class software artifacts for interacting with LLMs. Unlike traditional software, which relies on formal programming languages and deterministic runtime environments, promptware is based on ambiguous, unstructured, and context-dependent natural language and operates on LLMs as runtime environments, which are probabilistic and non-deterministic. These fundamental differences introduce unique challenges in prompt development. In practice, prompt development remains largely ad hoc and relies heavily on time-consuming trial-and-error, a challenge we term the promptware crisis. To address this, we propose promptware engineering, a new methodology that adapts established Software Engineering (SE) principles to prompt development. Drawing on decades of success in traditional SE, we envision a systematic framework encompassing prompt requirements engineering, design, implementation, testing, debugging, evolution, deployment, and monitoring. Our framework re-contextualizes emerging prompt-related challenges within the SE lifecycle, providing principled guidance beyond ad-hoc practices. Without the SE discipline, prompt development is likely to remain mired in trial-and-error. This paper outlines a comprehensive roadmap for promptware engineering, identifying key research directions and offering actionable insights to advance the development of prompt-enabled systems.
Ailec Granda Dihigo, Yamilka Gómez León, María Teresa Pérez Pino
Hybrid learning environments are those in which face-to-face and virtual learning converge, giving students the opportunity to access information in the best possible way. They are a support tool for teachers, capable of handling any amount of information that would otherwise have to be carried on many pages in order to deliver a class. In this model, assessment plays a very important role, so its design is vital to achieving the objectives. This paper presents conceptual elements related to virtual learning environments as a means of interaction between students and assessment in the hybrid educational model. To this end, it answers questions concerning the role of virtual learning environments in applying this type of model, the definition of hybrid learning environments, the essential elements that characterize these environments for applying the hybrid educational model, and the role of, and benefits obtained by, students through their application. It also addresses the theoretical foundations of assessment, its characteristics in the hybrid model, and its main challenges; and it proposes competencies for working in virtual learning environments and for developing formative assessment in the Hybrid Educational Model. Theoretical research methods are used, including the analytic-synthetic and historical-logical methods. The result is the conceptualization of these two theoretical cores, produced within the Sectoral Project "El modelo educativo híbrido: propuestas para la formación continua de docentes universitarios" of the Sectoral Program "Educación Superior y Desarrollo Sostenible" of the Ministry of Higher Education.
Mehmet Demirtas, James Halverson, Anindita Maiti
et al.
Both the path integral measure in field theory (FT) and ensembles of neural networks (NN) describe distributions over functions. When the central limit theorem can be applied in the infinite-width (infinite-$N$) limit, the ensemble of networks corresponds to a free FT. Although an expansion in $1/N$ corresponds to interactions in the FT, other expansions, such as one in a small breaking of the statistical independence of network parameters, can also lead to interacting theories. These other expansions can be advantageous over the $1/N$-expansion, for example through improved behavior with respect to the universal approximation theorem. Given the connected correlators of a FT, one can systematically reconstruct the action order-by-order in the expansion parameter, using a new Feynman diagram prescription whose vertices are the connected correlators. This method is motivated by the Edgeworth expansion and allows one to derive actions for NN FT. Conversely, the correspondence allows one to engineer architectures realizing a given FT by representing action deformations as deformations of NN parameter densities. As an example, $\phi^4$ theory is realized as an infinite-$N$ NN FT.
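The free-theory endpoint of this correspondence can be written down explicitly; the following sketch uses our notation, not necessarily the paper's:

```latex
% At infinite width the network ensemble is Gaussian, so the connected
% two-point correlator K(x,y) = G^{(2)}_c(x,y) determines a free action:
S[\phi] = \frac{1}{2} \int dx\, dy \;
          \phi(x)\, K^{-1}(x,y)\, \phi(y) .
% Finite-N or independence-breaking corrections then add interaction
% vertices built from the higher connected correlators G^{(n)}_c
% (e.g. a quartic vertex weighted by G^{(4)}_c), order-by-order in the
% expansion parameter, as in the Edgeworth expansion.
```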
Maria Luisa Mele, Sofia Ubaldi, Cinzia Di Bari
et al.
Demand for Li-ion batteries (LIBs) has recently increased exponentially. They are used in a multitude of applications, including electric vehicles (EVs), Energy Storage Systems (ESS), and consumer electronics. However, their chemical composition, high energy content, and behaviour under abuse conditions pose a significant risk to safety, human health, and the environment. This risk grows with the amount of active materials, especially the organic electrolyte, and consequently with the number of cells constituting the battery. To mitigate the risk, critical points throughout the entire life cycle of a lithium battery must be identified.
For this purpose, the analysis of accidents occurring around the world is of fundamental importance. The evaluation of the main risks associated with the transport, use, and storage of LIBs would allow the improvement of specific prevention measures to reduce the risk of fire and explosion during use and storage. It would also support better safety procedures for managing accidents involving lithium-ion batteries, the updating of legal and technical standards, and the development of more reliable storage systems. The aim of this study is to enhance current knowledge of the factors that may trigger fires involving LIBs through the analysis of accident and recall databases. An Italian database has been developed which includes data on accidents that occurred during the normal use of batteries, as well as those that occurred in storage facilities, during transport, and during disposal. One example of the reconstruction of the data needed to enhance such a database is presented in this work.
Chemical engineering, Computer engineering. Computer hardware
Approximate computing emerges as a promising approach to enhance the efficiency of compute-in-memory (CiM) systems in deep neural network processing. However, traditional approximate techniques often significantly trade off accuracy for power efficiency, and fail to reduce data transfer between main memory and CiM banks, which dominates power consumption. This paper introduces a novel probabilistic approximate computation (PAC) method that leverages statistical techniques to approximate multiply-and-accumulation (MAC) operations, reducing approximation error by 4X compared to existing approaches. PAC enables efficient sparsity-based computation in CiM systems by simplifying complex MAC vector computations into scalar calculations. Moreover, PAC enables sparsity encoding and eliminates the LSB activations transmission, significantly reducing data reads and writes. This sets PAC apart from traditional approximate computing techniques, minimizing not only computation power but also memory accesses by 50%, thereby boosting system-level efficiency. We developed PACiM, a sparsity-centric architecture that fully exploits sparsity to reduce bit-serial cycles by 81% and achieves a peak 8b/8b efficiency of 14.63 TOPS/W in 65 nm CMOS while maintaining high accuracy of 93.85/72.36/66.02% on CIFAR-10/CIFAR-100/ImageNet benchmarks using a ResNet-18 model, demonstrating the effectiveness of our PAC methodology.
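The abstract does not specify the PAC estimator, so the following toy is only our illustration of the general idea of replacing part of a MAC with a statistical scalar: the MSB part of each activation is kept exact, while the LSB part is approximated by its distributional mean, collapsing that vector product into a single scalar multiply (and removing the need to transmit LSB activations):

```python
# Toy sketch of statistically approximating a multiply-and-accumulate (MAC).
# Assumption (ours, not the paper's): activation LSBs are ~uniform, so their
# contribution can be replaced by E[lsb] * sum(weights).
import numpy as np

rng = np.random.default_rng(1)
w = rng.integers(-8, 8, size=256)     # small signed weights
a = rng.integers(0, 256, size=256)    # 8-bit activations

msb = a >> 4                          # high nibble, kept exact
lsb_mean = 7.5                        # E[uniform 4-bit LSB], a single scalar

exact = int(np.dot(w, a))
approx = int(np.dot(w, msb) * 16 + lsb_mean * w.sum())
rel_err = abs(exact - approx) / (np.abs(w * a).sum() + 1)
```

Because the LSB errors have zero mean and random signs, they largely cancel over a long vector, which is why the relative error stays small.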
Robert M. Kent, Wendson A. S. Barbosa, Daniel J. Gauthier
Machine learning provides a data-driven approach for creating a digital twin of a system - a digital model used to predict the system behavior. Having an accurate digital twin can drive many applications, such as controlling autonomous systems. Often the size, weight, and power consumption of the digital twin or related controller must be minimized, ideally realized on embedded computing hardware that can operate without a cloud-computing connection. Here, we show that a nonlinear controller based on next-generation reservoir computing can tackle a difficult control problem: controlling a chaotic system to an arbitrary time-dependent state. The model is accurate, yet it is small enough to be evaluated on a field-programmable gate array typically found in embedded devices. Furthermore, the model only requires 25.0 $\pm$ 7.0 nJ per evaluation, well below other algorithms, even without systematic power optimization. Our work represents the first step in deploying efficient machine learning algorithms to the computing "edge."
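Next-generation reservoir computing is, in essence, a feature vector of time-delay taps and their monomials with a ridge-regression readout. A minimal sketch follows; the delay depth, the chaotic system (a logistic map), and the regularization are our choices for illustration, not the paper's setup:

```python
# Minimal next-generation reservoir computing (NG-RC) sketch:
# features = [1, delay taps, quadratic monomials], linear ridge readout.
import numpy as np

def features(x, k=2):
    """Bias + k delay taps + their upper-triangular quadratic monomials."""
    rows = []
    for t in range(k, len(x)):
        lin = x[t - k:t][::-1]                      # most recent tap first
        quad = np.outer(lin, lin)[np.triu_indices(k)]
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.array(rows)

# Generate a chaotic logistic-map time series.
x = np.empty(1000)
x[0] = 0.3
for t in range(999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

k = 2
Phi = features(x, k)                                # (998, 6) feature matrix
y = x[k:]                                           # next-step targets
lam = 1e-8                                          # ridge regularization
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
rmse = np.sqrt(np.mean((Phi @ W - y) ** 2))
```

The quadratic map lies exactly in the span of the quadratic features, so the readout fits it almost perfectly; the same small, linear-readout structure is what makes such models cheap enough for FPGAs.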
Costantino Carugno, Jake Muff, Mikael P. Johansson
et al.
This paper presents the Nordic-Estonian Quantum Computing e-Infrastructure Quest - NordIQuEst - an international collaboration of scientific and academic organizations from Denmark, Estonia, Finland, Norway, and Sweden, working together to develop a hybrid High-Performance and Quantum Computing (HPC+QC) infrastructure. The project leverages existing and upcoming classical high-performance computing and quantum computing systems, facilitating the development of interconnected systems. Our effort pioneers a forward-looking architecture for both hardware and software capabilities, representing an early-stage development in hybrid computing infrastructure. Here, we detail the outline of the initiative, summarizing the progress since the project outset, and describing the framework established. Moreover, we identify the crucial challenges encountered, and potential strategies employed to address them.
Symbolic Computation algorithms and their implementation in computer algebra systems often contain choices which do not affect the correctness of the output but can significantly impact the resources required: such choices can benefit from being made separately for each problem via a machine learning model. This study reports lessons on such use of machine learning in symbolic computation, in particular on the importance of analysing datasets prior to machine learning and on the different machine learning paradigms that may be utilised. We present results for a particular case study, the selection of variable ordering for cylindrical algebraic decomposition, but expect that the lessons learned are applicable to other decisions in symbolic computation. We utilise an existing dataset of examples derived from applications, which was found to be imbalanced with respect to the variable ordering decision. We introduce an augmentation technique for polynomial systems problems that allows us to balance and further augment the dataset, improving the machine learning results by 28% and 38% on average, respectively. We then demonstrate how the existing machine learning methodology used for the problem, classification, might be recast into the regression paradigm. While this does not radically change the performance, it does widen the scope in which the methodology can be applied to make choices.
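The balancing idea can be sketched as follows: relabelling the variables of a polynomial system by a permutation yields a new, equally valid instance whose optimal variable ordering is the permuted label, so every instance generates one example per ordering class. The dictionary representation and variable names below are ours, for illustration only:

```python
# Sketch of dataset augmentation by variable permutation.
from itertools import permutations

def permute_system(polys, label, perm):
    """polys: list of monomials, each a dict var -> exponent.
    label: optimal variable ordering as a tuple. perm: dict old -> new var."""
    new_polys = [{perm[v]: e for v, e in mono.items()} for mono in polys]
    new_label = tuple(perm[v] for v in label)
    return new_polys, new_label

system = [{"x": 2, "y": 1}, {"y": 3, "z": 1}]      # x^2*y and y^3*z
augmented = [permute_system(system, ("x", "y", "z"), dict(zip("xyz", p)))
             for p in permutations("xyz")]          # 6 relabelled instances
```

Each of the six instances carries a different optimal-ordering label, which is exactly what balances an imbalanced ordering distribution.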
Oxide scales play a pivotal role in obstructing surface chemical and electrochemical reactions, hence hindering chemo-mechanical effects such as liquid metal embrittlement of steels. Therefore, the critical conditions and failure mechanism of the oxide film are of major interest for the safe service of steels. Though in situ microscopic methods may directly visualize the failure mechanism, they are often challenged by the lack of statistically reliable evaluation of the critical conditions. Here, by combining in situ scanning electron microscopy with a tapered-specimen tensile test in a single experiment, we uniquely achieve a mechanistic study with statistically reliable quantification of the critical strains for each step of the dynamic process of film rupture. This is demonstrated with the oxide films formed on a ferrite–martensite steel in liquid lead–bismuth eutectic alloy at elevated temperatures, with in situ results falling right within the predictions of the statistical analysis. Explicitly, the integrated experimental methodology may facilitate the materials genome engineering of steels with superior service performance.
Materials of engineering and construction. Mechanics of materials, Computer engineering. Computer hardware
Vladyslav Bilozerskyi, Kostyantyn Dergachov, Leonid Krasnov
et al.
Subject of study. This paper proposes, for the first time, an original method for estimating the change in the brightness of video data under the influence of changing scene lighting conditions and external noise. Algorithms for stabilizing the brightness of video data are also proposed, and an objective assessment of the quality of the pre-processed video data is given. The purpose of the research is to create a methodology for analyzing the variability of video data parameters under the influence of negative factors and to develop effective algorithms for stabilizing the parameters of the received video stream. The reliability of the method is tested using real video recordings captured under various conditions. Objectives: to determine the most universal, noise-resistant, and informative indicator necessary for an objective assessment of video data quality under various shooting conditions and scene lighting; and to develop and implement algorithms for stabilizing video parameters using modern programming tools. Research methods. Statistical analysis and pre-processing of video stream parameters as a random spatio-temporal process, digital filtering of video data, and adaptive stabilization of video stream parameters. Research results. It has been proposed and experimentally shown that the optimal indicator of video stream quality is the average frame brightness (AFB). An algorithm for the spatiotemporal processing of video data is proposed that generates a sequence of AFB values from the original video stream. The paper also proposes digital algorithms for filtering and stabilizing the brightness of a video stream and investigates the effectiveness of their application. Conclusions. The scientific novelty of the results lies in a new method for analyzing and evaluating the parameters of video surveillance data and in algorithms for filtering and stabilizing the brightness of the video stream. The performance of the proposed algorithms has been tested on real data. The algorithms are implemented in Python using the OpenCV library.
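Our reading of the AFB pipeline can be sketched without OpenCV: compute the average brightness of each frame, smooth the AFB sequence with an exponential filter, and rescale each frame so its brightness tracks the smoothed reference. The synthetic flickering frames and the smoothing constant are our assumptions, not the paper's data or filter design:

```python
# AFB-based brightness stabilization sketch (synthetic flickering "video").
import numpy as np

rng = np.random.default_rng(0)
frames = [np.clip(rng.normal(120 + 40 * np.sin(t / 5), 10, (48, 64)), 0, 255)
          for t in range(100)]                     # slow sinusoidal flicker

afb = np.array([f.mean() for f in frames])         # average frame brightness
ref = np.empty_like(afb)                           # exponentially smoothed AFB
ref[0] = afb[0]
alpha = 0.05
for t in range(1, len(afb)):
    ref[t] = (1 - alpha) * ref[t - 1] + alpha * afb[t]

# Gain-correct each frame so its mean brightness follows the reference.
stab = [np.clip(f * (r / b), 0, 255) for f, b, r in zip(frames, afb, ref)]
afb_stab = np.array([f.mean() for f in stab])
```

After correction, the AFB sequence of the stabilized stream varies far less than the raw one, which is the stabilization criterion the abstract describes.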
Robin G. C. Maack, Jonas Lukasczyk, Julien Tierny
et al.
This paper presents a well-scaling parallel algorithm for the computation of Morse-Smale (MS) segmentations, including the region separators and region boundaries. The segmentation of the domain into ascending and descending manifolds, defined solely on the vertices, is accelerated using path compression, and the border region is fully segmented. Region boundaries and region separators are generated using a multi-label marching tetrahedra algorithm. This enables a fast and simple way to find optimal parameter settings in preliminary exploration steps by generating an MS complex preview. It also offers a rapid option for generating a visual representation of the region geometries for immediate use. Two experiments demonstrate the performance of our approach, with speedups of over an order of magnitude compared to two publicly available implementations. The example section shows the similarity to the MS complex, the usability of the approach, and the benefits of this method with respect to the presented datasets. We provide our implementation with the paper.
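The path-compression idea can be sketched on a toy 1-D "mesh" (the tetrahedral-mesh machinery and separator extraction are omitted): each vertex points to its lowest neighbour, and iterated pointer-jumping labels every vertex with the minimum it descends to, giving the descending-manifold segmentation:

```python
# Descending-manifold labelling via path compression (toy 1-D domain).
import numpy as np

vals = np.array([3.0, 1.0, 2.0, 5.0, 4.0, 0.5, 2.5])
n = len(vals)
succ = np.arange(n)                         # steepest-descent pointers
for v in range(n):
    nbrs = [u for u in (v - 1, v + 1) if 0 <= u < n]
    best = min(nbrs, key=lambda u: vals[u])
    if vals[best] < vals[v]:
        succ[v] = best                      # otherwise v is a local minimum

# Pointer jumping: succ <- succ[succ] until every chain ends at its minimum.
while not np.array_equal(succ, succ[succ]):
    succ = succ[succ]

labels = succ                               # each vertex's reached minimum
```

Pointer jumping halves the remaining path length each round, which is what makes the vertex labelling embarrassingly parallel in the full algorithm.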
Increasing quantities of semantic resources offer a wealth of human knowledge, but their growth also increases the probability of wrong knowledge base entries. The development of approaches that identify potentially spurious parts of a given knowledge base is therefore highly relevant. We propose an approach for ontology completion that transforms an ontology into a graph and recommends missing edges using structure-only link analysis methods. By systematically evaluating thirteen methods (some for knowledge graphs) on eight different semantic resources, including Gene Ontology, Food Ontology, Marine Ontology, and similar ontologies, we demonstrate that a structure-only link analysis can offer a scalable and computationally efficient ontology completion approach for a subset of analyzed data sets. To the best of our knowledge, this is currently the most extensive systematic study of the applicability of different types of link analysis methods across semantic resources from different domains. It demonstrates that by considering symbolic node embeddings, explanations of the predictions (links) can be obtained, making this branch of methods potentially more valuable than black-box methods.
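One member of the structure-only family, Adamic-Adar, can be sketched with NetworkX on a toy is-a graph. The graph and the held-out edge below are ours for illustration; the paper evaluates thirteen such methods on real ontologies:

```python
# Structure-only link analysis for ontology completion (Adamic-Adar scores).
import networkx as nx

G = nx.Graph()
G.add_edges_from([("food", "fruit"), ("food", "plant"),
                  ("food", "vegetable"), ("food", "orchard"),
                  ("fruit", "apple"), ("fruit", "pear"),
                  ("plant", "apple"), ("plant", "tree"),
                  ("orchard", "apple"), ("apple", "granny_smith")])
# Pretend the edge food--apple is missing from the ontology.

candidates = list(nx.non_edges(G))
scores = {(u, v): s for u, v, s in nx.adamic_adar_index(G, candidates)}
best = max(scores, key=scores.get)          # top-ranked missing edge
```

Because food and apple share three neighbours (fruit, plant, orchard), the missing edge between them outranks all other candidate pairs, using graph structure alone.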