Paloma Guenes, Rafael Tomaz, Maria Teresa Baldassarre
et al.
The Impostor Phenomenon (IP) affects a significant portion of the Software Engineering workforce, yet it is often viewed primarily through an internal, individual lens. In this position paper, we propose framing the prevalence of IP as a form of Human Debt and discuss how it relates to the results of the ICSE 2026 Pre-Survey on the Future of Software Engineering. Similar to technical debt, which arises when short-term goals are prioritized over long-term structural integrity, Human Debt accumulates due to gaps in psychological safety and inclusive support within socio-technical ecosystems. We observe that this debt is not distributed equally: it weighs more heavily on underrepresented engineers and researchers, who face compounded challenges within traditional hierarchical structures and academic environments. We propose cultural refactoring, transparency, and active maintenance through allyship, suggesting that leaders and institutions must address the environmental factors that exacerbate these feelings, ensuring a sustainable ecosystem for all professionals.
Distributed quantum computing (DQC) is a promising technique for scaling up quantum systems. While significant progress has been made in DQC for quantum circuit models, far less research exists on DQC for measurement-based quantum computing (MBQC), a universal quantum computing model that is essentially different from the circuit model and particularly well suited to photonic quantum platforms. In this paper, we propose DC-MBQC, the first distributed quantum compilation framework tailored for MBQC. We identify and address two key challenges in enabling DQC for MBQC. First, for task allocation among quantum processing units (QPUs), we develop an adaptive graph partitioning algorithm that preserves the structure of the graph state while balancing the workload across QPUs. Second, for inter-QPU communication, we introduce the layer scheduling problem and propose an algorithm to solve it. Regarding realistic hardware requirements, we optimize the execution time of quantum programs and the corresponding required photon lifetime to avoid fatal failures caused by photon loss. Our experiments demonstrate a $7.46\times$ improvement in required photon lifetime and a $6.82\times$ speedup with 8 fully connected QPUs, which further confirms the advantage of distributed quantum computing in photonic systems. The source code is publicly available at https://github.com/qfcwj/DC-MBQC.
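The abstract above only names the adaptive graph-partitioning step. As a rough illustration of the general flavour of that allocation problem (a hypothetical greedy sketch, not the authors' DC-MBQC algorithm; the toy graph, number of QPUs, and scoring rule are all assumptions), one could balance node counts across QPUs while keeping adjacent graph-state nodes together:

```python
from collections import defaultdict

def greedy_balanced_partition(edges, num_qpus):
    """Greedily assign graph-state nodes to QPUs, preferring the QPU that
    already holds the most neighbours, subject to a per-QPU capacity cap."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = sorted(adj, key=lambda n: len(adj[n]), reverse=True)  # high degree first
    capacity = -(-len(nodes) // num_qpus)  # ceil division: balance the workload
    assignment, load = {}, [0] * num_qpus
    for n in nodes:
        # score each QPU by how many of n's neighbours it already hosts
        scores = [sum(1 for m in adj[n] if assignment.get(m) == q) for q in range(num_qpus)]
        candidates = [q for q in range(num_qpus) if load[q] < capacity]
        best = max(candidates, key=lambda q: (scores[q], -load[q]))
        assignment[n] = best
        load[best] += 1
    return assignment

# Toy 6-node graph state split across 2 QPUs
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
print(greedy_balanced_partition(edges, num_qpus=2))
```

Minimizing the number of edges cut between QPUs is what keeps inter-QPU (photonic) communication low; the real framework additionally handles layer scheduling and photon-lifetime constraints.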
1D data, such as time series and spectra, contain rich information but pose challenges for machine learning due to the scarcity of large labeled datasets and the absence of specialized pretrained neural networks. Existing 1D analysis methods often rely on traditional chemometric approaches and rarely exploit the full potential of online data augmentation, novel architectures, and explainability methods common in image analysis. To address these gaps, a novel approach is proposed that transforms 1D signals into 2D spider plot visualizations, enabling the use of pretrained deep learning models originally developed for image datasets. The approach also allows model interpretation maps to be transformed back to the original variable space, making them more intuitive. The general applicability of this method is demonstrated across multiple data types: Raman spectra, mid-infrared spectra, electrocardiograms, and mass spectrometry data (MALDI-IMS). The method achieves competitive performance, reaching a balanced accuracy of 99% in Raman-based oil identification tasks and surpassing principal component analysis combined with linear discriminant analysis (94%). Performance across datasets varies with data complexity, highlighting the method's versatility and potential across diverse signal types. This visualization-based strategy presents an innovative solution to overcome dataset-size and model-related limitations while enhancing interpretability in complex 1D data analysis.
Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
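As a rough illustration of the 1D-to-2D idea described above (not the authors' exact pipeline; the toy spectrum, image size, and downstream model are assumptions), a 1D signal can be rendered as a polar, spider-style plot and converted into an RGB array that a pretrained image model could consume:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

def signal_to_spider_image(signal, size_px=224):
    """Render a 1D signal on a polar axis and return it as an RGB uint8 array."""
    theta = np.linspace(0.0, 2.0 * np.pi, len(signal), endpoint=False)
    fig = plt.figure(figsize=(size_px / 100, size_px / 100), dpi=100)
    ax = fig.add_subplot(projection="polar")
    ax.plot(theta, signal, linewidth=1.0)
    ax.fill(theta, signal, alpha=0.3)
    ax.set_axis_off()
    fig.canvas.draw()
    rgba = np.asarray(fig.canvas.buffer_rgba())
    plt.close(fig)
    return rgba[..., :3]  # drop alpha -> (size_px, size_px, 3)

# Toy spectrum: a few Gaussian peaks on a flat baseline
x = np.linspace(0, 1, 512)
spectrum = np.exp(-((x - 0.3) ** 2) / 0.001) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.002)
img = signal_to_spider_image(spectrum)
print(img.shape)  # (224, 224, 3), a shape an ImageNet-pretrained CNN could accept
```

Because each angle on the polar axis corresponds to one original variable, saliency maps computed on the image can in principle be mapped back to the original wavenumber or time axis.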
Florentia Afentaki, Michael Hefenbrock, Georgios Zervakis
et al.
Printed Electronics (PE) stands out as a promising technology for widespread computing due to its distinct attributes, such as low costs and flexible manufacturing. Unlike traditional silicon-based technologies, PE enables stretchable, conformal, and non-toxic hardware. However, PE is constrained by larger feature sizes, making it challenging to implement complex circuits such as machine learning (ML) classifiers. Approximate computing has been proven to reduce the hardware cost of ML circuits such as Multilayer Perceptrons (MLPs). In this paper, we maximize the benefits of approximate computing by integrating hardware approximation into the MLP training process. Due to the discrete nature of hardware approximation, we propose and implement a genetic-based, approximate, hardware-aware training approach specifically designed for printed MLPs. For a 5% accuracy loss, our MLPs achieve over 5x area and power reduction compared to the baseline while outperforming state-of-the-art approximate and stochastic printed MLPs.
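The genetic, hardware-aware training step is only named above. The following minimal sketch (hypothetical; the fitness function, cost table, and accuracy stub are invented, not the paper's method) shows how a genetic search over discrete per-layer approximation choices can trade accuracy against a hardware-cost proxy under an accuracy-loss budget:

```python
import random

APPROX_LEVELS = [0, 1, 2, 3]                 # 0 = exact, 3 = most aggressive approximation
COST = {0: 1.0, 1: 0.7, 2: 0.5, 3: 0.3}      # assumed relative area/power per level

def evaluate_accuracy(config):
    """Stub: in practice this would train/evaluate the printed MLP with the given config."""
    return 0.95 - 0.02 * sum(config)         # toy model: more approximation, less accuracy

def fitness(config, max_acc_loss=0.05, baseline_acc=0.95):
    acc = evaluate_accuracy(config)
    cost = sum(COST[g] for g in config)
    if baseline_acc - acc > max_acc_loss:    # reject configs beyond the accuracy budget
        return -cost - 100.0
    return -cost                             # otherwise minimize hardware cost

def genetic_search(num_layers=3, pop_size=20, generations=30):
    pop = [[random.choice(APPROX_LEVELS) for _ in range(num_layers)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, num_layers)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # mutation
                child[random.randrange(num_layers)] = random.choice(APPROX_LEVELS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(genetic_search())
```

A genetic search is a natural fit here because the approximation choices are discrete and non-differentiable, so gradient-based training alone cannot select them.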
This study investigates the thermal stability of biomass piles through a combination of experimental and numerical approaches and explores the impact of particle size and oxygen diffusion. Isothermal basket tests were carried out according to EN-15188:2020 on raw and ground pellets and on sieved dust samples. The progression of the thermal wave inside the baskets was studied in detail by positioning thermocouples at different depths of the pile. The role of oxygen diffusion in the pile was examined by varying the basket size, modifying the particle size distribution, and partially wrapping the baskets in a protective film. The self-heating behaviour of these piles was also assessed using the crossing-point method. The time evolution of the gases generated was analysed by micro-gas chromatography, especially around the crossing point.
In parallel, a three-dimensional model was developed to simulate the thermal behaviour of a quarter cube. The model includes an energy balance accounting for conductive, convective, and radiative heat transfer as well as a heat-source term. Mass balances for the particle size and for each species are also considered through consumption and diffusion terms. Shrinking-core models were implemented to represent the consumption of reactants. Moreover, thermogravimetric analyses were performed to identify the various reaction stages and to determine the activation energy using the Flynn-Wall-Ozawa, Friedman, and Kissinger methods.
This study demonstrates the significant influence of bed permeability, particularly through oxygen accessibility, on the thermal stability of storage facilities. Finally, the predictive model developed could be used to explore the effectiveness of safety measures and technological solutions (compaction, storage size reduction, bagging, ...).
Chemical engineering, Computer engineering. Computer hardware
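As a worked illustration of the isoconversional analysis mentioned above, the sketch below applies the standard Flynn-Wall-Ozawa relation, $\ln\beta \approx \text{const} - 1.052\,E_a/(R\,T)$ at fixed conversion, to synthetic heating-rate data; the numerical values are invented for the example and do not come from the study.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def fwo_activation_energy(heating_rates_K_min, temps_at_conversion_K):
    """Flynn-Wall-Ozawa: at a fixed conversion, the slope of ln(beta) vs 1/T
    is approximately -1.052 * Ea / R, so Ea = -slope * R / 1.052."""
    x = 1.0 / np.asarray(temps_at_conversion_K)
    y = np.log(np.asarray(heating_rates_K_min))
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R / 1.052  # J/mol

# Synthetic example: temperatures (K) reached at 50% conversion for three heating rates
betas = [5.0, 10.0, 20.0]        # K/min (assumed)
temps = [560.0, 572.0, 585.0]    # K (assumed)
print(f"Ea ~ {fwo_activation_energy(betas, temps) / 1000:.0f} kJ/mol")
```

Repeating the fit at several conversion levels gives the conversion-dependent activation energy that feeds the heat-source term of the predictive model.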
Elisabetta Sieni, Marco Barozzi, Paolo Sgarbossa
et al.
In the framework of sustainability, proposing new technologies to improve the current linear economy system is one of the most challenging aspects for both academia and industry. In this context, the optimization of wastewater recovery and re-use is among the most crucial aspects to improve; in fact, contaminated waters come from a wide range of industries: cooking, refineries, food, pharmaceuticals, textiles, and agriculture. Heavy metals are among the most critical pollutants, being widely spread (especially in the textile sector) and difficult to remove. In this work, two different sets of Magnetic Nanostructured Adsorbents (MNAs) for cleaning wastewaters containing chromium(III), nickel(II), and copper(II) ions were studied and compared. The first type of MNA was a 2-D nanosheet structure generated using iron(II/III) salts and a sodium (or ammonium) hydroxide solution to decorate a dispersion of graphene oxide (GO) in water. The second type of adsorbent was a 3-D structure composed of GO-MNAs embedded in cross-linked alginate beads. The experiments performed (over a wide range of metal-ion concentrations) showed very promising removal efficiencies (almost complete abatement could be achieved using a proper amount of MNAs) for all tested contaminants, highlighting the better performance of the beads compared with the corresponding 2-D structure.
Chemical engineering, Computer engineering. Computer hardware
We study the computational expressivity of proof systems with fixed point operators, within the 'proofs-as-programs' paradigm. We start with a calculus muLJ (due to Clairambault) that extends intuitionistic logic by least and greatest positive fixed points. Formulated in the sequent calculus, muLJ admits a standard extension to a 'circular' calculus CmuLJ. Our main result is that, perhaps surprisingly, both muLJ and CmuLJ represent the same first-order functions: those provably total in $\Pi^1_2$-$\mathsf{CA}_0$, a subsystem of second-order arithmetic beyond the 'big five' of reverse mathematics and one of the strongest theories for which we have an ordinal analysis (due to Rathjen). This solves various questions in the literature on the computational strength of (circular) proof systems with fixed points. For the lower bound we give a realisability interpretation from an extension of Peano Arithmetic by fixed points that has been shown to be arithmetically equivalent to $\Pi^1_2$-$\mathsf{CA}_0$ (due to Möllerfeld). For the upper bound we construct a novel computability model in order to give a totality argument for circular proofs with fixed points. In fact, we formalise this argument itself within $\Pi^1_2$-$\mathsf{CA}_0$ in order to obtain the tight bounds we are after. Along the way we develop some novel reverse mathematics for the Knaster-Tarski fixed point theorem.
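For reference, the Knaster-Tarski fixed point theorem mentioned above reads, in its standard textbook form (not necessarily the formalisation used in the paper): for a complete lattice $(L, \le)$ and a monotone map $f : L \to L$, the set of fixed points of $f$ is itself a complete lattice; in particular, $f$ has a least fixed point $\mu f = \bigwedge \{\, x \in L \mid f(x) \le x \,\}$ and a greatest fixed point $\nu f = \bigvee \{\, x \in L \mid x \le f(x) \,\}$. These $\mu$ and $\nu$ are the lattice-theoretic counterparts of the least and greatest fixed point operators that muLJ adds to intuitionistic logic.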
Context: On top of the inherent challenges software startups face in applying proper software engineering practices, the non-deterministic nature of machine learning techniques makes this even more difficult for machine learning (ML) startups. Objective: The objective of our study is therefore to understand the whole picture of software engineering practices followed by ML startups and to identify additional needs. Method: To achieve our goal, we conducted a systematic literature review of 37 papers published in the last 21 years. We selected papers on both general software startups and ML startups. We collected data to understand software engineering (SE) practices in five phases of the software development life cycle: requirements engineering, design, development, quality assurance, and deployment. Results: We find some interesting differences between software engineering practices in ML startups and in general software startups; the data management and model learning phases are the most prominent among them. Conclusion: While ML startups face many challenges similar to those of general software startups, the additional difficulties of using stochastic ML models require different strategies in applying software engineering practices to produce high-quality products.
Flexible tactile sensing based on capacitive sensing has become a research hotspot in recent years because of its low energy consumption, high performance, and wide application prospects. However, the axis error caused by the coupling deformation of the dielectric seriously affects the accuracy of the sensor. In this paper, a capacitive flexible three-axis tactile sensor array is modelled and simulated, and a neural-network-based calibrator for the three-axis sensor array is proposed, which can be used to calibrate the simulated measurement data. The simulation results show that even when the correlation coefficient of linear regression for each axis is very close to 1, the effect of the dielectric's nonlinear coupling distortion cannot be eliminated. The calibration method based on the neural network effectively suppresses this nonlinear coupling distortion and reduces the measurement coupling rate of the sensor model from 26% to 1%. At the same time, in order to ensure the measurement accuracy and robustness of different units in the sensor array, the input layer of the calibrator is expanded, and a dataset containing capacitance information and two-dimensional location information is used for training. The experimental results show that the proposed calibration method, which incorporates two-dimensional position information during training, accurately calibrates the capacitive flexible three-axis tactile sensor array.
Computer engineering. Computer hardware, Computer applications to medicine. Medical informatics
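As an illustration of the neural-network calibration idea above (a hypothetical sketch, not the paper's model; the layer sizes, stand-in data, and training setup are assumptions), a small fully connected network can map a unit's three capacitance readings plus its two-dimensional position in the array to calibrated three-axis forces:

```python
import torch
import torch.nn as nn

# Inputs: 3 capacitance readings + 2D (row, col) position of the unit in the array
# Outputs: calibrated (Fx, Fy, Fz)
calibrator = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

# Toy training loop on random stand-in data (real data would be simulated/measured pairs)
x = torch.randn(1024, 5)          # [dC1, dC2, dC3, row, col]
y = torch.randn(1024, 3)          # ground-truth three-axis forces
optim = torch.optim.Adam(calibrator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    optim.zero_grad()
    loss = loss_fn(calibrator(x), y)
    loss.backward()
    optim.step()
print(f"final MSE on toy data: {loss.item():.4f}")
```

Feeding the (row, col) position alongside the capacitances is what lets a single calibrator compensate unit-to-unit variation across the array rather than training one model per unit.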
Hybrid teaching (face-to-face and distance learning) enables students to better prepare for and complete their courses. In science, technology, engineering, and mathematics, it is important that practical training be an integral part of the curriculum. The Laborem project, developed at the university institute of technology in Bayonne, France, enables undergraduate students to carry out part of their electronics lab experiments remotely. Started in 2011, the Laborem platform was initially based on proprietary solutions. Since 2017, the platform has migrated to open-source software (PyScada) and an open-source interface box (the Laborem Box), which was developed to allow the connection of several circuit boards to be studied. These boards, called plugs, are easily interchangeable and enable teachers to quickly adapt the proposed circuits to their course. The software also provides a simple front panel for adapting the human-machine interface available to students. The Laborem Box consists of a 3D-printable enclosure, a power supply board, a set of plugs, and a motherboard that enables students to study the selected plug. In addition, a single-board computer is embedded, and a hard disk can be used if necessary. This paper describes the hardware and software design of the Laborem platform and serves as a guide explaining how to duplicate and deploy this system, which is primarily dedicated to undergraduate students learning basic electronics.
Aiming at the problems of slow speed and low accuracy when detecting high-resolution traffic sign images with existing networks, a lightweight traffic sign detection network is proposed. On the basis of MobileNetv3-Large, this study optimizes the backbone of a YOLOv4 network, discards some time-consuming layers according to the characteristics of the dataset, changes the number of output channels of layers 8 and 14, and improves the Squeeze-and-Excitation Network (SENet) attention mechanism in the basic module so that the output weight values more accurately represent the importance of the features. The study adds a dynamic enhancement attachment based on weak semantic segmentation in front of the detection head and uses its output as a spatial weight distribution to correct the active region, avoiding the false and missed detections caused by reduced feature-extraction ability, finally forming the YOLOv4-SLite network. A sliding-window clipping method is used to train on and predict high-resolution images, reducing the training time and increasing the diversity of samples. Experimental results on the TT100K traffic sign dataset show that, compared with the YOLOv4 baseline network, the mAP@0.5 of YOLOv4-SLite drops by only 0.2%, while the model size is reduced by 96.5% and the response speed is increased by 227%. The achieved balance of accuracy and speed meets expectations.
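The sliding-window clipping step mentioned above can be illustrated with a minimal sketch (hypothetical tile size and overlap, not the paper's exact settings) that cuts a high-resolution image into overlapping tiles so detections can later be mapped back to full-image coordinates:

```python
import numpy as np

def sliding_window_crops(image, tile=608, overlap=128):
    """Cut a high-resolution image (H, W, C) into overlapping square tiles.
    Returns a list of (y0, x0, tile_array) so detections can be mapped back."""
    h, w = image.shape[:2]
    stride = tile - overlap
    crops = []
    for y0 in range(0, max(h - overlap, 1), stride):
        for x0 in range(0, max(w - overlap, 1), stride):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            y0a, x0a = max(y1 - tile, 0), max(x1 - tile, 0)  # shift last tile inward
            crops.append((y0a, x0a, image[y0a:y1, x0a:x1]))
    return crops

img = np.zeros((2048, 2048, 3), dtype=np.uint8)   # a TT100K-sized image
tiles = sliding_window_crops(img)
print(len(tiles), tiles[0][2].shape)
```

Working on tiles keeps small, distant signs at a usable resolution for the lightweight detector while also acting as a form of data augmentation.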
The massive trend of integrating data-driven AI capabilities into traditional software systems is raising new, intriguing challenges. One such challenge is achieving a smooth transition from the explorative phase of Machine Learning projects, in which data scientists build prototypical models in the lab, to their production phase, in which software engineers translate prototypes into production-ready AI components. To narrow the gap between these two phases, the tools and practices adopted by data scientists might be improved by incorporating consolidated software engineering solutions. In particular, computational notebooks have a prominent role in determining the quality of data science prototypes. In my research project, I address this challenge by studying best practices for collaboration with computational notebooks and proposing proof-of-concept tools to foster compliance with the resulting guidelines.
Hongxiang Fan, Thomas Chau, Stylianos I. Venieris
et al.
Attention-based neural networks have become pervasive in many AI tasks. Despite their excellent algorithmic performance, the use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources, which often compromises their hardware performance. Although various sparse variants have been introduced, most approaches only focus on mitigating the quadratic scaling of attention at the algorithm level, without explicitly considering the efficiency of mapping their methods onto real hardware designs. Furthermore, most efforts focus on either the attention mechanism or the FFNs without jointly optimizing both parts, causing most current designs to lack scalability when dealing with different input lengths. This paper systematically considers the sparsity patterns in different variants from a hardware perspective. On the algorithmic level, we propose FABNet, a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs. On the hardware level, a novel adaptable butterfly accelerator is proposed that can be configured at runtime via dedicated hardware control to accelerate different butterfly layers using a single unified hardware engine. On the Long-Range-Arena dataset, FABNet achieves the same accuracy as the vanilla Transformer while reducing the amount of computation by 10 to 66 times and the number of parameters by 2 to 22 times. By jointly optimizing the algorithm and hardware, our FPGA-based butterfly accelerator achieves a 14.2 to 23.2 times speedup over state-of-the-art accelerators normalized to the same computational budget. Compared with optimized CPU and GPU designs on Raspberry Pi 4 and Jetson Nano, our system is up to 273.8 and 15.1 times faster, respectively, under the same power budget.
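To make the butterfly sparsity pattern above concrete, here is a minimal sketch (hypothetical sizes, random weights; not FABNet's actual layers) of a butterfly-factored linear transform, in which a dense $N \times N$ matrix is replaced by $\log_2 N$ stages of $2 \times 2$ mixing blocks, reducing the cost from $O(N^2)$ to $O(N \log N)$:

```python
import numpy as np

def random_butterfly_factors(n, rng):
    """One 2x2 mixing matrix per index pair at each of log2(n) stages (FFT-like pattern)."""
    stages = int(np.log2(n))
    return [rng.standard_normal((n // 2, 2, 2)) for _ in range(stages)]

def butterfly_apply(x, factors):
    """Apply the butterfly product to a length-n vector in O(n log n) operations."""
    n = x.shape[0]
    y = x.copy()
    for s, blocks in enumerate(factors):
        stride = 1 << s                      # distance between paired indices at stage s
        out = np.empty_like(y)
        pair = 0
        for start in range(0, n, 2 * stride):
            for i in range(start, start + stride):
                a, b = y[i], y[i + stride]
                m = blocks[pair]
                out[i] = m[0, 0] * a + m[0, 1] * b
                out[i + stride] = m[1, 0] * a + m[1, 1] * b
                pair += 1
        y = out
    return y

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)
print(butterfly_apply(x, random_butterfly_factors(n, rng)))
```

The regular, stage-by-stage structure of this pattern is what makes a single runtime-configurable hardware engine plausible: every stage is the same block-sparse computation with a different stride.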
A sanitary landfill is a very viable option for properly managing a population's solid waste, provided the work is carried out under a sound operation and maintenance plan. We therefore analyse these two important aspects, since they must be well coordinated and planned, with the aim of giving the responsible staff enough information for the correct functioning of the sanitary landfill. For the operation stage to begin, it is essential to receive the solid waste delivered by the heavy transport equipment, which passes through the unloading area to be emptied into the cells and compacted into the smallest possible space while respecting the conditions for optimal work. Once this is done, the daily cover layer is placed. To keep the sanitary landfill in good condition, waterproofing and filtration, bottom slopes, internal drainage piping, and gas-venting chimneys must be provided, together with controls on factors such as stability, settlement, rainwater, dust, and odours, among others, which could have future repercussions on the sanitary landfill.
Aminullah Mohtar, Anbukarasi Ravi, Wai Shin Ho
et al.
Palm oil mill effluent (POME) is a source of biogas that can substitute for fossil fuel. The high biological oxygen demand (BOD) and chemical oxygen demand (COD) of POME make it well suited to producing large amounts of biogas through anaerobic digestion. The purpose of this research is to develop a mathematical model to determine the optimal process pathway for biogas, covering the purification technology, the mode of transportation, and the utilization. A hypothetical case study is conducted to run and test the model, in which different target locations with different utilization modes were chosen. The model selected membrane separation of the biogas, pipeline transportation to the targeted site, and electricity generation as the optimal pathway for biogas processing and utilization. A sensitivity analysis was performed to determine the impact of product price on the selection of the biogas process pathway. It revealed that the price of Bio-CNG affects the model's choice and suggested that the Bio-CNG sales price should be at least 10.4 USD/GJ to be economically feasible.
Chemical engineering, Computer engineering. Computer hardware
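As a toy illustration of the pathway-selection model described above (the technology options and cost/revenue figures here are invented for the example and are not the study's data), one can enumerate purification, transport, and utilization combinations and pick the most profitable one, then sweep the Bio-CNG price as a simple sensitivity analysis:

```python
from itertools import product

# Hypothetical per-GJ costs and revenues in USD (assumed values)
purification = {"membrane": 1.5, "water_scrubbing": 1.8, "amine": 2.2}
transport = {"pipeline": 0.8, "truck_bio_cng": 1.6}
utilization = {"electricity": 9.0, "bio_cng_sale": 10.4}   # revenue per GJ

def best_pathway(bio_cng_price=None):
    """Enumerate all pathway combinations and return the most profitable one."""
    util = dict(utilization)
    if bio_cng_price is not None:
        util["bio_cng_sale"] = bio_cng_price     # knob for the sensitivity analysis
    best = None
    for p, t, u in product(purification, transport, util):
        profit = util[u] - purification[p] - transport[t]
        if best is None or profit > best[0]:
            best = (profit, p, t, u)
    return best

print(best_pathway())                  # baseline selection
for price in (8.0, 10.4, 12.0):        # simple sensitivity sweep on Bio-CNG price
    print(price, best_pathway(bio_cng_price=price))
```

The actual model would add capacity, distance, and capital-cost constraints, but the structure of the decision, choosing one option per stage to maximize net value, is the same.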
WANG Zi, WANG Zhihua, HAN Yong, JIN Jianlong, HUANG Tianming, ZHU Jiang
The security control and production management of the power system depend heavily on network communication between the various levels of regulatory agencies, and cyberspace security events constantly threaten the stable operation of the power grid. To meet the needs of the power monitoring system architecture and collaborative network security protection, a multi-level, deeply distributed collaborative defense model is designed and proposed, and a set of implementation methods is given from the perspectives of the model architecture, the technical methods, and the functional mechanisms of each module. Based on the characteristics of in-domain self-defense and cross-domain cooperative defense, the model cooperates with security protection devices to perform multi-level active collaborative defense from the host layer and the security-device layer up to the network layer, selecting the defense decision with the highest degree of correlation through grey correlation decision-making. The analysis shows that the model is capable of real-time monitoring of network security risks, rapid response to security threats, and dynamic handling of cyber security events, which can effectively improve the level of network security protection of power monitoring systems.
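The grey correlation (grey relational) decision step mentioned above can be illustrated with a minimal sketch; the candidate defense actions, criteria, and weights below are invented for the example and are not the paper's data. The candidate whose criteria profile is closest to an ideal reference, measured by the grey relational grade, is selected.

```python
import numpy as np

def grey_relational_grades(candidates, reference, rho=0.5, weights=None):
    """Grey relational analysis: score each candidate's closeness to the reference.
    candidates: (n_candidates, n_criteria), all criteria normalized to [0, 1]."""
    candidates = np.asarray(candidates, dtype=float)
    reference = np.asarray(reference, dtype=float)
    delta = np.abs(candidates - reference)                 # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # grey relational coefficients
    if weights is None:
        weights = np.full(candidates.shape[1], 1.0 / candidates.shape[1])
    return coeff @ weights                                 # weighted grade per candidate

# Hypothetical defense actions scored on (threat match, inverted response cost, impact containment)
actions = [
    [0.9, 0.6, 0.8],   # block at host layer
    [0.7, 0.9, 0.6],   # isolate at security-device layer
    [0.8, 0.5, 0.9],   # reroute at network layer
]
reference = [1.0, 1.0, 1.0]   # ideal action
grades = grey_relational_grades(actions, reference)
print(grades, "-> choose action", int(np.argmax(grades)))
```

Ranking by grey relational grade is well suited to this setting because it works with few samples and uncertain, incomplete information about an ongoing security event.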