A Course on the Introduction to Quantum Software Engineering: Experience Report
Andriy Miranskyy
Quantum computing is increasingly practiced through programming, yet most educational offerings emphasize algorithmic or framework-level use rather than software engineering concerns such as testing, abstraction, tooling, and lifecycle management. This paper reports on the design and first offering of a cross-listed undergraduate-graduate course that frames quantum computing through a software engineering lens, focusing on early-stage competence relevant to software engineering practice. The course integrates foundational quantum concepts with software engineering perspectives, emphasizing executable artifacts, empirical reasoning, and trade-offs arising from probabilistic behaviour, noise, and evolving toolchains. Evidence is drawn from instructor observations, student feedback, surveys, and analysis of student work. Despite minimal prior exposure to quantum computing, students were able to engage productively with quantum software engineering topics once a foundational understanding of quantum information and quantum algorithms, expressed through executable artifacts, was established. This experience report contributes a modular course design, a scalable assessment model for mixed academic levels, and transferable lessons for software engineering educators developing quantum computing curricula.
Sotto Voce: Low-Noise Sampling of Sparse Fixed-Weight Vectors
Décio Luiz Gazzoni Filho, Gora Adj, Slim Bettaieb
et al.
Many post-quantum cryptosystems require generating an n-bit binary vector with a prescribed Hamming weight ω, a process known as fixed-weight sampling. When ω = O(n), we call this dense fixed-weight sampling, which commonly appears in lattice-based cryptosystems, like those in the NTRU family. In contrast, code-based cryptosystems typically use sparse fixed-weight sampling with ω = o(n) (e.g., O(√n)). Sparse fixed-weight sampling generally involves three constant-time steps to keep the sampled vector secret: (1) sample ω nearly uniform random integers from a series of decreasing intervals; (2) map these integers into a set of ω distinct indices in [0, n), called the support; (3) generate a binary n-bit vector with bits set only at the support indices. Remarkably, some of the core algorithms employed in fixed-weight sampling date back nearly a century, yet developing efficient and secure techniques remains essential for modern post-quantum cryptographic applications.
In this paper, we present novel algorithms for steps two and three of the fixed-weight sampling process. We demonstrate their practical applicability by replacing the current fixed-weight sampling routine in the HQC post-quantum key encapsulation mechanism, recently selected for NIST standardization. We rigorously prove that our procedures are sound, secure, and introduce little to no bias. Our implementation of the proposed algorithms accelerates step 2 by up to 2.9x and step 3 by up to 5.8x compared to an optimized version of the fixed-weight sampler currently used in HQC. Since fixed-weight sampling constitutes a significant portion of HQC’s execution time, these speedups translate into protocol-level improvements of up to 1.37x, 1.28x, and 1.21x for key generation, encapsulation, and decapsulation, respectively.
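The three steps described above can be sketched in Python. This is an illustrative sketch only: it uses a sparse Fisher-Yates shuffle rather than the authors' algorithms, and it is not constant-time, whereas real cryptographic samplers must be constant-time to keep the support secret.

```python
import secrets

def sample_fixed_weight(n, w):
    """Sample a uniform n-bit vector of Hamming weight w.

    Illustrative sparse Fisher-Yates sketch; NOT constant-time,
    unlike the hardened samplers used in post-quantum schemes.
    """
    a = {}          # sparse view of the virtual array [0, 1, ..., n-1]
    support = []
    for i in range(w):
        # Step 1: random integer from a decreasing interval [i, n)
        j = i + secrets.randbelow(n - i)
        # Step 2: map to a distinct index via a (sparse) swap
        a_i, a_j = a.get(i, i), a.get(j, j)
        support.append(a_j)
        a[i], a[j] = a_j, a_i
    # Step 3: expand the support into the n-bit vector
    v = [0] * n
    for idx in support:
        v[idx] = 1
    return v, sorted(support)
```

Because positions 0..i-1 always hold already-chosen values and j ≥ i, each draw picks a not-yet-chosen index, so the support is distinct by construction.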
Computer engineering. Computer hardware, Information technology
Extending Resource Constrained Project Scheduling to Mega-Projects with Model-Based Systems Engineering & Hetero-functional Graph Theory
Amirreza Hosseini, Amro M. Farid
Within the project management context, project scheduling serves as an indispensable component, functioning as a fundamental tool for planning, monitoring, controlling, and managing projects more broadly. Although the resource-constrained project scheduling problem (RCPSP) lies at the core of project management activities, it remains largely disconnected from the broader literature on model-based systems engineering (MBSE), thereby limiting its integration into the design and management of complex systems. The original contribution of this paper is twofold. First, the paper seeks to reconcile the RCPSP with the broader literature and vocabulary of model-based systems engineering and hetero-functional graph theory (HFGT). A concrete translation pipeline from an activity-on-node network to a SysML activity diagram, and then to an operand net, is constructed. Using this representation, the paper specializes the hetero-functional network minimum-cost flow (HFNMCF) formulation to the RCPSP context as a systematic means of applying HFGT to quantitative analysis, and proves that the RCPSP is recoverable as a special case of this broader model. Second, on an illustrative instance with renewable and non-renewable operands, the specialized HFNMCF, while producing similar schedules, yields explicit explanations of the project states that enable richer monitoring and control. Overall, the framework preserves the strengths of the classical RCPSP while accommodating real-world constraints and enterprise-level decision processes encountered in large, complex megaprojects.
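For readers unfamiliar with the baseline problem the paper generalizes, the classical RCPSP is commonly approximated with a serial schedule-generation scheme. The sketch below is not from the paper; it is a minimal single-resource heuristic (activities assumed topologically ordered) that schedules each activity at its earliest precedence- and resource-feasible start.

```python
def serial_sgs(durations, preds, demand, capacity):
    """Serial schedule-generation scheme for a single-resource RCPSP.

    Greedy heuristic: processes activities in (topological) index order
    and starts each at the earliest time that respects both precedence
    constraints and the renewable resource capacity.
    """
    n = len(durations)
    start = {}
    horizon = sum(durations) + max(durations)
    usage = [0] * horizon              # resource profile per time unit
    for a in range(n):
        # earliest start permitted by all predecessors
        est = max((start[p] + durations[p] for p in preds[a]), default=0)
        t = est
        # shift right until the resource profile can absorb the demand
        while any(usage[u] + demand[a] > capacity
                  for u in range(t, t + durations[a])):
            t += 1
        start[a] = t
        for u in range(t, t + durations[a]):
            usage[u] += demand[a]
    makespan = max(start[a] + durations[a] for a in range(n))
    return start, makespan

# Tiny instance: activities 1 and 2 both need the full capacity,
# so they cannot overlap; the resulting makespan is 8.
schedule, makespan = serial_sgs(
    durations=[3, 2, 2, 1],
    preds={0: [], 1: [0], 2: [0], 3: [1, 2]},
    demand=[2, 2, 2, 2],
    capacity=2,
)
```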
Software Engineering as a Domain to Formalize
Bertrand Meyer
Software engineering concepts and processes are worthy of formal study; and yet we seldom formalize them. This "research ideas" article explores what a theory of software engineering could and should look like. Software engineering research has developed formal techniques of specification and verification as an application of mathematics to specify and verify systems addressing needs of various application domains. These domains usually do not include the domain of software engineering itself. It is, however, a rich domain with many processes and properties that cry for formalization and potential verification. This article outlines the structure of a possible theory of software engineering in the form of an object-oriented model, isolating abstractions corresponding to fundamental software concepts of project, milestone, code module, test and other staples of our field, and their mutual relationships. While the presentation is only a sketch of the full theory, it provides a set of guidelines for how a comprehensive and practical Theory of Software Engineering should (through an open-source community effort) be developed.
Coding With AI: From a Reflection on Industrial Practices to Future Computer Science and Software Engineering Education
Hung-Fu Chang, MohammadShokrolah Shirazi, Lizhou Cao
et al.
Recent advances in large language models (LLMs) have introduced new paradigms in software development, including vibe coding, AI-assisted coding, and agentic coding, fundamentally reshaping how software is designed, implemented, and maintained. Prior research has primarily examined AI-based coding at the individual level or in educational settings, leaving industrial practitioners' perspectives underexplored. This paper addresses this gap by investigating how LLM coding tools are used in professional practice, the associated concerns and risks, and the resulting transformations in development workflows, with particular attention to implications for computing education. We conducted a qualitative analysis of 57 curated YouTube videos published between late 2024 and 2025, capturing reflections and experiences shared by practitioners. Following a filtering and quality assessment process, the selected sources were analyzed to compare LLM-based and traditional programming, identify emerging risks, and characterize evolving workflows. Our findings reveal definitions of AI-based coding practices, notable productivity gains, and lowered barriers to entry. Practitioners also report a shift in development bottlenecks toward code review and concerns regarding code quality, maintainability, security vulnerabilities, ethical issues, erosion of foundational problem-solving skills, and insufficient preparation of entry-level engineers. Building on these insights, we discuss implications for computer science and software engineering education and argue for curricular shifts toward problem-solving, architectural thinking, code review, and early project-based learning that integrates LLM tools. This study offers an industry-grounded perspective on AI-based coding and provides guidance for aligning educational practices with rapidly evolving professional realities.
Vision-Proprioception Fusion with Mamba2 in End-to-End Reinforcement Learning for Motion Control
Xiaowen Tao, Yinuo Wang, Jinzhao Zhou
End-to-end reinforcement learning (RL) for motion control trains policies directly from sensor inputs to motor commands, enabling unified controllers for different robots and tasks. However, most existing methods are either blind (proprioception-only) or rely on fusion backbones with unfavorable compute-memory trade-offs. Recurrent controllers struggle with long-horizon credit assignment, and Transformer-based fusion incurs quadratic cost in token length, limiting temporal and spatial context. We present a vision-driven cross-modal RL framework built on SSD-Mamba2, a selective state-space backbone that applies state-space duality (SSD) to enable both recurrent and convolutional scanning with hardware-aware streaming and near-linear scaling. Proprioceptive states and exteroceptive observations (e.g., depth tokens) are encoded into compact tokens and fused by stacked SSD-Mamba2 layers. The selective state-space updates retain long-range dependencies with markedly lower latency and memory use than quadratic self-attention, enabling longer look-ahead, higher token resolution, and stable training under limited compute. Policies are trained end-to-end under curricula that randomize terrain and appearance and progressively increase scene complexity. A compact, state-centric reward balances task progress, energy efficiency, and safety. Across diverse motion-control scenarios, our approach consistently surpasses strong state-of-the-art baselines in return, safety (collisions and falls), and sample efficiency, while converging faster at the same compute budget. These results suggest that SSD-Mamba2 provides a practical fusion backbone for resource-constrained robotic and autonomous systems in engineering informatics applications.
A comprehensive review of sensor technologies, instrumentation, and signal processing solutions for low-power Internet of Things systems with mini-computing devices
Alexandros Gazis, Ioannis Papadongonas, Athanasios Andriopoulos
et al.
This article provides a comprehensive overview of sensors commonly used in low-cost, low-power systems, focusing on key concepts such as Internet of Things (IoT), Big Data, and smart sensor technologies. It outlines the evolving roles of sensors, emphasizing their characteristics, technological advancements, and the transition toward "smart sensors" with integrated processing capabilities. The article also explores the growing importance of mini-computing devices in educational environments. These devices provide cost-effective and energy-efficient solutions for system monitoring, prototype validation, and real-world application development. By interfacing with wireless sensor networks and IoT systems, mini-computers enable students and researchers to design, test, and deploy sensor-based systems with minimal resource requirements. Furthermore, this article examines the most widely used sensors, detailing their properties and modes of operation to help readers understand how sensor systems function. The aim of this study is to provide an overview of the most suitable sensors for various applications by explaining their uses and operations in simple terms. This clarity will assist researchers in selecting the appropriate sensors for educational and research purposes or understanding why specific sensors were chosen, along with their capabilities and possible limitations. Ultimately, this research seeks to equip future engineers with the knowledge and tools needed to integrate cutting-edge sensor networks, IoT, and Big Data technologies into scalable, real-world solutions.
Empowering text classification with NLP and explainable AI for enhanced interpretability
Sumaya Mustafa, Mariwan Hama Saeed
Artificial intelligence (AI) models have demonstrated significant success in classifying various types of text. However, the complex nature of these models often complicates the interpretability of their classifications. To address these challenges and to enhance explainability, this study proposes a novel approach to text classification leveraging natural language processing (NLP) techniques and explainable AI (XAI) methods. Text preprocessing steps were essential for improving the quality of text analysis; this was achieved by eliminating elements that contribute minimal semantic value. To achieve robust performance and mitigate the risk of overfitting, repeated stratified K-fold cross-validation was utilized. Furthermore, the synthetic minority oversampling technique (SMOTE) was employed to address dataset imbalance issues. In the classification phase, nine machine learning models and hybrid/multi-model approaches were employed. To validate the explainability of the classifications, the local interpretable model-agnostic explanations (LIME) framework was utilized. The study used two datasets containing texts from domains such as sports, medicine, entertainment, politics, technology, and business. Empirical evaluations demonstrated the effectiveness of the proposed approach: the hybrid model achieved strong performance across key metrics, including accuracy, precision, recall, and F1-score, reaching up to 99% accuracy. This work can be used for various text analysis applications.
Electrical engineering. Electronics. Nuclear engineering, Information technology
On homomorphic encryption based strategies for class imbalance in federated learning
Arpit Guleria, Harshan Jagadeesh, Ranjitha Prasad
et al.
Class imbalance in training datasets can lead to bias and poor generalization in machine learning models. While pre-processing of training datasets is an efficient way to address both of these issues in centralized learning environments, it is challenging to detect and address them in a distributed learning environment such as federated learning. In this paper, we propose FLICKER, a privacy-preserving framework to address issues related to global class imbalance in federated learning. At the heart of our contribution lies the popular Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme, which is used by the clients to privately share their data attributes and subsequently balance their datasets before implementing the federated learning scheme. Extensive experimental results show that our proposed method improves federated learning accuracy by up to 8% when used along with popular datasets and relevant baselines.
Information technology, Electronic computers. Computer science
Advances in Additive Friction Extrusion Deposition (AFED): Process and Tool Design
Max Hossfeld, Arnold Wright
Additive friction extrusion deposition (AFED) is a recently developed additive manufacturing technique that promises high deposition rates at low forces. Due to the novelty of the process, the underlying phenomena and their interactions are not fully understood, and in particular, the processing strategy and tool design are still in their infancy. This work contributes to the state-of-the-art of AFED through a comprehensive analysis of its working principles and an experimental program, including a representative sample component. The working principle and process mechanics of AFED are broken down into their individual components. The forces and their origins and effects on the process are described, and measures of process efficiency and theoretical minimum energy consumption are derived. Three geometrical features of the extrusion die were identified as most relevant to the active material flow, process forces, and deposition quality: the topography of the inner and outer circular surfaces and the geometry of its extrusion channels. Based on this, the experimental program investigated seven different tool designs in terms of efficiency, force reduction, and throughput. The experiments using AA 6061-T6 as feedstock show that AFED is capable of both high material throughput (close to 550 mm³/s) and reduced substrate forces, for example, the forces for a run at 100 mm³/s remained continuously below 500 N and for a run at 400 mm³/s below 3500 N. The material flow and microstructure of AFED were assessed from macro-sections. Significant differences were found between the advancing and retracting sides for both process effects and material flow. Banded structures in the microstructure show strong similarities to other solid-state processes. The manufacturing of the sample components demonstrates that AFED is already capable of producing industrial-grade components.
In mechanical tests, interlayer bonding defects resulted in more brittle failure behavior in the build direction of the structure, whereas in the horizontal direction, mechanical properties corresponding to a T4 temper were achieved.
Production capacity. Manufacturing capacity
Urban delineation through a prism of intraday commute patterns
Yuri Bogomolov, Alexander Belyi, Stanislav Sobolevsky
et al.
Introduction: Urban mobility patterns are crucial for effective urban and transportation planning. This study investigates the dynamics of urban mobility in Brno, Czech Republic, utilizing the rich dataset provided by passive mobile phone data. Understanding these patterns is essential for optimizing infrastructure and planning strategies. Methods: We developed a methodological framework that incorporates bidirectional commute flows and integrates both urban and suburban commute networks. This comprehensive approach allows for a detailed representation of Brno's mobility landscape. By employing clustering techniques, we aimed to identify distinct mobility patterns within the city. Results: Our analysis revealed consistent structural features within Brno's mobility patterns. We identified three distinct clusters: a central business district, residential communities, and an intermediate hybrid cluster. These clusters highlight the diversity of mobility demands across different parts of the city. Discussion: The study demonstrates the significant potential of passive mobile phone data in enhancing our understanding of urban mobility patterns. The insights gained from intraday mobility data are invaluable for transportation planning decisions, allowing for the optimization of infrastructure utilization. The identification of distinct mobility patterns underscores the practical utility of our methodological advancements in informing more effective and efficient transportation planning strategies.
On the solvability of some parabolic equations involving nonlinear boundary conditions with L^{1} data
Laila Taourirte, Abderrahim Charkaoui, Nour Eddine Alaa
We analyze the existence of solutions for a class of quasilinear parabolic equations with critical growth nonlinearities, nonlinear boundary conditions, and \(L^1\) data. We formulate our problems in an abstract form; then, using techniques of functional analysis such as the Leray-Schauder topological degree combined with the truncation method and suitable compactness results, we establish the existence of weak solutions to the proposed models.
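As a concrete illustration (ours, not necessarily the authors' exact formulation), a prototypical problem in this class is a p-Laplacian evolution equation with a nonlinear flux boundary condition and integrable data:

```latex
\begin{equation*}
\begin{cases}
\partial_t u - \operatorname{div}\!\left(|\nabla u|^{p-2}\nabla u\right) = f
  & \text{in } Q_T = \Omega \times (0,T),\\[2pt]
|\nabla u|^{p-2}\,\partial_\nu u + g(u) = h
  & \text{on } \Sigma_T = \partial\Omega \times (0,T),\\[2pt]
u(\cdot,0) = u_0 & \text{in } \Omega,
\end{cases}
\end{equation*}
```

with \(f \in L^1(Q_T)\), \(h \in L^1(\Sigma_T)\), \(u_0 \in L^1(\Omega)\), and \(g\) continuous and nondecreasing with \(g(0)=0\). The low integrability of the data is precisely what makes truncation and compactness arguments necessary, since the usual energy estimates are unavailable.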
Applied mathematics. Quantitative methods
Simulation-driven engineering for the management of harmful algal and cyanobacterial blooms
José L. Risco-Martín, Segundo Esteban, Jesús Chacón
et al.
Harmful Algal and Cyanobacterial Blooms (HABs), occurring in inland and maritime waters, pose threats to natural environments by producing toxins that affect human and animal health. In the past, HABs have been assessed mainly by the manual collection and subsequent analysis of water samples and occasionally by automatic instruments that acquire information from fixed locations. These procedures do not provide data with the desirable spatial and temporal resolution to anticipate the formation of HABs. Hence, new tools and technologies are needed to efficiently detect, characterize, and respond to HABs that threaten water quality. This is especially important now that the world's water supply is under tremendous pressure from climate change, overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel framework for real-time monitoring and management of HABs. Its purpose is to support high-performance hazard detection with Model-Based Systems Engineering (MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic environments.
MATHEMATICAL AND COMPUTER MODELS OF THE COVID-19 EPIDEMIC
Indira Uvaliyeva, Saule Belginova, Sanzhar Sovetbekov
The COVID-19 epidemic has gone down in history as an emergency of international importance. The number of people infected with the coronavirus around the world continues to grow, and modeling such a complex system as the spread of infection is one of the most pressing problems. Various models are used to understand the progress of the COVID-19 epidemic and to plan effective control strategies. Such models require the use of advanced computing, such as artificial intelligence, machine learning, cloud computing, and edge computing. This article uses the SIR mathematical model, which is simple and widely used for modeling the prevalence of COVID-19 infection. The SIR model can provide a theoretical basis for studying the prevalence of the COVID-19 virus in a specific population and an understanding of the temporal evolution of the virus. One of the main advantages of this model is the ease of adjusting its parameters as the scale of the study increases and of relating the resulting curves to the observed data. Computer models based on the mathematical SIR model of the spread of the COVID-19 epidemic make it possible to estimate the number of possible deaths in the future. In addition, on the basis of the proposed models, it will be possible to assess the effectiveness of infection-prevention measures by comparing published data with forecasts. Computer models in Python were created on the basis of the proposed SIR mathematical apparatus. The following libraries were used in Python for the numerical solution of the SIR system of differential equations: NumPy, Matplotlib's pyplot, and the integrate module from the SciPy library.
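A minimal, dependency-free sketch of the kind of SIR computation described above might look as follows. For portability it uses a fixed-step Euler integrator instead of SciPy's integrate module, and the parameter values are illustrative, not fitted to any epidemic data.

```python
def sir_simulate(beta, gamma, s0, i0, r0, days, steps_per_day=100):
    """Integrate the SIR equations
        dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    with a fixed-step Euler scheme.  Returns one (t, S, I, R) sample per day.
    """
    n = s0 + i0 + r0                     # total (conserved) population
    s, i, r = float(s0), float(i0), float(r0)
    dt = 1.0 / steps_per_day
    trajectory = [(0.0, s, i, r)]
    for step in range(1, days * steps_per_day + 1):
        new_inf = beta * s * i / n       # infection rate at this instant
        new_rec = gamma * i              # recovery rate at this instant
        s -= new_inf * dt
        i += (new_inf - new_rec) * dt
        r += new_rec * dt
        if step % steps_per_day == 0:
            trajectory.append((step * dt, s, i, r))
    return trajectory

# Illustrative run: basic reproduction number R0 = beta/gamma = 3
traj = sir_simulate(beta=0.3, gamma=0.1, s0=990, i0=10, r0=0, days=160)
```

With these illustrative parameters the epidemic peaks and then dies out, while S + I + R stays equal to the initial population throughout, which is a quick sanity check on any SIR implementation.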
A New Subject-Sensitive Hashing Algorithm Based on MultiRes-RCF for Blockchains of HRRS Images
Kaimeng Ding, Shiping Chen, Jiming Yu
et al.
To address the deficiency that blockchain technology is overly sensitive to binary-level changes in high-resolution remote sensing (HRRS) images, we propose a new subject-sensitive hashing algorithm specifically for HRRS image blockchains. To implement this subject-sensitive hashing algorithm, we designed and implemented a deep neural network model, MultiRes-RCF (richer convolutional features), for extracting features from HRRS images. A MultiRes-RCF network is an improved RCF network that borrows the MultiRes mechanism of MultiResU-Net. The subject-sensitive hashing algorithm based on MultiRes-RCF can detect subtle tampering of HRRS images while maintaining robustness to operations that do not change the content of the HRRS images. Experimental results show that our MultiRes-RCF-based subject-sensitive hashing algorithm has better tamper sensitivity than existing deep learning models such as RCF, AAU-net, and Attention U-net, meeting the needs of HRRS image blockchains.
Industrial engineering. Management engineering, Electronic computers. Computer science
Reducing the False Negative Rate in Deep Learning Based Network Intrusion Detection Systems
Jovana Mijalkovic, Angelo Spognardi
Network Intrusion Detection Systems (NIDS) represent a crucial component in the security of a system, and their role is to continuously monitor the network and alert the user of any suspicious activity or event. In recent years, the complexity of networks has been rapidly increasing, and network intrusions have become more frequent and less detectable. The increase in complexity pushed researchers to boost NIDS effectiveness by introducing machine learning (ML) and deep learning (DL) techniques. However, even with the addition of ML and DL, some issues still need to be addressed: high false negative rates (FNR) and low attack predictability for minority classes. The aim of this study was to address these problems, which have not been adequately addressed in the literature. First, we built a deep learning model for network intrusion detection able to perform both binary and multiclass classification of network traffic. The goal of this base model was to achieve at least the same, if not better, performance than the models observed in state-of-the-art research. Then, we proposed an effective refinement strategy and generated several models for lowering the FNR and increasing the predictability of the minority classes. The obtained results proved that, with the proper parameters, it is possible to achieve a satisfying trade-off between FNR, accuracy, and detection of the minority classes.
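The central metric above, the false negative rate, is simple to state precisely: FNR = FN / (FN + TP), the fraction of true positives (attacks) that the detector labels as benign. A small helper (ours, for illustration; not the paper's code) makes the definition concrete:

```python
def fnr(y_true, y_pred, positive=1):
    """False negative rate for one class: FN / (FN + TP).

    In NIDS terms: the fraction of real attacks (label == positive)
    that the detector misclassified as benign.
    """
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    return fn / (fn + tp) if (fn + tp) else 0.0
```

For example, with y_true = [1, 1, 1, 1, 0, 0] and y_pred = [1, 1, 0, 0, 0, 1], two of the four attacks are missed, so the FNR is 0.5 even though overall accuracy is 0.5 as well; this is why FNR must be tracked separately from accuracy, especially for minority attack classes.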
Industrial engineering. Management engineering, Electronic computers. Computer science
Educational Digital Escape Rooms Footprint on Students’ Feelings: A Case Study within Aerospace Engineering
Luis M. Sánchez-Ruiz, Salvador López-Alfonso, Santiago Moll-López
et al.
The introduction of game-based learning techniques has significantly influenced learning, motivation, and information processing in both traditional and digital learning environments. This paper studies the footprint that the implementation of ten short-duration digital escape rooms has had on the creation of an environment of positive emotions in the educational field. The digital escape rooms were created with the Genial.ly platform and RPG Maker MZ software. A feelings/satisfaction questionnaire was administered to study which emotions students experienced, as well as the students’ opinions about essential elements of digital escape rooms, and to determine whether positive feelings predominate in these activities. Results show a high incidence of positive emotions, a very favorable opinion of the tools employed, and positive feelings about the acquisition of knowledge and skills.
Teaching Informatics to Adults of Vocational Schools during the Pandemic: Students’ Views and the Role of Neuroeducation
Spyridon Doukakis, Maria Niari, Elen Malliou
et al.
The COVID-19 pandemic has had an extremely significant impact on the educational process. The need to continue educational practice, despite the restrictions imposed on movement, led to a change in the way students participate and learn, as well as in the way educators teach and communicate. The aim of the present research study was to record the perspectives and views of adult students of evening upper secondary schools of the informatics sector in relation to the challenges, experiences, and learning involvement in online courses. The study was conducted using a questionnaire with open-ended questions given to all students of an evening vocational upper secondary school in a semi-urban island region. The findings show that the way students are taught, the distractions and responsibilities of the students, and their feelings concerning the pandemic were the major challenges they faced. These challenges affected both their involvement and their learning experiences in the educational process. Finally, it appears that the low interaction among students and educators, the technical difficulties, and the lack of a structured learning framework have had an impact on the effectiveness of online education, according to educational neuroscience principles.
Thermal Properties of Insulating Materials made from Hemp Fibres
Nistorac Andreea, Ailenei Eugen Constantin, Isopescu Dorina Nicolina
et al.
The ecological footprint of residential buildings has seen a significant increase due to the constant desire to improve comfort and aesthetics. The construction and rehabilitation of the existing building stock in accordance with the European standards known as “passive house” imply the minimization of external energy consumption. Thus, the thermal insulation materials used to reduce heat transfer play an essential role in the effort to reduce the environmental footprint generated by residential buildings. Hemp is an important candidate for making heat-insulating materials. Hemp stem fibres are used in non-woven materials to produce heat-insulating materials with applications in construction and in composite materials. The energy balance between the amount of energy saved and that required to produce the heat-insulating materials, as well as the type of raw materials used for their manufacture, is a critical factor in achieving the goal of reducing ecological impact.
Architecture, Engineering (General). Civil engineering (General)
Non-Hermitian physics and engineering in silicon photonics
Changqing Wang, Zhoutian Fu, Lan Yang
Silicon photonics has been studied as an integrable optical platform on which numerous applicable devices and systems are created based on modern physics and state-of-the-art nanotechnologies. The implementation of quantum mechanics has been a driving force behind the most intriguing designs of photonic structures, since optical systems have proven highly capable of realizing analogues of quantum concepts and phenomena. Non-Hermitian physics, which goes beyond the conventional scope of quantum mechanics based on Hermitian Hamiltonians, has been widely explored on the silicon photonics platform through tailored design of the optical refractive index, modal coupling, and gain-loss distribution. As we discuss in this chapter, the unconventional properties of exceptional points and parity-time symmetry realized in silicon photonics have created new opportunities for ultrasensitive sensors, laser engineering, control of light propagation, topological mode conversion, and more. The marriage between quantum non-Hermiticity and classical silicon platforms not only spurs numerous studies of the fundamental physics, but also enriches the potential functionalities of integrated photonic systems.
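The exceptional-point physics mentioned here can be illustrated with the textbook two-mode parity-time-symmetric dimer (a standard model, not a specific system from this chapter): one mode with gain +γ, one with loss -γ, coupled with strength κ. Its eigenvalues are λ = ±√(κ² − γ²), real in the PT-unbroken phase (γ < κ), purely imaginary in the PT-broken phase (γ > κ), and coalescing at the exceptional point γ = κ.

```python
import cmath

def pt_dimer_eigenvalues(kappa, gamma):
    """Eigenvalues of the PT-symmetric dimer Hamiltonian
        H = [[+1j*gamma, kappa],
             [kappa, -1j*gamma]]
    From det(H - lam*I) = 0 one gets lam**2 = kappa**2 - gamma**2, so
        lam = +/- sqrt(kappa**2 - gamma**2):
    real for gamma < kappa (PT-unbroken), imaginary for gamma > kappa
    (PT-broken); both eigenvalues coalesce at gamma = kappa (the
    exceptional point).
    """
    root = cmath.sqrt(kappa**2 - gamma**2)
    return root, -root
```

The square-root dependence near the exceptional point is exactly what underlies the enhanced sensitivity of EP-based sensors: a small perturbation ε splits the eigenvalues by an amount proportional to √ε rather than ε.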
physics.optics, eess.SY