H. Sinan Bank, Daniel R. Herber, Thomas H. Bradley
Engineering system design -- whether mechatronic, control, or embedded -- often proceeds in an ad hoc manner, with requirements left implicit and traceability from intent to parameters largely absent. Existing specification-driven and systematic design methods mostly target software, and AI-assisted tools tend to enter the workflow at solution generation rather than at problem framing. Human--AI collaboration in the design of physical systems remains underexplored. This paper presents Design-OS, a lightweight, specification-driven workflow for engineering system design organized in five stages: concept definition, literature survey, conceptual design, requirements definition, and design definition. Specifications serve as the shared contract between human designers and AI agents; each stage produces structured artifacts that maintain traceability and support agent-augmented execution. We position Design-OS relative to requirements-driven design, systematic design frameworks, and AI-assisted design pipelines, and demonstrate it on a control systems design case using two rotary inverted pendulum platforms -- an open-source SimpleFOC reaction wheel and a commercial Quanser Furuta pendulum -- showing how the same specification-driven workflow accommodates fundamentally different implementations. A blank template and the full design-case artifacts are shared in a public repository to support reproducibility and reuse. The workflow makes the design process visible and auditable, and extends specification-driven orchestration of AI from software to physical engineering system design.
The flow and heat transfer characteristics of helically-coiled tubes are crucial for the design of spiral-tube steam generators. In this paper, the flow and heat transfer characteristics of a vertical helically-coiled tube with an inner diameter of 8.8 mm and a helical diameter of 568 mm were experimentally investigated over a wide pressure range of 0.2-14.1 MPa, with mass fluxes of 49-1902 kg/(m²·s) and heat fluxes in the experimental section of 14.5-580 kW/m². In the experiment, the flow rate of the main loop was adjusted by the valve opening; the system pressure was adjusted by the high-pressure nitrogen cylinder, the metering pump, and the pressure relief valve; the inlet fluid parameters of the experimental section were adjusted by the input power of the preheater and the direct-current (DC) voltage applied to the preheating section; and the heating heat flux was adjusted by the DC voltage applied to the experimental section. The single-phase and two-phase friction coefficients, as well as the heat transfer coefficients for single-phase flow, subcooled boiling, saturated boiling, and dryout, were obtained under different working conditions. Comparison of the experimental results with recent empirical correlations revealed that the formulas of Akagawa, Hart, and Ito predicted the single-phase friction coefficients with high accuracy, within ±5%. The secondary flow increases the critical Reynolds number, which is about 10,000 in this experiment. In the straight-tube laminar range (Re < 2300), the influence of the secondary flow is greater; as the Reynolds number increases, the energy dissipation of the secondary flow in the turbulent region is much smaller than in the laminar region. The current empirical formulas show at least 10%-20% deviation in predicting the two-phase friction coefficient and the heat transfer coefficients in the different regions.
The relative average deviation between the two-phase friction coefficient and the correlations of Chen, Guo, Ferraris, and M-N is about ±20%, and the difference in prediction accuracy between the helical-tube and straight-tube empirical formulas is not obvious. The single-phase water heat transfer coefficient shows the smallest relative average deviation, 18.2%, from Guo et al.'s empirical formula. For the subcooled boiling region, the smallest relative average deviation, −21.1%, is from Hardik's empirical formula. For the saturated boiling region, the smallest relative average deviation, 7.5%, is from the modified Chen formula. For the dryout region, the smallest relative average deviation, 17.9%, is from Gao's empirical formula. These results can provide a reference for the design of helically-coiled tube steam generators.
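As a concrete illustration of the metric used in these comparisons, a minimal sketch of the relative average deviation — taken here, as one common (assumed) definition, to be the signed mean of the relative errors in percent — with purely illustrative numbers:

```python
def relative_average_deviation(measured, predicted):
    # signed mean of (predicted - measured) / measured, in percent
    return 100.0 * sum((p - m) / m for m, p in zip(measured, predicted)) / len(measured)

h_exp = [5200.0, 6100.0, 7300.0]   # measured heat transfer coefficients, W/(m^2*K) (illustrative)
h_pred = [5600.0, 6450.0, 7800.0]  # correlation predictions (illustrative)
print(round(relative_average_deviation(h_exp, h_pred), 1))  # → 6.8
```

A negative value (as for the subcooled boiling comparison above) indicates the correlation under-predicts the measurements on average.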
Jonathan C. Marcks, Benjamin Pingault, Jiefei Zhang
et al.
Semiconductors are the backbone of modern technology, garnering decades of investment in high-quality materials and devices. Electron spin systems in semiconductors, including atomic defects and quantum dots, have been demonstrated over the last two decades to host quantum-coherent spin qubits, often with coherent spin-photon interfaces and proximal nuclear spins. These systems are at the center of developing quantum technology. However, new material challenges arise when considering the isotopic composition of host and qubit systems. The isotopic composition governs the nature and concentration of nuclear spins, which occur naturally in leading host materials. These spins generate magnetic noise -- detrimental to qubit coherence -- but also show promise as local quantum memories and processors, necessitating careful engineering dependent on the targeted application. Reviewing recent experimental and theoretical progress towards understanding local nuclear spin environments in semiconductors, we show that this aspect of materials engineering is critical to quantum information technology.
Baharak Ahmaderaghi, Esha Barlaskar, O. Pishchukhina
et al.
Computer science, including data analytics, is a widely popular field with promising career opportunities. Proficiency in programming is a fundamental requirement for success in this domain. However, students entering MSc programs in data analytics often possess varying levels of programming background, which can impact their performance in assignments. Recognising and addressing these differences through tailored instruction can improve students' outcomes. This paper explores the importance of considering students' programming backgrounds in the data analytics field and highlights strategies to enhance their performance based on prior knowledge. The study was carried out on two different modules in two different pathways; we chose two distinct cohorts and pathways to ensure unbiased conclusions. The initial research was applied to the Database and Programming Fundamentals module for an MSc data analytics cohort, and a Deep Learning module for final-year computer science undergraduates was then used as a validation cohort. In conclusion, this study demonstrated a significant increase in student assignment performance through the implementation of tailored instruction based on students' programming backgrounds. Despite receiving positive student feedback and observing excellent and improved performances, it is important to acknowledge instances of unsatisfactory student performance as well. Both studies were conducted by the School of Electronics, Electrical Engineering, and Computer Science (EEECS) at Queen's University Belfast (QUB) during the academic year 2021/2022.
Evacuation lighting is a crucial component of cinema safety, significantly impacting operational safety and evacuation efficiency. It plays a key role in enhancing evacuation measures and ensuring the safety of cinema patrons. An experiment utilizing virtual reality technology was conducted at Beijing Forestry University with 62 subjects randomly assigned to either a control group or an experimental group. The experimental group was guided by a green flashing light as an evacuation indicator, while the control group relied on static lighting. Although some subjects overlooked the green flashing light, its presence still reduced the number of subjects choosing misleading exits. The flashing light notably improved pathfinding efficiency and evacuation performance, with the experimental group achieving an average evacuation time approximately 30% shorter than the control group. Additionally, subjects rated the sensory, cognitive, and functional aspects of the flashing lighting as moderate to high. The findings indicate that dynamic and flashing evacuation lighting can effectively enhance fire escape efficiency in cinemas. The design of such systems should consider individual psychological responses and actual behavior patterns to optimize emergency evacuation instructions.
Large Language Models (LLMs) have significantly advanced software engineering (SE) tasks, with prompt engineering techniques enhancing their performance in code-related areas. However, the rapid development of foundational LLMs such as the non-reasoning model GPT-4o and the reasoning model o1 raises questions about the continued effectiveness of these prompt engineering techniques. This paper presents an extensive empirical study that reevaluates various prompt engineering techniques within the context of these advanced LLMs. Focusing on three representative SE tasks, i.e., code generation, code translation, and code summarization, we assess whether prompt engineering techniques still yield improvements with advanced models, the actual effectiveness of reasoning models compared to non-reasoning models, and whether the benefits of using these advanced models justify their increased costs. Our findings reveal that prompt engineering techniques developed for earlier LLMs may provide diminished benefits or even hinder performance when applied to advanced models. In reasoning LLMs, sophisticated built-in reasoning reduces the impact of complex prompts, sometimes making simple zero-shot prompting more effective. Furthermore, while reasoning models outperform non-reasoning models in tasks requiring complex reasoning, they offer minimal advantages in tasks that do not need reasoning and may incur unnecessary costs. Based on our study, we provide practical guidance for practitioners on selecting appropriate prompt engineering techniques and foundational LLMs, considering factors such as task requirements, operational costs, and environmental impact. Our work contributes to a deeper understanding of effectively harnessing advanced LLMs in SE tasks, informing future research and application development.
Accurate fault detection and classification help to analyze fault causes and quickly restore faulty phases. Deep learning can automatically extract fault features and identify fault types from the original three-phase voltage and current signals. However, challenges remain in recognition accuracy and computational complexity; more importantly, high-level fault features cannot be extracted from one-dimensional time series. This paper presents a robust fault classification method for transmission systems based on SA-MobileNetV3. Because the SE (squeeze-and-excitation) attention module cannot aggregate spatial information within a channel, an SA (shuffle attention) module is introduced into MobileNetV3, which effectively fuses the importance of pixels across different channels and across different locations within the same channel. In addition, transforming the three-phase voltage and current time series into two-dimensional images via the CWT (continuous wavelet transform) recasts fault classification as an image-recognition problem, which can mine high-level fault features and classify faults visually. To verify the effectiveness of the method, a 735 kV transmission line model is built in Simulink for data generation, and various fault conditions and factors are considered to verify adaptability and generalizability. Simulation results show that the method can quickly and accurately identify 11 types of faults, with an accuracy as high as 99.90%. A comparison with other existing techniques shows the superiority of the proposed SA-MobileNetV3, and its better anti-noise performance makes it more suitable for real fault signals recorded on-site.
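The time-series-to-image step can be sketched as follows: a toy, pure-Python CWT scalogram using a Ricker wavelet and an illustrative step-change signal, standing in for the real three-phase waveforms and a production CWT library.

```python
import math

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet sampled at `points` positions, scale `a`
    amp = 2.0 / (math.sqrt(3.0 * a) * math.pi ** 0.25)
    return [amp * (1 - (t / a) ** 2) * math.exp(-((t / a) ** 2) / 2)
            for t in (i - (points - 1) / 2.0 for i in range(points))]

def cwt_scalogram(signal, scales, width=64):
    # one row per scale: |convolution| of the signal with the scaled wavelet,
    # producing a 2D scales-by-time "image" suitable for a CNN classifier
    rows = []
    for a in scales:
        w = ricker(width, a)
        row = []
        for n in range(len(signal)):
            acc = 0.0
            for k, wk in enumerate(w):
                idx = n + k - width // 2
                if 0 <= idx < len(signal):
                    acc += signal[idx] * wk
            row.append(abs(acc))
        rows.append(row)
    return rows

# hypothetical fault-like signal: a 50 Hz sine whose amplitude jumps at t = 500
sig = [math.sin(2 * math.pi * 50 * t / 1000) * (1.0 if t < 500 else 3.0)
       for t in range(1000)]
img = cwt_scalogram(sig, scales=[2, 4, 8, 16])
print(len(img), len(img[0]))  # → 4 1000 (scales x time samples)
```

The amplitude jump shows up as a localized band in the scalogram, which is the kind of high-level feature the image classifier can exploit.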
Ruiyun Fu, Mary E. Lichtenwalner, Thomas J. Johnson
With the increasing installation of solar energy, electric vehicles, and other distributed energy resources, and the deepening of digitalization and standardization, cybersecurity has become increasingly essential and critical in modern power systems. Unfortunately, most prior research focuses on the cybersecurity of power transmission and distribution networks rather than on distributed energy devices and their grid-connected power converters. Focusing on Grid-Connected Power Electronics Converters (GCPECs), this article provides a comprehensive review of existing results from selected references regarding vulnerabilities, countermeasures, and testbeds. Analysis of the GCPEC's layout and countermeasure candidates shows that GCPEC vulnerabilities span both the cyber and physical layers, which are accessible to malicious hackers; vulnerabilities in the two layers must therefore be considered simultaneously and coordinated with each other. In particular, hardware hardening is an essential approach to enhancing cybersecurity within GCPECs. It is also noted that detection and mitigation approaches should consider the complexity of the algorithms to be applied and assess the limits of computing and data processing capabilities in GCPECs when evaluating the feasibility of countermeasure candidates against cyberattacks in testbeds. In addition, countermeasures should meet relevant standards, such as IEEE 1547.1, IEEE 2030.5, IEC 61850, and IEC 62351, to ensure the interoperability and cybersecurity of GCPEC devices in smart grids. Finally, based on the review and analysis, four recommendations are proposed for future research on GCPEC cybersecurity and its applications in smart grids.
Predicting the maximum usable frequency (MUF) of short-wave communication faces two challenges: the low prediction accuracy of classical model-based methods and the difficulty of obtaining training data for machine learning methods. To address this, a model-data dual-driven bidirectional gated recurrent unit (BiGRU) network for short-term MUF prediction is proposed. On the model-driven side, a large-scale dataset generated by a classical MUF prediction model is used as the training set, and a preliminary network is obtained by jointly training a 2D CNN and the BiGRU network. On the data-driven side, the preliminary network is trained a second time on a small-scale measured dataset to obtain the final network, CNN-BiGRU-NN. Simulation results show that, compared with the GRU network, the LSTM network, and the VOACAP model, the proposed network reduces the average root mean squared error (RMSE) at both daily and momentary scales.
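The two-stage, model-data dual-driven scheme can be illustrated with a deliberately tiny stand-in: a one-parameter linear "network" pretrained on abundant data from an imperfect model, then retrained on scarce measurements. The slopes, learning rate, and dataset sizes are illustrative assumptions, not the paper's 2D CNN + BiGRU.

```python
import random

def sgd_fit(w, data, lr, epochs):
    # plain SGD on squared error for the one-parameter model y ≈ w * x
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

random.seed(0)
# model-driven stage: abundant synthetic data from an (imperfect) model, slope 1.8
synthetic = [(x, 1.8 * x) for x in (random.uniform(0, 1) for _ in range(500))]
# data-driven stage: scarce measured data reflecting the true relation, slope 2.0
measured = [(x, 2.0 * x) for x in (random.uniform(0, 1) for _ in range(20))]

w = sgd_fit(0.0, synthetic, lr=0.1, epochs=5)   # pretraining on model output
w = sgd_fit(w, measured, lr=0.1, epochs=20)     # second training on measurements
print(round(w, 2))                              # → 2.0
```

The pretraining gets the parameter close (1.8), and the small measured set corrects the residual model bias — the same division of labor the dual-driven network relies on.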
Quantum software engineering (QSE) is receiving increasing attention, as evidenced by growing numbers of publications on topics such as quantum software modeling, testing, and debugging. However, quantum software requirements engineering (QSRE) remains a relatively under-investigated area in the literature. To this end, this paper provides an initial set of thoughts on how requirements engineering for quantum software might differ from that for classical software, after making an effort to map classical requirements classifications (e.g., functional and extra-functional requirements) into the context of quantum software. Moreover, we discuss various aspects of QSRE that deserve attention from the quantum software engineering community.
Alexander E. I. Brownlee, James Callan, Karine Even-Mendoza
et al.
Large language models (LLMs) have been successfully applied to software engineering tasks, including program repair. However, their application in search-based techniques such as Genetic Improvement (GI) is still largely unexplored. In this paper, we evaluate the use of LLMs as mutation operators for GI to improve the search process. We expand the Gin Java GI toolkit to call OpenAI's API to generate edits for the JCodec tool. We randomly sample the space of edits using 5 different edit types. We find that the number of patches passing unit tests is up to 75% higher with LLM-based edits than with standard Insert edits. Further, we observe that the patches found with LLMs are generally less diverse compared to standard edits. We ran GI with local search to find runtime improvements. Although many improving patches are found by LLM-enhanced GI, the best improving patch was found by standard GI.
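The LLM-as-mutation-operator loop can be sketched as below; `llm_edit` is a stub standing in for the OpenAI API call made through Gin, and the target program and test oracle are toy assumptions rather than JCodec.

```python
import random

# Toy Genetic Improvement sampling loop with an LLM-style mutation operator.
PROGRAM = ["x = a + a", "return x"]  # illustrative target code, line by line

def llm_edit(line):
    # stub for the LLM call: returns a semantics-preserving rewrite if known
    rewrites = {"x = a + a": "x = 2 * a"}
    return rewrites.get(line, line)

def passes_tests(program):
    # toy unit-test oracle: the patched function must still double its input
    src = "def f(a):\n" + "".join("    " + line + "\n" for line in program)
    env = {}
    exec(src, env)
    return env["f"](3) == 6

random.seed(0)
patches = []
for _ in range(10):                    # randomly sample the space of edits
    prog = PROGRAM[:]
    i = random.randrange(len(prog))
    prog[i] = llm_edit(prog[i])        # mutate one statement via the "LLM"
    if passes_tests(prog):
        patches.append(prog)
print(len(patches))                    # → 10
```

In real GI the mutated statements come back from the model rather than a lookup table, and the retained patches then feed the local search for runtime improvements.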
Niccolo Biasi, Paolo Seghetti, Matteo Mercati
et al.
The purpose of this manuscript is to develop a reaction-diffusion heart model for closed-loop evaluation of heart-pacemaker interaction, and to provide a hardware setup for the implementation of the closed-loop system. The heart model, implemented on a workstation, is based on the cardiac monodomain formulation and a phenomenological model of cardiac cells, which we fitted to the electrophysiological properties of the different cardiac tissues. We modelled the pacemaker as a timed automaton, deployed on an Arduino 2 board. The Arduino and the workstation communicate through a PCI acquisition board. Additionally, we developed a graphical user interface for easy handling of the framework. The myocyte model reproduces the electrophysiological properties of atrial and ventricular tissue. The heart model reproduces a healthy activation sequence and proved to be computationally efficient (i.e., simulating 1 s requires about 5 s of computation). Furthermore, we successfully simulated the interaction between the heart and pacemaker models in three well-known pathological contexts. Our results showed that the PDE formulation is appropriate for closed-loop simulation. While computationally more expensive, a PDE model is more flexible and can represent more complex scenarios than timed or hybrid automata. Furthermore, users can interact more easily with the framework thanks to the graphical representation of the spatiotemporal evolution of the membrane potentials. By representing the heart as a reaction-diffusion model, the proposed closed-loop system provides a novel and promising framework for the assessment of cardiac pacemakers.
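A timed automaton pacemaker of the kind described can be sketched in a few lines; the VVI-like inhibit/pace logic and the 1000 ms lower-rate interval below are illustrative assumptions, not the paper's automaton.

```python
class PacemakerTA:
    """Minimal timed-automaton sketch: pace if no intrinsic beat is sensed
    within the lower-rate interval; a sensed beat inhibits and resets the clock."""
    LRI_MS = 1000  # lower-rate interval (assumed value)

    def __init__(self):
        self.timer = 0
        self.log = []

    def step(self, dt_ms, sensed_beat):
        self.timer += dt_ms
        if sensed_beat:                    # intrinsic beat: inhibit, reset clock
            self.timer = 0
            self.log.append("sense")
        elif self.timer >= self.LRI_MS:    # timeout transition: deliver pulse
            self.timer = 0
            self.log.append("pace")

# simulate 3 s in 100 ms steps: one intrinsic beat at t = 400 ms, then none
pm = PacemakerTA()
for t in range(0, 3000, 100):
    pm.step(100, sensed_beat=(t == 400))
print(pm.log)  # → ['sense', 'pace', 'pace']
```

In the closed-loop setup, `sensed_beat` would come from the heart model via the acquisition board, and the paced pulse would be fed back as a stimulus current.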
To address the problem that existing deep learning methods are not sufficiently accurate to detect rice pests with changeable shapes or similar appearances, a self-attention feature fusion model for rice pest detection (SAFFPest) was proposed. The model was based on VarifocalNet. First, a deformable convolution module was added to the feature extraction network, to improve the feature extraction ability of pests with changeable shapes. Second, by obtaining the balance features of multiple feature maps, the self-attention mechanism was introduced to refine the balance feature, in order to better restore the semantic information of some pests with similar appearances. Subsequently, the group normalization method was used to replace the batch normalization method in the original model, to reduce the impact of batch size on model training. The IP102 rice pest dataset was used to train and verify this model. The experimental results showed that the model can accurately detect nine kinds of rice pests, such as rice leaf rollers and rice leaf caterpillars. Compared with FasterRCNN, RetinaNet, CP-FCOS, VFNet and BiFA-YOLO, the mean average precision of the model improved by 33.7%, 6.5%, 4.5%, 2.9% and 2% respectively.
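The batch-size independence motivating the swap from batch to group normalization can be seen in a toy, pure-Python sketch (channel counts and values are illustrative): statistics are computed per sample within channel groups, so they do not depend on how many samples share a batch.

```python
import math

def group_norm(x, num_groups, eps=1e-5):
    # x: one sample as a list of C channels, each a flat list of spatial values;
    # normalize jointly within each group of C // num_groups channels
    c = len(x)
    assert c % num_groups == 0
    per = c // num_groups
    out = []
    for g in range(num_groups):
        group = x[g * per:(g + 1) * per]
        vals = [v for ch in group for v in ch]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        out += [[(v - mean) / math.sqrt(var + eps) for v in ch] for ch in group]
    return out

# 4 channels, 2 spatial values each; groups: channels {0,1} and {2,3}
x = [[1.0, 2.0], [3.0, 4.0], [10.0, 20.0], [30.0, 40.0]]
y = group_norm(x, num_groups=2)
print(round(sum(v for ch in y[:2] for v in ch), 6))  # ≈ 0: group has zero mean
```

Because the mean and variance never pool over the batch dimension, training behaves the same at batch size 1 or 64, which is the property exploited above.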
Fast proton conductors are important materials for catalysis and energy conversion applications. Glassy coordination polymers are an important class of proton conductors due to their good mechanical moldability; however, their conductivity has been limited to ca. 10 mS cm−1 at 100 °C. The systematic design of coordination polymers with fast proton conduction requires an atomistic simulation method that can describe long-range proton diffusion within an affordable computational time, yet existing atomistic simulation methodologies are individually limited and cannot adequately describe long-range proton conduction in non-crystalline materials. In this work, we develop a hybrid approach that combines molecular dynamics based on a conventional force field with the kinetic Monte Carlo method, enabling large-scale (thousands of atoms) and long-time (a few nanoseconds) simulation of long-range ionic diffusion in non-crystalline materials. Based on the developed approach, we propose and confirm a design concept for a fast proton-conducting coordination polymer based on Zn2+ ions and phosphoric acid.
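The kinetic Monte Carlo half of such a hybrid can be sketched on a toy 1D hopping lattice; the attempt frequency, barrier, and temperature below are illustrative assumptions, and in the real method the hop catalogue would come from the force-field MD geometry rather than a fixed chain.

```python
import math
import random

def kmc_proton(n_steps, barrier_ev=0.2, temp_k=373.0, nu_hz=1e13):
    # Arrhenius hop rate per direction from an assumed barrier and attempt frequency
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    rate = nu_hz * math.exp(-barrier_ev / (k_b * temp_k))
    pos, t = 0, 0.0
    for _ in range(n_steps):
        # exponential residence time for total escape rate (left + right hop)
        t += -math.log(1.0 - random.random()) / (2 * rate)
        pos += 1 if random.random() < 0.5 else -1  # unbiased hop direction
    return pos, t

random.seed(1)
pos, t = kmc_proton(10_000)
print(pos, t)  # net displacement (sites) and elapsed physical time (s)
```

Because each kMC step advances physical time by a whole residence interval, thousands of hops cover time scales far beyond what direct MD of the proton transfer could reach — the key to simulating long-range diffusion.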
Materials of engineering and construction. Mechanics of materials
To investigate the stress-strain relationship of fiber reinforced phosphogypsum (PG) under uniaxial compression, a total of twenty-seven PG prism specimens were fabricated and tested. The influences of the admixture content, fiber content, and water-solid ratio on the stress-strain curve of the specimens were investigated. Three failure modes were identified from the experimental observations, termed "compaction failure," "tension failure," and "mixed stress failure," respectively. In-depth analysis of the test data showed that decreasing the water-solid ratio increases the peak stress and secant modulus of the specimens, and increasing the fiber content improves the mechanical properties of the PG mixture specimens. However, adjusting the cement and quicklime contents has no significant effect on the mechanical properties of the specimens. In addition, according to the test data and the characteristics of the stress-strain curves, the stress-strain curve of PG specimens was divided into four parts, and a mathematical model was developed to predict it. Validation showed that the curves calculated by the proposed model agreed well with the test data of this study and of previous studies.
The paper presents a new, efficient, and robust method for rare-event probability estimation for computational models of an engineering product or process that return categorical information only, for example, either success or failure. For such models, most methods designed for the estimation of failure probability, which use the numerical value of the outcome to compute gradients or to estimate proximity to the failure surface, cannot be applied. Even if the performance function provides more than just binary output, the state of the system may be a non-smooth or even discontinuous function defined over the domain of continuous input variables. In these cases, classical gradient-based methods usually fail. We propose a simple yet efficient algorithm that performs a sequential adaptive selection of points from the input domain of random variables to extend and refine a simple distance-based surrogate model. Two different tasks can be accomplished at any stage of sequential sampling: (i) estimation of the failure probability, and (ii) selection of the best possible candidate for the subsequent model evaluation if further improvement is necessary. The proposed criterion for selecting the next point for model evaluation maximizes the expected probability mass classified by using the candidate, so a balance between global exploration and local exploitation is maintained automatically. The method can estimate the probabilities of multiple failure types. Moreover, when the numerical value of a model evaluation can be used to build a smooth surrogate, the algorithm can accommodate this information to increase the accuracy of the estimated probabilities. Lastly, we define a new, simple yet general geometrical measure of the global sensitivity of the rare-event probability to individual variables, which is obtained as a by-product of the proposed algorithm.
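A minimal sketch of the sequential, distance-based idea (not the paper's algorithm): label a few points with the binary model, repeatedly evaluate the pool point that is most ambiguous under a nearest-neighbour surrogate, then estimate the failure probability by classifying a Monte Carlo pool without further model calls. The limit state, pool size, and selection criterion are illustrative assumptions.

```python
import math
import random

def g(x):                        # "black-box" model: categorical outcome only
    return x[0] + x[1] > 2.4     # True = failure (illustrative limit state)

def nearest(x, labeled_pts):
    return min(labeled_pts, key=lambda p: math.dist(x, p[0]))

random.seed(0)
pool = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
# tiny initial design containing one safe and one failed point
labeled = [((0.0, 0.0), g((0.0, 0.0))), ((3.0, 3.0), g((3.0, 3.0)))]

for _ in range(40):              # sequential refinement of the surrogate
    fails = [p for p in labeled if p[1]]
    safes = [p for p in labeled if not p[1]]
    # evaluate next the pool point most ambiguous under the surrogate, i.e.
    # nearly equidistant from its nearest failed and nearest safe points
    x_new = min(pool, key=lambda x: abs(math.dist(x, nearest(x, fails)[0]) -
                                        math.dist(x, nearest(x, safes)[0])))
    labeled.append((x_new, g(x_new)))

# nearest-neighbour surrogate classifies the whole pool at no extra model cost
pf_hat = sum(nearest(x, labeled)[1] for x in pool) / len(pool)
print(round(pf_hat, 3))          # exact P(X1 + X2 > 2.4) is about 0.045
```

The ambiguity criterion naturally concentrates evaluations near the (unknown) failure surface, which is a simplified analogue of the expected-classified-probability criterion described above.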
This paper develops energy management (EM) control for series hybrid electric vehicles (HEVs) that include an engine start-stop system (SSS). The objective of the control is to optimally split the energy between the sources of the powertrain and minimize fuel consumption. In contrast to existing works, a fuel penalty is used to characterize SSS engine restarts more realistically, enabling more realistic design and testing of control algorithms. The paper first derives two important analytic results: a) analytic EM optimal solutions for fundamental and commonly used series HEV frameworks, and b) a proof of optimality of charge-sustaining operation in series HEVs. It then proposes a novel heuristic control strategy, the hysteresis power threshold strategy (HPTS), by amalgamating simple and effective control rules extracted from the suite of derived analytic EM optimal solutions. The decision parameters of the strategy are few in number and freely tunable, and the overall control performance can be fully optimized for different HEV parameters and driving cycles by a systematic tuning process, while also targeting charge-sustaining operation. The performance of HPTS is evaluated and benchmarked against existing methodologies, including dynamic programming (DP) and a recently proposed state-of-the-art heuristic strategy. The results show the effectiveness and robustness of HPTS and indicate its potential as a benchmark strategy for high-fidelity HEV models, where DP is no longer applicable due to computational complexity.
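The hysteresis mechanism of an HPTS-style rule can be sketched as follows; the thresholds, restart fuel penalty, fuel-rate constant, and power trace are all assumed values for illustration, not the paper's tuned parameters.

```python
# Engine-generator turns on above P_ON and off below P_OFF; the gap between the
# two thresholds (the hysteresis) limits costly start-stop cycling.
P_ON, P_OFF = 30.0, 15.0   # kW switching thresholds (assumed)
RESTART_PENALTY = 0.01     # kg fuel charged per engine restart (assumed)
FUEL_PER_KJ = 0.00007      # kg fuel per kJ at the operating point (assumed)

def hpts(power_demand_kw, dt_s=1.0):
    engine_on, fuel, starts = False, 0.0, 0
    for p in power_demand_kw:
        if not engine_on and p > P_ON:
            engine_on, starts = True, starts + 1
            fuel += RESTART_PENALTY          # restart modeled as a fuel penalty
        elif engine_on and p < P_OFF:
            engine_on = False
        if engine_on:
            fuel += FUEL_PER_KJ * p * dt_s   # battery covers the residual split
    return fuel, starts

# demand repeatedly crossing 20 kW: a single 20 kW threshold would restart the
# engine on every crossing, while the hysteresis band keeps it to two starts
trace = [10, 25, 35, 20, 18, 25, 35, 10, 5, 40, 20, 12]
fuel, starts = hpts(trace)
print(starts)  # → 2
```

Widening the band trades fewer restart penalties against longer low-power engine operation, which is exactly the tuning knob the systematic tuning process would optimize per driving cycle.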
This paper presents preliminary work in identifying the foundations of the discipline of Software Engineering and discovering the links between the domains of Software Engineering and Information Technology (IT). Our research drew on IEEE Transactions on Software Engineering (IEEE-TSE), ACM Transactions on Software Engineering and Methodology (ACM-TOSEM), Automated Software Engineering (ASE), the International Conference on Software Engineering (ICSE), and other related journal publications in the software engineering domain to address our research questions. We explored existing frameworks and described the need for software engineering as an academic discipline. We further clarified the distinction between Software Engineering and Computer Science. Through these efforts we contribute to an understanding of how evidence from IT research can be used to improve Software Engineering as a discipline.