Engineering workflows such as design optimization, simulation-based diagnosis, control tuning, and model-based systems engineering (MBSE) are iterative, constraint-driven, and shaped by prior decisions. Yet many AI methods still treat these activities as isolated tasks rather than as parts of a broader workflow. This paper presents Agentic Engineering Intelligence (AEI), an industrial vision framework that models engineering workflows as constrained, history-aware sequential decision processes in which AI agents support engineer-supervised interventions over engineering toolchains. AEI links an offline phase for engineering data processing and workflow-memory construction with an online phase for workflow-state estimation, retrieval, and decision support. A control-theoretic interpretation is also possible, in which engineering objectives act as reference signals, agents act as workflow controllers, and toolchains provide feedback for intervention selection. Representative automotive use cases in suspension design, reinforcement learning tuning, multimodal engineering knowledge reuse, aerodynamic exploration, and MBSE show how diverse workflows can be expressed within a common formulation. Overall, the paper positions engineering AI as a problem of process-level intelligence and outlines a practical roadmap for future empirical validation in industrial settings.
GenAI has the potential to enhance learning and teaching processes in engineering education. For instance, GenAI feedback on students' task performance can be effective depending on when such feedback is provided. However, little is known about how engineering faculty and instructors discover this potential within the scope of their own instruction when they try out the technology for the first time. To this end, this study aimed to describe an engineering instructor's and seven teaching assistants' initial experiences of integrating GenAI into their undergraduate engineering course and the corresponding changes in students' formative exercise performance. An embedded descriptive single case study design was employed. The research data comprised four interviews conducted at the beginning, middle, and end of an academic semester, together with students' formative exercise performance. Overall, after GenAI integration, students' formative exercise performance increased, and a critical, reflective practice of learning how to integrate GenAI into instruction yielded informative insights. Still, technology integration remained at the level of replacing other instructional methods or increasing the efficiency of solving coding problems. Students found it exciting and surprising to be able to use GenAI in course work, even though their use of the technology weakened over time. Our findings suggest that engineering teaching staff's initial experimental experiences with GenAI integration can be informative and provide context-specific practical insights. It is therefore reasonable for higher education institutions to encourage such experiences, especially while much remains unknown about an emerging technology.
Renormalization group methods generate low-resolution Hamiltonians that are more diagonal and easier to solve. This chapter reviews the similarity renormalization group for nuclear Hamiltonians, which is a popular method for generating low-resolution nuclear forces. It presents the similarity renormalization group flow equations, analyzes how the similarity renormalization group drives the Hamiltonian towards the diagonal, and studies the effect of induced many-body interactions. It concludes by highlighting the progress in first-principles calculations of nuclei driven by low-resolution nuclear Hamiltonians.
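The flow equations this chapter reviews take the standard commutator form, sketched here with a common generator choice (the notation below follows the usual SRG conventions and is an illustration, not a reproduction of the chapter's derivation):

```latex
\frac{dH_s}{ds} = \left[\eta_s, H_s\right], \qquad \eta_s = \left[G_s, H_s\right],
```

where $H_s$ is the evolved Hamiltonian at flow parameter $s$ and $G_s$ defines the generator $\eta_s$; taking $G_s$ to be the relative kinetic energy $T_{\mathrm{rel}}$ is a common choice that drives $H_s$ toward band-diagonal form in momentum space, decoupling low- and high-momentum physics.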
The increasing use of Large Language Models (LLMs) offers significant opportunities across the engineering lifecycle, including requirements engineering, software development, process optimization, and decision support. Despite this potential, organizations face substantial challenges in assessing the risks associated with LLM use, resulting in inconsistent integration, unknown failure modes, and limited scalability. This paper introduces the LLM Risk Assessment Framework (LRF), a structured approach for evaluating the application of LLMs within Systems Engineering (SE) environments. The framework classifies LLM-based applications along two fundamental dimensions: autonomy, ranging from supportive assistance to fully automated decision making, and impact, reflecting the potential severity of incorrect or misleading model outputs on engineering processes and system elements. By combining these dimensions, the LRF enables consistent determination of corresponding risk levels across the development lifecycle. The resulting classification supports organizations in identifying appropriate validation strategies, levels of human oversight, and required countermeasures to ensure safe and transparent deployment. The framework thereby helps align the rapid evolution of AI technologies with established engineering principles of reliability, traceability, and controlled process integration. Overall, the LRF provides a basis for risk-aware adoption of LLMs in complex engineering environments and represents a first step toward standardized AI assurance practices in systems engineering.
Context: Autism spectrum disorder (ASD) leads to various issues in the everyday life of autistic individuals, often resulting in unemployment and mental health problems. To improve the inclusion of autistic adults, existing studies have highlighted the strengths these individuals possess in comparison to non-autistic individuals, e.g., high attention to detail or excellent logical reasoning skills. If fostered, these strengths could be valuable in software engineering activities, such as identifying specific kinds of bugs in code. However, existing work in SE has primarily studied the challenges of autistic individuals and possible accommodations, with little attention to their strengths. Objective: Our goal is to analyse the experiences of autistic individuals in software engineering activities, such as code reviews, with a particular emphasis on strengths. Methods: This study applies Socio-Technical Grounded Theory, combining semi-structured interviews with 16 autistic software engineers and a survey with 49 respondents, including 5 autistic participants. We compare the emerging themes with the theory by Gama et al. on the Effect of Neurodivergent Cognitive Dysfunctions in Software Engineering Performance. Results: Our results suggest that autistic software engineers are often skilled in logical thinking, attention to detail, and hyperfocus in programming; and they enjoy learning new programming languages and programming-related technologies. Confirming previous work, they tend to prefer written communication and remote work. Finally, we report a high comfort level in interacting with AI-based systems. Conclusions: Our findings extend existing work by providing further evidence on the strengths of autistic software engineers.
A driving force for realizing a sustainable energy supply is the integration of renewable energy resources. Due to their stochastic generation behaviour, energy utilities are confronted with a more complex operation of the underlying power grids. Additionally, technology developments, controllable loads, integration with other energy sources, changing regulatory rules, and market liberalization require the operation of these systems to adapt. Proper operational concepts and intelligent automation provide the basis for turning the existing power system into an intelligent entity, a cyber-physical energy system. The electric energy system is therefore moving from a single system to a system of systems. While new intelligent behaviours bring benefits, it is expected that system-level developments, architectural concepts, advanced automation and control, as well as validation and testing, will play a significantly larger role in realizing future solutions and technologies. The implementation and deployment of these complex systems of systems entail increasing engineering complexity and, in turn, increased engineering costs. Proper engineering and validation approaches, concepts, and tools are still partly missing. This paper therefore discusses and summarizes the main needs and requirements as well as the status quo in research and development related to the engineering and validation of cyber-physical energy systems. Research trends and necessary future activities are also outlined.
Queer students often encounter discrimination and a lack of belonging in their academic environments. This may be especially true in heteronormative male-dominated fields like software engineering, which already faces a diversity crisis. In contrast, disciplines like the humanities have a higher proportion of queer students, suggesting a more diverse academic culture. While prior research has explored queer students' challenges in STEM fields, limited attention has been given to how experiences differ between the sociotechnical, yet highly heteronormative, field of software engineering and the socioculturally inclusive humanities. This study addresses that gap by comparing the experiences of 165 queer software engineering students and 119 queer humanities students. Our findings reveal that queer students in software engineering are less likely to be open about their sexuality, report a significantly lower sense of belonging, and encounter more academic challenges compared to their peers in the humanities. Despite these challenges, queer software engineering students show greater determination to continue their studies. These insights suggest that software engineering could enhance inclusivity by adopting practices commonly seen in the humanities, such as integrating inclusive policies in classrooms, to create a more welcoming environment where queer students can thrive.
Foundation models (FMs), particularly large language models (LLMs), are increasingly used to support various software engineering activities (e.g., coding and testing). Their applications in the software engineering of cyber-physical systems (CPSs) are also growing. However, research in this area remains limited. Moreover, existing studies have primarily focused on LLMs, only one type of FM, leaving ample opportunities to explore others, such as vision-language models. We argue that, in addition to LLMs, other FMs utilizing different data modalities (e.g., images, audio) and multimodal models (which integrate multiple modalities) hold great potential for supporting CPS software engineering, given that these systems process diverse data types. To address this, we present a research roadmap for integrating FMs into various phases of CPS software engineering, highlighting key research opportunities and challenges for the software engineering community. We also discuss the common challenges associated with applying FMs in this context, including the correctness of FM-generated artifacts and the inherent uncertainty and hallucination associated with FMs. This roadmap is intended for researchers and practitioners in CPS software engineering, providing future research directions for using FMs in this domain.
This paper introduces novel nonparametric supervised learning techniques for classifying massive datasets, addressing key limitations of existing methods in the Big Data and streaming data setting. We propose an offline kernel-based classifier enhanced by batch Principal Component Analysis (PCA) for dimensionality reduction to mitigate the “curse of dimensionality”. Additionally, an online classifier is developed for streaming data, combining online PCA with a kernel-based recursive classifier built on a stochastic approximation algorithm. Application to fetal well-being monitoring demonstrates that the online classifier achieves a competitive median misclassification rate (11.92%), comparable to the offline classifier (11.54%) and Random Forest (11.31%), while requiring only 1/15th of the offline classifier’s computation time. Receiver Operating Characteristic (ROC) analysis shows a superior Area Under the Curve (AUC) for the offline classifier, but at a significant computational cost. A second study on a larger credit-scoring database confirms these findings, showing that the online classifier achieves an F1-score of 96.40% and an accuracy of 93.08%, closely matching the performance of neural networks (96.46%, 93.22%) and boosting (96.51%, 93.31%). Notably, the online classifier accomplishes this with a CPU time of only 0.87 seconds per classification, over 600 times faster than neural networks, demonstrating its effectiveness for high-frequency, real-time financial decision-making.
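The online pipeline described above, streaming dimensionality reduction followed by a kernel-based streaming classifier, can be sketched as follows. This is an illustration only, not the authors' algorithm: the class names, the Oja-rule PCA update, and the Gaussian-kernel vote are all assumptions made for the sketch.

```python
import numpy as np

class OnlinePCA:
    """Streaming PCA via Oja's rule with re-orthonormalization (illustrative)."""
    def __init__(self, dim, n_components, lr=0.05):
        rng = np.random.default_rng(0)
        self.W, _ = np.linalg.qr(rng.normal(size=(dim, n_components)))
        self.lr = lr

    def partial_fit(self, x):
        y = self.W.T @ x                                  # project onto current components
        self.W += self.lr * np.outer(x - self.W @ y, y)   # Oja's rule update
        self.W, _ = np.linalg.qr(self.W)                  # keep columns orthonormal
        return self.W.T @ x

class StreamingKernelClassifier:
    """Kernel-weighted vote over projections seen so far (illustrative)."""
    def __init__(self, bandwidth=1.5):
        self.h, self.X, self.y = bandwidth, [], []

    def partial_fit(self, z, label):
        self.X.append(z)
        self.y.append(label)

    def predict(self, z):
        X, y = np.asarray(self.X), np.asarray(self.y)
        w = np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * self.h ** 2))  # Gaussian kernel
        return int(w @ y / max(w.sum(), 1e-12) > 0.5)     # labels assumed in {0, 1}

# Toy streaming run: two classes in 10-D separated along axis 0, reduced to 2-D online.
rng = np.random.default_rng(1)
pca = OnlinePCA(dim=10, n_components=2)
clf = StreamingKernelClassifier()
for _ in range(400):
    label = int(rng.integers(2))
    x = rng.normal(size=10) + 4.0 * label * np.eye(10)[0]
    clf.partial_fit(pca.partial_fit(x), label)

test = [(rng.normal(size=10) + 4.0 * l * np.eye(10)[0], l) for l in (0, 1) for _ in range(20)]
acc = float(np.mean([clf.predict(pca.W.T @ x) == l for x, l in test]))
```

Each sample is touched once and then summarized by its low-dimensional projection, which is what makes this style of classifier attractive for the high-frequency streaming settings the paper targets.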
Omar Salmi, Fabio Pastori, Marco Marinsalta
et al.
Powertan is a disruptive new approach to leather tanning in which penetration into the hide to be tanned is enhanced by an externally applied electric field. Penetration is thus no longer controlled by the Fickian diffusion mechanism but by ion migration. The result is a dramatic decrease in process time, from almost 24 h in the traditional drum operation to a few minutes. Moreover, the electric field reduces the need for ancillary operations such as pickling and basification, cutting the bath/leather ratio from about 20 L/kg to roughly one tenth of that value. Here, the first small-scale batch tests are presented together with a preliminary modeling interpretation.
Chemical engineering, Computer engineering. Computer hardware
Daniela Popescul, Lily Murariu, Laura-Diana Radu
et al.
Utilizing readily accessible information and communication technologies (ICTs), such as mobile devices, applications, and simple Internet of Things (IoT) sensors, and harnessing their potential through Experimentation as a Service (EaaS), crowdsensing, and gamification, represents one of the most effective approaches to implementing co-creation in smart cities. The benefits of this bottom-up approach are closely related to accurately identifying the real needs of city residents and increasing the chances of designing and implementing solutions with genuine impact, ensuring equity, social inclusion, sustainability, and community resilience. This paper investigates the utilization of ICTs to support social sustainability by analyzing 157 smart city projects funded under the Horizon 2020 program at the European Union level and 5 smart city projects from Canada. The results reveal the utilization of technological solutions such as testbeds, living labs, EaaS, crowdsensing, open data, and more for co-creation in smart city projects. In the discussion, we point out the importance of focusing on technologies familiar to the beneficiaries and of leveraging resources already available as wearable devices or in citizens' homes; the versatility of the technological solutions analyzed; and the role of heterogeneous and open data and of cross-disciplinary teams in creating new perspectives on urban problems and reducing inequity in the development of solutions to solve them. The concerns raised and problems reported relate to the technology itself (errors in operation), users (difficulties in stimulating their involvement and keeping it constant), and data (quality of the data collected, difficulty of processing, and ethics and security of data collection and use). Based on our results, we extract, synthesize, and present six distinct categories of lessons learned by the implementation teams of the analyzed projects.
Vinicius Soares Silva Marques, Laurence Rodrigues do Amaral
Documentation is one of the most neglected activities in Software Engineering, although it is an important method of assuring quality and understanding. Bioinformatics software is generally written by researchers from fields other than Computer Science who usually do not provide documentation. Documenting bioinformatics software may ease its adoption in multidisciplinary teams and expand its impact on the community. In this paper, we highlight how one can document software that is already finished, using reverse engineering and thinking of the end-user.
In the ever-evolving realm of cybersecurity, the rise of generative AI models like ChatGPT, FraudGPT, and WormGPT has introduced both innovative solutions and unprecedented challenges. This research delves into the multifaceted applications of generative AI in social engineering attacks, offering insights into the evolving threat landscape using the blog mining technique. Generative AI models have revolutionized the field of cyberattacks, empowering malicious actors to craft convincing and personalized phishing lures, manipulate public opinion through deepfakes, and exploit human cognitive biases. These models, ChatGPT, FraudGPT, and WormGPT, have augmented existing threats and ushered in new dimensions of risk. From phishing campaigns that mimic trusted organizations to deepfake technology impersonating authoritative figures, we explore how generative AI amplifies the arsenal of cybercriminals. Furthermore, we shed light on the vulnerabilities that AI-driven social engineering exploits, including psychological manipulation, targeted phishing, and the crisis of authenticity. To counter these threats, we outline a range of strategies, including traditional security measures, AI-powered security solutions, and collaborative approaches in cybersecurity. We emphasize the importance of staying vigilant, fostering awareness, and strengthening regulations in the battle against AI-enhanced social engineering attacks. In an environment characterized by the rapid evolution of AI models and a lack of training data, defending against generative AI threats requires constant adaptation and the collective efforts of individuals, organizations, and governments. This research seeks to provide a comprehensive understanding of the dynamic interplay between generative AI and social engineering attacks, equipping stakeholders with the knowledge to navigate this intricate cybersecurity landscape.
Cosmina-Cristina Ratiu, Christoph Mayr-Dorn, Alexander Egyed
Engineering processes for safety-critical systems describe the steps and their sequence that guide engineers from refining user requirements into executable code, as well as producing the artifacts, traces, and evidence that the resulting system is of high quality. Process compliance focuses on ensuring that the actual engineering work follows the described engineering processes as closely as possible. To this end, temporal constraints describe the ideal sequence of steps. Checking these process constraints, however, is still a daunting task that requires much manual work and delivers feedback to engineers only late in the process. In this paper, we present an automated constraint checking approach that can incrementally check temporal constraints across inter-related engineering artifacts upon every artifact change, thereby enabling timely feedback to engineers on process deviations. Temporal constraints are expressed in the Object Constraint Language (OCL) extended with operators from Linear Temporal Logic (LTL). We demonstrate the ability of our approach to support a wide range of higher-level temporal patterns. We further show that, for constraints in an industry-derived use case, the average evaluation time for a single constraint is around 0.2 milliseconds.
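As a minimal illustration of per-change incremental checking (this is not the paper's OCL/LTL engine; the event names and the single "precedence" pattern are assumptions made for the sketch), consider a checker that flags a deviation the moment an artifact is merged without a prior review approval:

```python
from dataclasses import dataclass, field

@dataclass
class PrecedenceChecker:
    """Incremental check of an LTL precedence pattern: an event of type
    `target` is only compliant for an artifact after an event of type
    `enabler` has occurred for that same artifact."""
    enabler: str
    target: str
    _enabled: set = field(default_factory=set)

    def on_change(self, artifact: str, event: str) -> bool:
        """Evaluate the constraint on a single artifact change.
        Returns True if the change complies, False if it deviates."""
        if event == self.enabler:
            self._enabled.add(artifact)   # remember that the target is now allowed
            return True
        if event == self.target:
            return artifact in self._enabled
        return True                       # unrelated events never violate this pattern

checker = PrecedenceChecker(enabler="review_approved", target="code_merged")
trace = [("req-1", "review_approved"),
         ("req-1", "code_merged"),
         ("req-2", "code_merged")]        # req-2 merged without a prior review
verdicts = [checker.on_change(a, e) for a, e in trace]  # → [True, True, False]
```

Evaluating the constraint on every incoming change, rather than in a periodic batch audit, is what enables the timely feedback on deviations that the abstract describes.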
Ultra-nanocrystalline diamond (UNCD) films were prepared by microwave plasma chemical vapour deposition (MPCVD) under different temperature conditions by adjusting the microwave power. The effects of the activation power of the reaction source and of the substrate temperature on the growth and composition of the UNCD films were compared and analysed in order to identify a technique for rapidly growing high-quality UNCD films. SEM, XRD and Raman methods were used to characterise the morphological structure, phase composition and growth rate of the UNCD films, while optical emission spectroscopy (OES) was used to monitor the state of the growth species during deposition. The results showed that the deposition temperature of the UNCD films ranged from 450 to 650 °C; that the peak intensity of the CN and C2 species in the OES spectra increased with increasing power and substrate temperature; that the growth rate increased from 0.82 μm/h to 6.62 μm/h; and that the grain size in the films increased. The average grain size remained below 10.00 nm, and the surface was flatter and smoother, forming a surface profile more favourable to the mechanical properties. Therefore, the use of diisopropylamine liquid small molecules as the reaction source, together with the application of higher microwave power and deposition at higher substrate temperatures, is an effective way to rapidly grow high-quality UNCD films.
Materials of engineering and construction. Mechanics of materials, Mechanical engineering and machinery
Emad Shihab, Stefan Wagner, Marco A. Gerosa
et al.
We are witnessing a massive adoption of software engineering bots, applications that react to events triggered by tools and messages posted by users and run automated tasks in response, across a variety of domains. This thematic issue describes experiences and challenges with these bots.
The COVID-19 pandemic has posed a challenge for higher education in terms of providing quality education despite the lockdown periods, the transformation of in-person classes into virtual classes, and the demotivation and anxiety experienced by students. Because the basis of engineering is experimentation through hands-on activities and learning by doing, the lockdown periods and the temporary suspension of in-person classes and laboratories have posed a problem for educators trying to teach and motivate students despite the situation. In this context, this study presents an educational methodology based on Problem-Based Learning (PBL) and in-home laboratories in engineering. The methodology was carried out in two phases during 2020, in the academic programs of Industrial Engineering and Technology in Electronics, with (n=44) students. The in-home laboratories were sent to the students as "kits" containing the devices needed in each subject. In addition, given the difficulties in monitoring the learning process, the students made videos and blogs as a strategy to reinforce their learning and evidence their progress in the courses. The outcomes of the methodology show mainly the following points: (1) an improvement in the academic performance and learning of the students in the courses; (2) a positive influence of the use of in-home laboratories on motivation, self-efficacy, and the reduction of anxiety; (3) positive correlations between the use of in-home laboratories, the blogs and videos, and the teacher's feedback for learning, motivation, and self-efficacy. These results thus show that alternatives combining the cognitive and affective learning domains can emerge from engineering to deal with the educational problems produced by crisis periods.