Empathy in Software Engineering Education: Evidence, Practices, and Opportunities
Matheus de Morais Leca, Kim Johnston, Ronnie de Souza Santos
\textbf{Context:} Empathy is increasingly recognized as a critical human capability for software engineers, supporting collaboration, ethical awareness, and user-centered design. While many disciplines have long explored empathy as part of professional formation, its incorporation into software engineering education remains fragmented. \textbf{Aim:} This study investigates how empathy has been used, taught, and discussed in general engineering and software engineering education, with the goal of identifying pedagogical practices, outcomes, and disciplinary differences that inform the structured integration of empathy into software curricula. \textbf{Method:} Following established guidelines for systematic reviews in software engineering, we conducted a comprehensive search across six databases and analyzed 43 primary studies published between 2001 and 2025. Data were coded and synthesized using descriptive and thematic analysis to capture how empathy is conceptualized, fostered, and assessed across educational contexts. \textbf{Findings:} Our findings show that engineering programs frame empathy as an ethical and reflective capacity linked to social responsibility, whereas software engineering translates empathy into structured, design-oriented, and measurable practices. Across both domains, empathy teaching enhances collaboration, ethical reasoning, bias awareness, and motivation, but remains limited by low curricular prioritization, measurement challenges, and resource constraints. \textbf{Conclusion:} Empathy is evolving from a peripheral soft skill into a measurable pedagogical construct in software engineering education. Embedding empathy as a continuous, assessable component of design and development courses can strengthen inclusivity, ethical reflection, and responsible innovation in future software professionals.
Critical node identification and resilience analysis against cascading failures
Anqi Liu, Wenfu Zhao
Ensuring the robustness and resilience of critical infrastructure networks such as transportation and energy systems is a core security challenge for modern societies. Vulnerabilities in these networks often concentrate in a small number of critical nodes, whose failure can trigger catastrophic cascading failures. Accurately identifying critical nodes and formulating effective reinforcement strategies are therefore crucial for enhancing the overall defense capability of the system. Existing graph neural network (GNN)-based methods often rely on topological centrality metrics, neglecting the distribution of node information and the impacts of cascading failures. To bridge this gap, this study constructs a comprehensive analytical framework (TEC-GNN, Topology-Entropy-Cascading Graph Neural Network) integrating graph neural networks, feature engineering, and resilience assessment. It addresses two core questions: which graph neural network model is most suitable for critical node identification, and how network resilience can be enhanced by regulating redundant resource allocation. Systematic evaluation indicates that the GraphSAGE model delivers the best overall performance in critical node identification. Its results exhibit high consistency with supervised signals (Spearman's correlation coefficient of 0.822), achieving a Normalized Discounted Cumulative Gain at Top-K (NDCG@K) of 0.918, an F1 Score at Top-K (F1@K) of 0.879, and a Top-K accuracy of 0.879. Its inference efficiency (0.002 s) is comparable to GCN and significantly outperforms GAT, meeting the demands of real-time analysis for large-scale networks. After feature dimensionality reduction via principal component analysis (PCA), the model's discriminative power further improved, with effect size (Cohen's d) increasing by approximately 4% without loss of efficiency, validating the effectiveness of principled dimensionality reduction.
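The ranking metrics quoted above (NDCG@K, F1@K, Top-K accuracy) score how well a model's predicted criticality ordering matches the supervised labels. A minimal, stdlib-only sketch of NDCG@K, assuming binary relevance labels already listed in the model's ranked order (the function name and toy data are illustrative, not from the paper):

```python
from math import log2

def ndcg_at_k(relevance_in_rank_order, k):
    """NDCG@K: relevance values listed in the model's ranked order.
    DCG discounts each hit by log2(rank + 1); IDCG is the DCG of a
    perfect ranking, so the result is normalized to [0, 1]."""
    rel = list(relevance_in_rank_order)
    dcg = sum(r / log2(i + 2) for i, r in enumerate(rel[:k]))
    ideal = sorted(rel, reverse=True)
    idcg = sum(r / log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: model ranks 6 nodes; 1 = truly critical, 0 = not.
score = ndcg_at_k([1, 1, 0, 1, 0, 0], k=4)
```

A perfect ordering of the same labels would score exactly 1.0; the misplaced third hit above costs a few percent.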
The model's accuracy was robustly validated through attack experiments: selectively removing the top 10% critical nodes identified by GraphSAGE reduced the network's largest connected component ratio (LCC_Ratio) to approximately 0.4, severely impairing network functionality. When the removal rate reached 20% (equivalent to 60% removal in random attacks), the network became nearly paralyzed. Another core finding reveals the complex nonlinear regulatory mechanism of redundancy coefficient β on network resilience. The resilience metric R exhibits clear diminishing marginal returns with increasing β: R rises rapidly as β increases from 0 to 0.5, then slows significantly with fluctuations thereafter. Based on this, the study proposes a "precision reinforcement" strategy: enhancing redundancy allocation only for critical nodes identified by GraphSAGE enables low-cost resilience improvement (e.g., R increases from 0.874 to 0.883). This strategy provides an efficient path for system fortification under resource constraints. The research framework proposed in this paper provides interpretable and scalable theoretical and methodological support for vulnerability assessment and resilience enhancement in critical infrastructure. The validated GraphSAGE model and "targeted reinforcement" strategy are particularly suitable for risk prevention and resource optimization in major infrastructure systems requiring dynamic analysis and rapid response, such as transportation and power grids.
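The targeted-attack validation described above can be sketched in plain Python: remove the top-ranked nodes and measure the largest-connected-component (LCC) ratio. Node degree is used below as a stand-in for the learned GraphSAGE criticality scores, and the toy preferential-attachment graph is illustrative only:

```python
import random
from collections import defaultdict, deque

def lcc_ratio(adj, removed):
    """Largest-connected-component ratio after deleting `removed` nodes,
    relative to the original node count (BFS over surviving nodes)."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        comp, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best / len(adj)

# Toy scale-free-ish graph: each new node attaches to 2 earlier nodes,
# with repeated endpoints giving a preferential-attachment bias.
random.seed(7)
adj = defaultdict(set)
adj[0].add(1); adj[1].add(0)
targets = [0, 1]
for new in range(2, 200):
    for t in set(random.choices(targets, k=2)):
        adj[new].add(t); adj[t].add(new)
        targets += [new, t]

# Degree stands in for learned criticality scores (GraphSAGE in the paper).
k = len(adj) // 10
top = set(sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:k])
targeted = lcc_ratio(adj, top)        # attack the top 10% "critical" nodes
```

On real infrastructure graphs this targeted removal typically fragments the network far faster than random removal, which is the effect the attack experiments above quantify.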
A Comparison of Human Capabilities and Large Language Models for Knowledge Representation with Ontologies of Non-Destructive Testing in Bridge Engineering
Jan-Iwo Jäkel, Eva Heinlein, Joy Sengupta
et al.
Bridge structures are considered complex and significant. Accordingly, the knowledge of the engineering domain of bridge construction and related specialist areas is multidimensional and highly specific. Sometimes this knowledge is explicitly documented in standards, technical regulations, or information sheets. At other times, it resides implicitly in the expertise of the specialists involved. Ontologies are used to structure and formalize such domain knowledge, but creating them is resource-intensive and requires specialized expertise. Large language models (LLMs) offer one way to automate ontology creation through their natural language processing capabilities. This article examines LLMs’ ability to generate ontologies in the specialized field of structural non-destructive testing (NDT) in bridge construction. Four different LLM-based approaches are employed. The results are compared with a previously created human-generated ontology and subsequently evaluated by external experts. Experts rate the human-developed SODIA ontology highest, with an average score of 3.44 out of 5.00. Only the ChatGPT 4.0-created ontology performed similarly well, with a score of 3.30 out of 5.00. All other LLM-based ontologies, rated below 3.00, are of lower quality. These results underscore the potential and constraints of using LLMs to structure and formalize engineering domain knowledge into ontologies.
TOM-SWE: User Mental Modeling For Software Engineering Agents
Xuhui Zhou, Valerie Chen, Zora Zhiruo Wang
et al.
Recent advances in coding agents have made them capable of planning, editing, running, and testing complex code bases. Despite their growing ability in coding tasks, these systems still struggle to infer and track user intent, especially when instructions are underspecified or context-dependent. To bridge this gap, we introduce ToM-SWE, a dual-agent architecture that pairs a primary software-engineering (SWE) agent with a lightweight theory-of-mind (ToM) partner agent dedicated to modeling the user's mental state. The ToM agent infers user goals, constraints, and preferences from instructions and interaction history, maintains a \textbf{persistent memory} of the user, and provides user-related suggestions to the SWE agent. In two software engineering benchmarks (ambiguous SWE-bench and stateful SWE-bench), ToM-SWE improves task success rates and user satisfaction. Notably, on the stateful SWE benchmark, a newly introduced evaluation that provides agents with a user simulator along with previous interaction histories, ToM-SWE achieves a substantially higher task success rate of 59.7\% compared to 18.1\% for OpenHands, a state-of-the-art SWE agent. Furthermore, in a three-week study with professional developers using ToM-SWE in their daily work, participants found it useful 86\% of the time, underscoring the value of stateful user modeling for practical coding agents.
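The "persistent memory" of the ToM agent described above can be pictured as a small, serializable record of inferred goals, constraints, and preferences that survives across sessions. The sketch below is a hypothetical data structure, not ToM-SWE's actual implementation; in the paper the inference step is LLM-driven rather than the simple merge shown here:

```python
import json
import os
import tempfile
from dataclasses import dataclass, field

@dataclass
class UserMentalState:
    """Hypothetical ToM-agent memory: inferred goals, constraints, preferences."""
    goals: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    preferences: dict = field(default_factory=dict)

    def update(self, observation: dict):
        # Merge newly inferred items, skipping duplicates; in ToM-SWE this
        # inference comes from an LLM reading instructions and history.
        self.goals += [g for g in observation.get("goals", []) if g not in self.goals]
        self.constraints += [c for c in observation.get("constraints", [])
                             if c not in self.constraints]
        self.preferences.update(observation.get("preferences", {}))

    def save(self, path):
        # Persistence across sessions -- the "persistent memory" of the user.
        with open(path, "w") as f:
            json.dump(self.__dict__, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(**json.load(f))

state = UserMentalState()
state.update({"goals": ["fix failing CI"], "preferences": {"style": "black"}})
path = os.path.join(tempfile.gettempdir(), "tom_swe_user_state.json")
state.save(path)
restored = UserMentalState.load(path)
```

The SWE agent would consult such a record before acting on an underspecified instruction, e.g. defaulting to the user's known formatting preference.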
A Conceptual Framework for Requirements Engineering of Pretrained-Model-Enabled Systems
Dongming Jin, Zhi Jin, Linyu Li
et al.
Recent advances in large pretrained models have led to their widespread integration as core components in modern software systems. The trend is expected to continue in the foreseeable future. Unlike traditional software systems governed by deterministic logic, systems powered by pretrained models exhibit distinctive and emergent characteristics, such as ambiguous capability boundaries, context-dependent behavior, and continuous evolution. These properties fundamentally challenge long-standing assumptions in requirements engineering, including functional decomposability and behavioral predictability. This paper investigates this problem and advocates for a rethinking of existing requirements engineering methodologies. We propose a conceptual framework tailored to requirements engineering of pretrained-model-enabled software systems and outline several promising research directions within this framework. This vision helps provide a guide for researchers and practitioners to tackle the emerging challenges in requirements engineering of pretrained-model-enabled systems.
Machine learning prediction and explainability analysis of high strength glass powder concrete using SHAP, PDP, and ICE
Muhammad Sarmad Mahmood, Tariq Ali, Inamullah Inam
et al.
Achieving high-strength concrete (HSC) with sustainable supplementary cementitious materials (SCMs) remains a significant challenge in the construction industry. Although glass powder has shown promise as a partial cement substitute, its specific impact on the strength development of HSC is still unclear. This study aims to evaluate the compressive strength (CS) of high strength glass-powder concrete (HSGPC) using machine learning (ML) models and enhance predictive accuracy through hybrid optimization techniques. A dataset comprising 598 points was compiled, considering cement, glass powder, aggregates, water, superplasticizer, and curing days as key input parameters. Three standalone ML models—K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGB)—were trained, with RF achieving R² = 0.963 and XGB achieving R² = 0.946 on the test set. To further enhance performance, XGB was optimized using Particle Swarm Optimization (PSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO). Among these, XGB-GWO demonstrated the highest accuracy, with R² improving to 0.991 and MSE decreasing significantly from 83.95 to 14.42, an 82.82% error reduction. SHAP, PDP, and ICE analyses identified superplasticizer dosage, curing days, and coarse aggregate as the most influential parameters affecting CS. PDP and ICE validated these findings, showing reduced strength gains beyond 600 kg/m³ of cement and a decline beyond 800 kg/m³ of coarse aggregate. This study highlights the potential of ML-driven optimization for sustainable concrete design, offering an efficient, data-driven approach to optimizing material proportions for high-strength, eco-friendly concrete.
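Partial dependence (PDP) and individual conditional expectation (ICE) curves, as used above, are model-agnostic: sweep one feature over a grid while holding each sample's other features fixed, then average the predictions (PDP) or keep the per-sample curves (ICE). A minimal sketch with a hand-coded stand-in "strength model" — the saturation point and coefficients below are illustrative assumptions, not the paper's fitted XGB-GWO model:

```python
import random

def partial_dependence(model, X, feature_idx, grid):
    """PDP: average model output as one feature sweeps a grid.
    ICE: the same sweep kept per sample (one curve per row of X)."""
    ice = []
    for row in X:
        curve = []
        for v in grid:
            x = list(row)
            x[feature_idx] = v   # override only the swept feature
            curve.append(model(x))
        ice.append(curve)
    pdp = [sum(c[i] for c in ice) / len(ice) for i in range(len(grid))]
    return pdp, ice

# Stand-in "strength model": saturating effect of cement content (idx 0),
# linear effect of curing days (idx 1). Purely illustrative numbers.
model = lambda x: 40 + 30 * min(x[0], 600) / 600 + 0.05 * x[1]

random.seed(0)
X = [[random.uniform(300, 900), random.uniform(7, 90)] for _ in range(50)]
grid = [300, 450, 600, 750, 900]
pdp, ice = partial_dependence(model, X, 0, grid)
# The flat PDP tail beyond 600 kg/m^3 is the kind of diminishing-returns
# pattern such plots reveal.
```

ICE curves additionally expose interaction effects: if individual curves fan out while the PDP stays flat, the feature's effect depends on the other inputs.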
Finite Element Computations on Mobile Devices: Optimization and Numerical Efficiency
Maya Saade, Rafic Younes, Pascal Lafon
Smartphones have become increasingly powerful and widespread, enabling complex numerical computations that were once limited to desktop systems. However, implementing high-precision Finite Element Analysis (FEA) on mobile devices remains challenging due to constraints in memory, processing speed, and energy efficiency. This paper presents an optimized algorithmic framework for performing FEA on mobile platforms, focusing on the adaptation of meshing and iterative solver strategies to resource-limited environments. Several iterative solvers for large sparse linear systems are compared, and predefined refined meshing techniques are implemented to balance computational cost and accuracy. A two-dimensional bridge model is used to validate the proposed methods and demonstrate their numerical stability and computational efficiency on smartphones. The results confirm the feasibility of executing reliable FEA directly on mobile hardware, highlighting the potential of portable, low-cost devices as platforms for computational mechanics and algorithmic simulation in engineering and education.
Industrial engineering. Management engineering, Electronic computers. Computer science
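Iterative solvers suit mobile FEA because, unlike direct factorization, they need only a matrix-vector product and a few work vectors, not an assembled dense matrix. A matrix-free conjugate gradient sketch for the 1D Poisson stiffness matrix — a deliberately tiny stand-in for the paper's 2D bridge model, which is not reproduced here:

```python
def cg_tridiag(n, b, tol=1e-22, max_iter=1000):
    """Conjugate gradient for the 1D Poisson stiffness matrix
    A = tridiag(-1, 2, -1). Matrix-free: only the matvec is coded,
    which keeps memory use O(n) -- the property that matters on
    resource-limited mobile hardware. `tol` bounds the squared residual."""
    def matvec(x):
        return [2*x[i] - (x[i-1] if i > 0 else 0) - (x[i+1] if i < n-1 else 0)
                for i in range(n)]
    dot = lambda u, v: sum(ui*vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                      # r = b - A x with x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Uniform load on a clamped 1D "bar": -u'' = 1, u(0) = u(1) = 0,
# whose exact solution u(x) = x(1-x)/2 peaks at 0.125.
n = 49
h = 1.0 / (n + 1)
u = cg_tridiag(n, [h*h] * n)
```

For symmetric positive-definite stiffness matrices CG converges in at most n iterations in exact arithmetic; in practice preconditioning (e.g. Jacobi) is what makes it competitive on larger 2D meshes.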
Bridge Positions and Plat Presentations of Links
Seth Hovland
In this paper we investigate the relationship between links in bridge position and plat presentations. We will show that the Hilden double coset classes of plat presentations of a link are equivalent to bridge positions of the link up to bridge isotopy. This correspondence allows us to reframe algebraic questions about plat presentations in terms of bridge positions. We demonstrate some results about both plat presentations and links in bridge position using this correspondence. For instance, we reprove that there is only one Hilden double coset class of the n-bridge unknot in $\mathbb{S}^3$. We also show that there is only a single double coset class for torus knots in plat position. Finally, we discuss how this correspondence may be used to investigate plat closures of knots, which is the subject of ongoing research.
An Architecture for Software Engineering Gamification
Óscar Pedreira, Félix García, Mario Piattini
et al.
Gamification has been applied in software engineering to improve quality and results by increasing people's motivation and engagement. A systematic mapping has identified research gaps in the field, one of them being the difficulty of creating an integrated gamified environment comprising all the tools of an organization, since most existing gamified tools are custom developments or prototypes. In this paper, we propose a gamification software architecture that allows us to transform the work environment of a software organization into an integrated gamified environment, i.e., the organization can maintain its tools, and the rewards obtained by the users for their actions in different tools will mount up. We developed a gamification engine based on our proposal, and we carried out a case study in which we applied it in a real software development company. The case study shows that the gamification engine has allowed the company to create a gamified workplace by integrating custom developed tools and off-the-shelf tools such as Redmine, TestLink, or JUnit, with the gamification engine. Two main advantages can be highlighted: (i) our solution allows the organization to maintain its current tools, and (ii) the rewards for actions in any tool accumulate in a centralized gamified environment.
Hidden Populations in Software Engineering: Challenges, Lessons Learned, and Opportunities
Ronnie de Souza Santos, Kiev Gama
The growing emphasis on studying equity, diversity, and inclusion within software engineering has amplified the need to explore hidden populations within this field. Exploring hidden populations becomes important to obtain invaluable insights into the experiences, challenges, and perspectives of underrepresented groups in software engineering and, therefore, devise strategies to make the software industry more diverse. However, studying these hidden populations presents multifaceted challenges, including the complexities associated with identifying and engaging participants due to their marginalized status. In this paper, we discuss our experiences and lessons learned while conducting multiple studies involving hidden populations in software engineering. We emphasize the importance of recognizing and addressing these challenges within the software engineering research community to foster a more inclusive and comprehensive understanding of diverse populations of software professionals.
With Great Power Comes Great Responsibility: The Role of Software Engineers
Stefanie Betz, Birgit Penzenstadler
The landscape of software engineering is evolving rapidly amidst the digital transformation and the ascendancy of AI, leading to profound shifts in the role and responsibilities of software engineers. This evolution encompasses both immediate changes, such as the adoption of language-model-based approaches in coding, and deeper shifts driven by the profound societal and environmental impacts of technology. Despite the urgency, there persists a lag in adapting to these evolving roles. By fostering ongoing discourse and reflection on software engineers' roles and responsibilities, this vision paper seeks to cultivate a new generation of software engineers equipped to navigate the complexities and ethical considerations inherent in their evolving profession.
A refined empirical model of steady-state downburst based on high-resolution wind speed data obtained from wind tunnel tests
Bowen Yan, Ying Peng, Xu Cheng
et al.
A body of research on analytical and empirical models of downburst winds is available, but validation has typically relied on wind velocities sampled at a few typical positions in experimental measurements rather than on whole-flow-domain information. The capability of existing analytical or empirical models to reproduce the full development of downbursts at different stages therefore remains questionable. This paper refines an empirical model of downbursts based on high-resolution wind speed data obtained from wind tunnel tests, enabling the well-established empirical model to simulate the maximum wind speed of the vortex ring and to yield more accurate wind profiles in the outflow region. To overcome the bottleneck posed by the lack of experimental results covering the whole flow domain, a comprehensive experimental study based on an impinging jet simulator was conducted, and the spatial variations of downburst winds were systematically measured. The empirical model, which accounts for the nonlinear growth of boundary layer thickness, was then refined according to the experimental results, and two new empirical functions that rationally represent the spatial variation of horizontal wind velocities across the whole flow domain are proposed. The refined empirical model can facilitate the safety analysis of structures and buildings under downburst winds with a higher confidence level.
Engineering (General). Civil engineering (General)
Analytical Study of Stud Shear Connector Behavior in Steel–UHPC Composite Structures
Wei Du, Zhijian Hu, Zhi Zhou
Ultra-high performance concrete (UHPC) combined with shorter stud shear connectors (<i>h/d</i> < 4) presents challenges that existing analytical models for stud connectors cannot adequately address. This study enhances the elastic foundation beam model to better accommodate these material and dimensional changes. Key improvements include the analytical calculation of equivalent foundation stiffness, which incorporates the rotation of the stud head—an aspect often neglected in previous research—and considers the post-yield plastic hinge at the stud weld. The proposed analytical model effectively captures variations in stud diameter and concrete elastic modulus, providing a load–slip curve with broader applicability than traditional empirical formulas. Validation against experimental data from 21 push-out specimens of varying diameters shows strong agreement, confirming the accuracy of the method. Moreover, a parametric study based on the analytical model reveals the sequential relationship between the formation of plastic hinges at the stud weld and the development of plastic regions in the concrete. This relationship is influenced by factors such as stud diameter, yield strength, and concrete strength. Notably, an increase in concrete strength significantly enhances the shear force at the stud root at the point when the concrete reaches its compressive strength. This explains why high-strength concrete specimens exhibit lower ultimate slip. These findings provide a crucial basis for understanding the behavior of stud shear connectors in composite structures.
Towards an Understanding of Large Language Models in Software Engineering Tasks
Zibin Zheng, Kaiwen Ning, Qingyuan Zhong
et al.
Large Language Models (LLMs) have drawn widespread attention and research due to their astounding performance in text generation and reasoning tasks. Derivative products, like ChatGPT, have been extensively deployed and highly sought after. Meanwhile, the evaluation and optimization of LLMs in software engineering tasks, such as code generation, have become a research focus. However, there is still a lack of systematic research on applying and evaluating LLMs in software engineering. Therefore, this paper comprehensively investigates and collates the research and products combining LLMs with software engineering, aiming to answer two questions: (1) What are the current integrations of LLMs with software engineering? (2) Can LLMs effectively handle software engineering tasks? To find the answers, we have collected related literature as extensively as possible from seven mainstream databases and selected 123 timely papers published from 2022 onward for analysis. We have categorized these papers in detail and reviewed the current research status of LLMs from the perspective of seven major software engineering tasks, hoping this will help researchers better grasp the research trends and address the issues when applying LLMs. Meanwhile, we have also organized and presented papers with evaluation content to reveal the performance and effectiveness of LLMs in various software engineering tasks, guiding researchers and developers in optimizing their use.
Emotions in Requirements Engineering: A Systematic Mapping Study
Tahira Iqbal, Hina Anwar, Syazwanie Filzah
et al.
The purpose of requirements engineering (RE) is to ensure that the expectations and needs of the stakeholders of a software system are met. Emotional needs can be captured as emotional requirements that represent how the end user should feel when using the system. Unlike functional and quality (non-functional) requirements, emotional requirements have received relatively little attention from the RE community. This study is motivated by the need to explore and map the literature on emotional requirements. The study applies the systematic mapping study technique for surveying and analyzing the available literature to identify the most relevant publications on emotional requirements. We identified 34 publications that address a wide spectrum of practices concerned with engineering emotional requirements. The identified publications were analyzed with respect to the application domains, the instruments used for eliciting and the artefacts used for representing emotional requirements, and the state of the practice in emotion-related requirements engineering. This analysis serves to identify research gaps and directions in engineering emotional requirements. To the best of the authors' knowledge, no other similar study has been conducted on emotional requirements.
Tunneling construction technology of shafts and cross-passages under strictly controlling deformation of the existing railway
Liu Liu, Gongwen Xu
et al.
As more and more buildings exist above ground, underground construction inevitably has adverse effects on adjacent existing structures. However, this situation has been reported by only a small number of researchers. In view of this, this article takes as its background the shaft and cross-passage project adjacent to the existing Airport Line on Beijing Metro Line 12, in the western suburbs, to study the impact of the construction of a subway station and shaft passage on the adjacent existing railway. Based on this project, and under the action of pavement load, the effects of different parameters (the distance between the surface measuring point and the centerline of the cross-passage, and the sub-step of construction loading) on the surface settlement and track deformation as the shaft and cross-passage pass under the existing railway are studied by numerical analysis. The results show that the construction method for the shaft and cross-passage is reasonable. The comprehensive reinforcement measures for the subgrade, rails, and tunnel are effective, controlling the deformation of the subgrade and rails within the standard values (surface settlement ≤60 mm, rail deformation ≤6 mm). In addition, the numerical simulation data represent the actual situation well overall.
Effect of Particle Shape on The Behavior of Polymer-Improved Sandy Soil Used in Pavements Due to Freeze-Thaw Cycles
Babak Karimi
Freeze-thaw cycles have a significant negative effect on the engineering behaviour of soil in cold regions. In this study, the compressive strength of stabilized, poorly graded sandy soil used in road pavement that was subjected to different freeze-thaw cycles was studied. Samples with three different particle shapes were stabilized with a binder developed by mixing polyvinyl acetate (PVAc) and ethylene glycol monobutyl ether (EGBE). The PVAc/EGBE weight ratio was 2:1, and PVAc was added at 1%, 2%, and 3% of the dry weight of the soil, with the effect of up to ten freeze-thaw cycles evaluated. Results showed that the addition of binder decreased optimum moisture content and increased compressive strength. An increase in particle roundness results in a decrease in the magnitude of compressive strength but increases the soil composite ductility. Changing particle shape from angular to rounded resulted in a more significant decrease in compressive strength than changing from rounded to well-rounded. The decrease in compressive strength is most significant between the first and fourth freezing-thawing cycles and marginal between the fourth and tenth. The negative effect of increasing the roundness of particles is compensated by increasing binder percentages.
Highway engineering. Roads and pavements, Bridge engineering
A Nonlinear Three-Dimensional Finite Element Analysis of Stress Distribution and Microstrain Evaluation in Short Dental Implants with Three Different Implant–Abutment Connections in Single and Splinted Conditions in the Posterior Mandible
Karishma S. Talreja, Shobha J. Rodrigues, Umesh Y. Pai
et al.
Background. Stress distribution plays a vital role in the longevity and success of implant-supported prostheses. This study evaluated the von Mises stress and microstrain in the peri-implant bone and at the implant–abutment junction of short dental implants with three different implant–abutment connections, in splinted and unsplinted conditions, using finite element analysis (FEA). Materials and Methods. In this experimental study, nine transversely isotropic finite element models were developed and divided into three equal groups (n = 3): control (Group AC), single standard 4.3 × 10 mm bone-level implant-supported restorations; Group AT, single short implant-supported restorations; and Group B, splinted short implant-supported restorations, each modeled with external hexagonal (EH), internal conical (IC), and internal trichannel (ITC) implant–abutment connections, respectively. A 200 N load was applied along the long axis of the implants, a 100 N oblique load was applied at 45°, and the von Mises stress and microstrain values were evaluated. Results. Single standard implants demonstrated the highest von Mises stress and microstrain values, followed by single short implants and splinted short implants, respectively. Among the implant–abutment connections, the IC connection showed the highest values and the ITC connection the lowest. Conclusion. Within the limitations of this study, it was concluded that splinting of short dental implants produced lower and more homogeneous stress and microstrain, especially under oblique loading. The microstrain values for all connections evaluated were within the physiological loading limit (200–2,500 με) and were hence considered safe for clinical use.
Game Engine Comparative Anatomy
Gabriel C. Ullmann, Cristiano Politowski, Yann-Gaël Guéhéneuc
et al.
Video game developers use game engines as a tool to manage complex aspects of game development. While engines play a big role in the success of games, to the best of our knowledge, they are often developed in isolation, in a closed-source manner, without architectural discussions, comparison, and collaboration among projects. In this work in progress, we compare the call graphs of two open-source engines: Godot 3.4.4 and Urho3D 1.8. While static analysis tools could provide us with a general picture without precise call graph paths, the use of a profiler such as Callgrind allows us to also view the call order and frequency. These graphs give us insight into the engines' designs. We showed that, by using Callgrind, we can obtain a high-level view of an engine's architecture, which can be used to understand it. In future work, we intend to apply both dynamic and static analysis to other open-source engines to understand architectural patterns and their impact on aspects such as performance and maintenance.
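Callgrind records dynamic caller–callee edges together with call counts and order, which is what distinguishes it from static analysis here. As a hedged analogy in Python rather than C++, the standard-library cProfile captures the same kind of dynamic call-graph data for a toy "engine loop" — the subsystem functions below are illustrative stand-ins, unrelated to Godot's or Urho3D's actual code:

```python
import cProfile
import io
import pstats

def physics_step():              # toy stand-ins for engine subsystems
    return sum(i * i for i in range(100))

def render_frame():
    return physics_step() + 1

def game_loop(frames=50):
    return [render_frame() for _ in range(frames)]

profiler = cProfile.Profile()
profiler.enable()
game_loop()
profiler.disable()

stats = pstats.Stats(profiler, stream=io.StringIO())
stats.calc_callees()             # builds caller -> callee edges: a dynamic call graph
# stats.stats maps (file, line, funcname) -> (cc, nc, tt, ct, callers);
# cc is the primitive (non-recursive) call count.
calls = {name[2]: cc for name, (cc, *_rest) in stats.stats.items()
         if name[2] in ("physics_step", "render_frame", "game_loop")}
```

Aggregating such edge counts over a run is what lets a profiler-based view expose both call frequency and call order, the two properties the comparison above relies on.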
Experimental Study on Bearing Capacity of Corroded Reinforced Concrete Arch Considering Material Degradation
Jingzhou Xin, Jieyun Wang
et al.
To study the bearing capacity of a corroded reinforced concrete (RC) arch and analyze the deterioration mechanism of an in-service RC arch bridge, deterioration under the coupled effect of environment and load was simulated by non-immersion energization while the dead load on the arch was maintained, and single-point loading tests of the arch models were then carried out. The crack development, structural deformation, and ultimate bearing capacity of a corroded RC arch under service stress were studied; the failure mode of the corroded arch was explored; and a bearing capacity prediction model considering the dual deterioration effects of reinforcement corrosion and arch axis deterioration was established. Results indicated that cracks caused by load on the non-corroded arch were more uniformly spaced, while the corroded arch showed fewer load-induced cracks over a smaller distribution range but a larger maximum crack width. Corrosion significantly reduced the strength of the arch rib: for the deteriorated arch with a corrosion rate of 7.62%, the cracking load and the bearing capacity decreased by 28.57% and 9.84%, respectively. Corrosion weakened structural stiffness but did not change the failure mode of the arch. Considering only section resistance degradation may underestimate the damaging effects of corrosion on the arch structure.