This paper investigates how software professionals perceive the economic implications of diversity in software engineering teams. Motivated by a gap in software engineering research, which has largely emphasized socio-technical and process-related outcomes, we adopted a qualitative interview approach to capture practitioners' reasoning about diversity in relation to economic and market-oriented considerations. Based on interviews with ten software professionals, our analysis indicates that diversity is perceived as economically relevant through its associations with cost reduction and containment, revenue generation, time to market, process efficiency, innovation, and market alignment. Participants typically grounded these perceptions in concrete project experiences rather than abstract economic reasoning, framing diversity as a practical resource that supports project delivery, competitiveness, and organizational viability. Our findings provide preliminary empirical insights into how economic aspects of diversity are understood in software engineering practice.
Industrial robots are extensively utilized in machinery manufacturing owing to their multiple degrees of freedom (DoF) and inherent flexibility. However, backlash not only impairs the bidirectional repeatability and multi-directional repeatability of the end-effector but also renders its identification and compensation more challenging in comparison to those associated with geometric errors. Existing backlash identification and compensation methods overlook the principle of backlash error transmission, a limitation that severely hinders improvements in both the bidirectional repeatability and multi-directional repeatability of the robot end-effector. In contrast to existing studies, this article presents the first systematic discussion on the correlation between backlash and both bidirectional repeatability and multi-directional repeatability, and proposes a pipeline including modeling, identification and compensation methods. Based on the principle of backlash error transmission, a mathematical model incorporating the reduction ratio correction coefficient, backlash, and joint rotation direction is developed and integrated into a robotic kinematic error model that includes higher-order joint-dependent error terms. Given that the pose deviation between adjacent coordinate systems is a linear function incorporating backlash and kinematic errors, a Taylor expansion is performed on this function with higher-order error terms omitted, and the iterative reweighted least squares (IRLS) algorithm is adopted to identify both backlash and kinematic parameter errors. A compensation method that accounts for both backlash and the compensation direction is also proposed, in which the compensation direction is determined by the joint velocity prior to the joint reaching the target position. Experiments were conducted to compare the single-axis rotation identification and compensation approach implemented on a 6-DoF robot with the innovative method proposed in this article. 
The experimental results indicate that the traditional single-axis method requires repeated single-axis movements for each joint, leading to low efficiency in backlash identification and compensation, and it fails to account for the impact of backlash on multi-directional repeatability. In contrast, the proposed method rapidly identifies backlash in each joint and improves bidirectional repeatability and multi-directional repeatability by 47.97% and 53.44%, respectively.
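The iterative reweighted least squares step used for identification can be illustrated in miniature as follows. This is a generic robust least-squares sketch under our own assumptions (toy linear model, hypothetical variable names), not the authors' full backlash/kinematic error model:

```python
import numpy as np

def irls(A, b, iters=20, eps=1e-6):
    """Iteratively reweighted least squares (robust L1-style fit).

    Solves A x ~= b while down-weighting outlier residuals, as one
    simple instance of the IRLS scheme mentioned in the abstract.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary LS start
    for _ in range(iters):
        r = np.abs(A @ x - b)                     # current residuals
        w = 1.0 / np.maximum(r, eps)              # robust weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Toy identification problem: recover two "error parameters" from
# noisy linear observations that include one gross outlier.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
x_true = np.array([0.3, -1.2])
b = A @ x_true + 0.01 * rng.normal(size=50)
b[0] += 5.0                                       # gross outlier
x_hat = irls(A, b)
```

Because the outlier's weight shrinks as its residual grows, the estimate stays close to the true parameters even though ordinary least squares would be pulled off.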
AI for software engineering has made remarkable progress recently, becoming a notable success within generative AI. Despite this, there are still many challenges that need to be addressed before automated software engineering reaches its full potential. It should be possible to reach high levels of automation where humans can focus on the critical decisions of what to build and how to balance difficult tradeoffs while most routine development effort is automated away. Reaching this level of automation will require substantial research and engineering efforts across academia and industry. In this paper, we aim to discuss progress towards this in a threefold manner. First, we provide a structured taxonomy of concrete tasks in AI for software engineering, emphasizing the many other tasks in software engineering beyond code generation and completion. Second, we outline several key bottlenecks that limit current approaches. Finally, we provide an opinionated list of promising research directions toward making progress on these bottlenecks, hoping to inspire future research in this rapidly maturing field.
The adoption of large language models (LLMs) and autonomous agents in software engineering marks an enduring paradigm shift. These systems create new opportunities for tool design, workflow orchestration, and empirical observation, while fundamentally reshaping the roles of developers and the artifacts they produce. Although traditional empirical methods remain central to software engineering research, the rapid evolution of AI introduces new data modalities, alters causal assumptions, and challenges foundational constructs such as "developer", "artifact", and "interaction". As humans and AI agents increasingly co-create, the boundaries between social and technical actors blur, and the reproducibility of findings becomes contingent on model updates and prompt contexts. This vision paper examines how the integration of LLMs into software engineering disrupts established research paradigms. We discuss how it transforms the phenomena we study, the methods and theories we rely on, the data we analyze, and the threats to validity that arise in dynamic AI-mediated environments. Our aim is to help the empirical software engineering community adapt its questions, instruments, and validation standards to a future in which AI systems are not merely tools, but active collaborators shaping software engineering and its study.
Joshua Owotogbe, Indika Kumara, Dario Di Nucci, et al.
Chaos engineering aims to improve the resilience of software systems by intentionally injecting faults to identify and address system weaknesses that cause outages in production environments. Although many tools for chaos engineering exist, their practical adoption has not yet been explored. This study examines 971 GitHub repositories that incorporate 10 popular chaos engineering tools to identify patterns and trends in their use. The analysis reveals that Toxiproxy and Chaos Mesh are the most frequently used, showing consistent growth since 2016 and reflecting increasing adoption in cloud-native development. The release of new chaos engineering tools peaked in 2018, followed by a shift toward refinement and integration, with Chaos Mesh and LitmusChaos leading in ongoing development activity. Software development is the most frequent application (58.0%), followed by unclassified purposes (16.2%), teaching (10.3%), learning (9.9%), and research (5.7%). Development-focused repositories tend to have higher activity, particularly for Toxiproxy and Chaos Mesh, highlighting their industrial relevance. Fault injection scenarios mainly address network disruptions (40.9%) and instance termination (32.7%), while application-level faults remain underrepresented (3.0%), highlighting a direction for future exploration.
As an essential highway safety facility, roadside W-beam guardrails effectively prevent errant vehicles from entering hazardous zones or causing secondary collisions by blocking and redirecting them, thereby reducing accident severity. With the rapid development of the automotive industry, the front bumper height of small passenger cars generally ranges between 405 mm and 485 mm. However, the lower edge height of the current Chinese Class A W-beam guardrail is 444 mm above the ground, which leads to a high risk of “underride” during collisions, resulting in elevated occupant injury risks. To address this issue, this paper proposes an optimized guardrail structure composed of a double W-beam and a C-type beam, aiming to reduce the underride risk for small passenger cars while accommodating multi-vehicle protection needs. In this design, the double W-beam is installed at a height of 560 mm and the C-type beam at 850 mm, connected to circular posts using a regular hexagonal anti-obstruction block. The beam thickness is uniformly 3 mm, while the thickness of other components is 4 mm. To systematically evaluate the impact of material strength on both safety performance and cost, two material configurations are proposed: Scheme 1 uses Q235 carbon steel for all components; Scheme 2 reduces the thickness of the C-type beam to 2.5 mm and employs Q355 high-strength low-alloy steel, with the thickness of the connected anti-obstruction block reduced to 3.5 mm, while the other components retain Q235 steel and unchanged structural dimensions. Using finite element simulation, collisions involving small passenger cars, medium trucks, and buses are simulated, and performance comparisons are conducted based on vehicle trajectory and guardrail deformation. For the small passenger car scenario, risk quantification indicators—Acceleration Severity Index (ASI), Theoretical Head Impact Velocity (THIV), and Post-impact Head Deceleration (PHD)—are introduced to assess occupant injury. 
The results demonstrate that Scheme 2 not only meets the required protection level but also significantly reduces occupant risk for small passenger cars, lowering the injury rating from Class C to Class B. Moreover, the overall structural mass is reduced by approximately 1407 kg per kilometer, with material costs decreased by about RMB 10,129, demonstrating favorable economic efficiency. The proposed structural optimization not only effectively mitigates small car underride and improves multi-vehicle protection performance but also provides the industry with a novel guardrail geometric design directly applicable to engineering practice. The technical approach of enhancing material strength and reducing component thickness also offers a feasible reference for lightweight design, material savings, and cost optimization of guardrail systems, contributing significantly to improving the safety and sustainability of road transportation infrastructure.
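For reference, the Acceleration Severity Index used above is commonly defined (e.g., in EN 1317) from 50 ms moving averages of the vehicle acceleration components, normalized by reference limit accelerations:

```latex
\mathrm{ASI}(t) = \sqrt{\left(\frac{\bar{a}_x}{12g}\right)^2
                      + \left(\frac{\bar{a}_y}{9g}\right)^2
                      + \left(\frac{\bar{a}_z}{10g}\right)^2},
\qquad \mathrm{ASI} = \max_t \mathrm{ASI}(t)
```

where $\bar{a}_x$, $\bar{a}_y$, $\bar{a}_z$ are the 50 ms moving averages of the longitudinal, lateral, and vertical accelerations, and $12g$, $9g$, $10g$ are the corresponding limit values. The exact formulation used in the study may follow the applicable Chinese evaluation standard rather than EN 1317.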
This study investigates six types of prediction methods for estimating extreme bridge traffic load effects, aiming to establish a correlation between prediction accuracy and data quality. Accurately determining the distribution functions of maximum values is crucial for assessing bridge safety under traffic loads. The methods investigated include the Peaks Over Threshold approach, the block maxima approach, fitting to a Normal distribution, and the Rice-formula-based level-crossing method. Additionally, Bayesian Updating and Predictive Likelihood techniques, integrated with the block maxima approach, are explored. The performance of these methods is assessed using two distinct datasets. The first dataset is generated from a known distribution, allowing the estimated distribution parameters and extreme values derived from each method to be compared with the true values. The analysis is then extended to more realistic scenarios, where long-run simulations provide benchmark results for evaluating the accuracy of each method. Based on the findings, recommendations are provided for selecting the most suitable prediction method, considering factors such as sample size, time interval, and the type of load effect. This work offers practical insights for improving the reliability of extreme value prediction methods in bridge safety assessments.
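As a concrete illustration of the block maxima approach discussed above, the following sketch splits a load-effect series into blocks, takes each block maximum, and fits a Gumbel distribution by the method of moments. The fitting choice and all names are illustrative assumptions, not the paper's exact procedure:

```python
import math, random, statistics

def fit_gumbel_block_maxima(series, block_size):
    """Method-of-moments Gumbel fit to block maxima (illustrative)."""
    maxima = [max(series[i:i + block_size])
              for i in range(0, len(series) - block_size + 1, block_size)]
    beta = math.sqrt(6) * statistics.stdev(maxima) / math.pi  # scale
    mu = statistics.mean(maxima) - 0.5772 * beta              # location
    return mu, beta

def gumbel_return_level(mu, beta, n_blocks):
    """Level exceeded on average once in n_blocks blocks."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / n_blocks))

# Synthetic "load effect" record: 100 blocks of 500 observations each.
random.seed(1)
series = [random.gauss(0.0, 1.0) for _ in range(50_000)]
mu, beta = fit_gumbel_block_maxima(series, block_size=500)
level_100 = gumbel_return_level(mu, beta, 100)  # 100-block return level
```

In the paper's setting the "true" answer is known for the first dataset, so a fit like this could be checked directly against the generating distribution.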
Standard Operating Procedures (SOPs) serve a critical role in complex systems operations, guiding operator response during normal and emergency scenarios. This study compares 29 SOPs (517 steps) across three domains with varying operator selection rigor: airline operations, Habitable Airlock (HAL) operations, and semi-autonomous vehicles. Using the extended Procedure Representation Language (e-PRL) framework, each step was decomposed into perceptual, cognitive, and motor components, enabling quantitative analysis of step types, memory demands, and training requirements. Monte Carlo simulations compared Time on Procedure against the Allowable Operational Time Window to predict failure rates. The analysis revealed three universal vulnerabilities: missing verification steps after waiting requirements (70% in airline operations, 58% in HAL operations, and 25% in autonomous vehicle procedures), ambiguous perceptual cues (15–48% of steps), and excessive memory demands (highest in HAL procedures, at a 71% average recall score). Procedure failure probabilities varied significantly (5.72% to 63.47% across domains), with autonomous vehicle procedures showing the greatest variability despite minimal operator selection. Counterintuitively, Habitable Airlock procedures requiring the most selective operators had the highest memory demands, suggesting that rigorous operator selection may compensate for procedure design deficiencies. These findings establish that procedure design approaches vary by domain based on assumptions about operator capabilities rather than universal human factors principles.
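The Time on Procedure comparison described above can be sketched with a small Monte Carlo simulation. The step-time distribution, coefficient of variation, and all parameter values below are our own illustrative assumptions, not the study's model:

```python
import random

def failure_rate(step_means, window, n_trials=20_000, cv=0.3, seed=42):
    """Estimate P(Time on Procedure > Allowable Operational Time Window).

    Each step duration is drawn from a normal distribution (truncated at
    zero) with a fixed coefficient of variation; both the distribution
    and cv=0.3 are assumptions made for this sketch.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        top = sum(max(0.0, rng.gauss(m, cv * m)) for m in step_means)
        failures += top > window
    return failures / n_trials

steps = [10.0, 20.0, 15.0]            # hypothetical step means, seconds
loose = failure_rate(steps, window=70.0)   # generous time window
tight = failure_rate(steps, window=45.0)   # window equal to the mean ToP
```

As expected, shrinking the allowable window toward the mean completion time drives the estimated failure probability toward 50% and beyond.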
With the gradual shift of high-grade highways in China from construction to maintenance, higher performance requirements have been placed on asphalt binders. High-content SBS modified asphalt has become an inevitable choice for both new pavement construction and maintenance. However, conventional high-content SBS modified asphalt suffers from high energy consumption, excessive carbon emissions, and poor construction workability. In this study, a self-developed warm-mix additive was introduced into high-content SBS modified asphalt, and the process was optimized to obtain warm-mixed SBS modified asphalt. The effects of the additive on asphalt performance and warm-mix efficiency were evaluated in terms of viscosity-temperature characteristics, rheological properties, and thermal properties, while the viscosity-reduction mechanism was further revealed through microstructural analysis. The results show that when the mixing ratio of additive A to additive B is 2% to 1%, the warm-mixed SBS modified asphalt exhibits optimal performance: the softening point increases by 1.8 ℃, the ductility at 5 ℃ improves by 5.7 cm, and the rotational viscosity at 135 ℃ decreases by 0.9 Pa·s, significantly enhancing construction workability. Rheological tests demonstrate that both high- and low-temperature performance meet the PG76-22 grade requirements. Microstructural observations confirm that no new chemical substances are generated during the viscosity-reduction process; instead, the additive functions as a lubricant in the molten state to reduce viscosity through physical action and serves as a skeleton in the solid state to reinforce the binder and improve its rheological properties.
As pivotal drivers of smart cities, mega-mobility systems integrate large-scale transportation networks, communication nodes, and energy circuits into a coupled multinetwork system. Urban megasystems epitomize the grand challenge of “organized complexity”, exhibiting characteristic features such as adaptive openness, nonlinear dynamics, hierarchical organization, and emergent properties. Analytical investigations, constrained by the rigid separation of macro- and microlevel paradigms, struggle to capture the nonlinear interdependencies across levels that define mega-mobility systems. In this review, we systematically advance macro–micro integration with feedback (MMIF) as a transformative paradigm for analyzing urban mega-mobility systems, synthesizing the state-of-the-art developments in typical constituent subsystems under this unified perspective. The MMIF paradigm bridges the gap between theoretical abstraction and empirical practice, contributing to scientifically sound urban development by harmonizing emergent patterns with granular behavioral dynamics. Building upon this paradigm, we investigate the key methods and technologies empowered by artificial intelligence that enable MMIF and critically analyze the enduring challenges and prospective research directions. As urban mobility systems increasingly serve as test beds for complexity science, the MMIF paradigm using artificial intelligence promises to reshape interdisciplinary collaboration, offering a blueprint for building intelligent, adaptive, and human-centric cities.
Short-term traffic flow prediction is a vital branch of the Intelligent Traffic System (ITS) and plays an important role in traffic management. Graph convolutional networks (GCNs) are widely used in traffic prediction models to better handle the graph-structured data of road networks. However, the influence weights among different road sections are usually distinct in real life and hard to analyze manually. The traditional GCN mechanism, which relies on a manually set adjacency matrix, is unable to learn such spatial patterns dynamically during training. To address this drawback, this paper proposes a novel location graph convolutional network (Location-GCN). Location-GCN solves this problem by adding a new learnable matrix into the GCN mechanism and using the absolute value of this matrix to represent the distinct influence levels among different nodes. Long short-term memory (LSTM) is then employed in the proposed traffic prediction model. Moreover, trigonometric function encoding is used in this study to enable the short-term input sequence to convey long-term periodic information. Ultimately, the proposed model is compared with baseline models and evaluated on two real-world traffic flow datasets. The results show that our model is more accurate and robust on both datasets than other representative traffic prediction models.
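The core idea of weighting a fixed adjacency matrix with a learned influence matrix can be sketched as a single propagation step. This is a minimal NumPy illustration under our own simplifications (row normalization, ReLU, and all names are ours), not the paper's exact layer:

```python
import numpy as np

def location_gcn_layer(A, H, W_loc, W_feat):
    """One Location-GCN style propagation step (illustrative sketch).

    Rather than using the fixed adjacency matrix A alone, the layer
    multiplies it elementwise by |W_loc|, a learnable matrix whose
    absolute value encodes the distinct influence level between each
    pair of road sections, as described in the abstract.
    """
    A_eff = A * np.abs(W_loc)                    # learned influence weights
    deg = A_eff.sum(axis=1, keepdims=True) + 1e-8
    A_norm = A_eff / deg                         # simple row normalization
    return np.maximum(A_norm @ H @ W_feat, 0.0)  # aggregate, project, ReLU

rng = np.random.default_rng(0)
n_nodes, in_dim, out_dim = 5, 4, 3
A = (rng.random((n_nodes, n_nodes)) < 0.5).astype(float)
np.fill_diagonal(A, 1.0)                         # self-loops
H = rng.normal(size=(n_nodes, in_dim))           # node (road section) features
W_loc = rng.normal(size=(n_nodes, n_nodes))      # learnable influence matrix
W_feat = rng.normal(size=(in_dim, out_dim))      # feature projection
out = location_gcn_layer(A, H, W_loc, W_feat)
```

In training, `W_loc` would be updated by gradient descent alongside the other weights, letting the model discover influence levels instead of fixing them by hand.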
Egor Klimov, Muhammad Umair Ahmed, Nikolai Sviridov, et al.
Bus factor (BF) is a metric that tracks knowledge distribution in a project: the minimal number of engineers that have to leave for the project to stall. Although several algorithms for calculating the bus factor exist, only a few tools allow easy calculation of the bus factor and convenient analysis of the results for projects hosted on Git-based providers. We introduce Bus Factor Explorer, a web application that provides an interface and an API to compute, export, and explore the bus factor metric via treemap visualization, a simulation mode, and a chart editor. It supports repositories hosted on GitHub, and allows searching for repositories in the interface and processing many repositories at the same time. Our tool lets users identify the files and subsystems at risk of stalling in the event of developer turnover by analyzing the VCS history. The application and its source code are publicly available on GitHub at https://github.com/JetBrains-Research/bus-factor-explorer. The demonstration video can be found on YouTube: https://youtu.be/uIoV79N14z8
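The metric itself can be illustrated with a toy greedy computation: repeatedly remove the developer who knows the most files and count how many removals it takes before most files have no knowledgeable author left. This is a simplified illustration of the definition above, not the algorithm used by Bus Factor Explorer:

```python
def bus_factor(file_owners):
    """Toy bus factor: smallest number of top contributors whose
    departure leaves more than half of the files without any
    knowledgeable author. file_owners maps file -> set of developers.
    """
    files = list(file_owners)
    developers = {d for owners in file_owners.values() for d in owners}
    gone = set()
    for removed in range(len(developers) + 1):
        # A file is "orphaned" once all its knowledgeable authors left.
        orphaned = sum(1 for f in files if file_owners[f] <= gone)
        if orphaned * 2 > len(files):      # majority of files orphaned
            return removed
        # Greedily remove the remaining developer who knows the most files.
        remaining = developers - gone
        if not remaining:
            break
        best = max(remaining,
                   key=lambda d: sum(1 for f in files if d in file_owners[f]))
        gone.add(best)
    return len(developers)

repo = {"a.py": {"alice"}, "b.py": {"alice"},
        "c.py": {"alice", "bob"}, "d.py": {"bob"}}
bf = bus_factor(repo)
```

Real implementations weight contributions by commit history from the VCS rather than using flat ownership sets, but the stalling condition is the same in spirit.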
In the 20th century, individual technology products like the generator, telephone, and automobile were connected to form many of the large-scale, complex, infrastructure networks we know today: the power grid, the communication infrastructure, and the transportation system. Progressively, these networked systems began interacting, forming what is now known as systems-of-systems. Because the component systems in the system-of-systems differ, modeling and analysis techniques with primitives applicable across multiple domains or disciplines are needed. For example, linear graphs and bond graphs have been used extensively in the electrical engineering, mechanical engineering, and mechatronic fields to design and analyze a wide variety of engineering systems. In contrast, hetero-functional graph theory (HFGT) has emerged to study many complex engineering systems and systems-of-systems (e.g. electric power, potable water, wastewater, natural gas, oil, coal, multi-modal transportation, mass-customized production, and personalized healthcare delivery systems). This paper seeks to relate hetero-functional graphs to linear graphs and bond graphs and demonstrate that the former is a generalization of the latter two. The contribution is relayed in three stages. First, the three modeling techniques are compared conceptually. Next, these techniques are contrasted on six example systems: (a) an electrical system, (b) a translational mechanical system, (c) a rotational mechanical system, (d) a fluidic system, (e) a thermal system, and (f) a multi-energy (electro-mechanical) system. Finally, this paper proves mathematically that hetero-functional graphs are a formal generalization of both linear graphs and bond graphs.
[Objective] Frequent cracking faults are observed in the grounding terminals of a specific metro train model during operation. It is therefore essential to analyze the causes of grounding terminal cracking in this train. [Method] Line tests are conducted on faulty sections. By collecting vibration acceleration and stress signals from the relevant components, the time-frequency characteristics of the different signals are studied and the causes of grounding terminal cracking are analyzed. By analyzing the dynamic stress variations of the grounding terminals with different bracket and cable cross-section combinations, the compatibility of cable brackets with various structures is evaluated. Terminal bracket selection schemes for three positions (grounding shaft end, speed shaft end, and anti-slip shaft end) are analyzed. [Result & Conclusion] Under identical external loads, the short-arm brackets exhibit lower stress levels. The new grounding terminals demonstrate a superior structural design compared to the original grounding terminals, and the fatigue life of the former is higher than that of the latter.
As software engineering research becomes more concerned with the psychological, sociological and managerial aspects of software development, relevant theories from reference disciplines are increasingly important for understanding the field's core phenomena of interest. However, the degree to which software engineering research draws on relevant social sciences remains unclear. This study therefore investigates the use of social science theories in five influential software engineering journals over 13 years. It analyzes not only the extent of theory use but also what, how and where these theories are used. While 87 different theories are used, less than two percent of papers use a social science theory, most theories are used in only one paper, most social sciences are ignored, and the theories are rarely tested for applicability to software engineering contexts. Ignoring relevant social science theories may (1) undermine the community's ability to generate, elaborate and maintain a cumulative body of knowledge; and (2) lead to oversimplified models of software engineering phenomena. More attention to theory is needed for software engineering to mature as a scientific discipline.