Dayalan R. Gunasegaram, Najmeh Samadiani, Nathan G. March
et al.
Phenomenological crystal plasticity (CP) models are widely used in Integrated Computational Materials Engineering (ICME) to link microstructural features with engineering-scale mechanical behaviour. Their practical use, however, is limited by the high computational cost of physics-based simulations and the labour-intensive nature of parameter calibration, challenges that are amplified in additively manufactured materials with location-dependent properties. To address these obstacles, we first developed deep neural network (DNN) surrogate models of the physics simulations to predict the stress–strain response of an additively manufactured AlSi10Mg alloy. Twenty-five experimentally derived scenarios (five microstructures × five sets of grain orientations) were used to train 25 separate DNNs, with training datasets of validated material behaviour generated on the Düsseldorf Advanced Material Simulation Kit (DAMASK) platform using a Fast Fourier Transform (FFT)-based solver. Once trained, the DNNs produced stress–strain curves almost instantaneously, enabling an exhaustive grid-search exploration of a vast parameter space. This approach yielded significant, comprehensively quantified efficiency gains. The best-fit CP parameters obtained in this way are expected to be more accurate than those derived from conventional trial-and-error calibration, which is restricted to a limited number of candidate values. In addition, the minimum number of CP-FFT simulations required to train the DNNs with sufficient accuracy was identified, reducing the need for costly physics simulations in future studies. The proposed framework enhances the practical utility of CP models for simulation-informed materials engineering and optimisation and is broadly applicable to parameter identification in phenomenological models in other domains.
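To make the calibration loop concrete, the following minimal sketch assumes a trained Keras surrogate that maps a vector of CP hardening parameters to a stress–strain curve sampled at fixed strain points. The file names, the parameter names (xi_0, xi_inf, h_0), and their ranges are illustrative assumptions, not values from the study.

```python
# Minimal sketch of surrogate-driven grid-search calibration. The model file,
# target data, parameter names, and ranges are all illustrative assumptions.
import itertools
import numpy as np
from tensorflow import keras

surrogate = keras.models.load_model("dnn_surrogate.keras")  # hypothetical trained DNN
target_curve = np.load("experimental_stress.npy")           # hypothetical measured curve

# Candidate values for three phenomenological hardening parameters (illustrative).
grid = {
    "xi_0":   np.linspace(50e6, 150e6, 50),   # initial slip resistance, Pa
    "xi_inf": np.linspace(150e6, 400e6, 50),  # saturation slip resistance, Pa
    "h_0":    np.linspace(0.5e9, 5.0e9, 50),  # hardening modulus, Pa
}

# Exhaustive grid: 50^3 = 125,000 parameter sets, evaluated in one batched call,
# feasible only because the surrogate replaces the expensive CP-FFT simulation.
combos = np.array(list(itertools.product(*grid.values())))
predicted = surrogate.predict(combos, batch_size=4096, verbose=0)
errors = np.mean((predicted - target_curve) ** 2, axis=1)  # misfit per candidate

best = dict(zip(grid, combos[np.argmin(errors)]))
print("best-fit CP parameters:", best)
```

Evaluating the whole grid in one batched forward pass is what turns a calibration that would take thousands of physics simulations into a near-instantaneous search.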
In this paper, the migration and distribution behavior of Cu, Fe, Si and Mg impurities during the preparation of high-purity indium by the zone melting method in an argon atmosphere was studied. The temperature distribution at the location of the indium samples during zone melting was simulated using Fluent software, and the temperature required to keep the molten-zone length of the samples stable during the experiment was determined to be 162 °C. The effects of the moving speed of the molten zone (0.5 and 1.0 mm/min) and the number of zone-melting passes (5, 10, 15 and 20) on the migration and distribution of Cu, Fe, Si and Mg impurities in In were systematically studied. At a molten-zone moving speed of 0.5 mm/min, after 15 zone-melting passes, the contents of the impurities Cu, Fe, Si and Mg in the middle of the sample (i.e., 140 mm from the head end) were 0.1208, 0.007, 0.0615 and 0.0211 mg/kg, respectively. The removal rates of Cu, Fe and Si exceed 80 %, and the removal rate of Mg exceeds 70 %. The experimental results show that the zone melting method removes Cu, Fe and Si impurities effectively, and that these impurities are easier to remove than Mg. By optimizing the moving speed of the molten zone and the number of zone-melting passes, high-purity In with a lower content of Cu, Fe and Si impurities was prepared.
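For intuition about the impurity migration described above, the classical single-pass zone-refining model (Pfann's equation) predicts the axial concentration profile under idealised assumptions (constant zone length, effective distribution coefficient k < 1 for impurities swept toward the tail). This is a textbook sketch, not the paper's analysis, and the k and zone-length values below are illustrative.

```python
# Textbook single-pass zone-refining profile: C(x) = C0 * (1 - (1 - k) * exp(-k x / l)),
# valid ahead of the final zone; k and l below are illustrative, not fitted values.
import numpy as np

def single_pass_profile(x_mm, c0, k, zone_length_mm):
    """Impurity concentration along the ingot after one pass of the molten zone."""
    return c0 * (1.0 - (1.0 - k) * np.exp(-k * x_mm / zone_length_mm))

x = np.linspace(0.0, 280.0, 281)  # position from the head end, mm (illustrative ingot)
c = single_pass_profile(x, c0=1.0, k=0.3, zone_length_mm=20.0)
print(f"relative impurity content at mid-ingot (140 mm): {c[140]:.3f}")
```

With these placeholder values a single pass barely depletes the middle of the ingot, which is consistent with the paper's use of repeated passes (up to 20) to achieve high removal rates.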
Timo Kehrer, Robert Haines, Guido Juckeland
et al.
Anecdotal evidence suggests that Research Software Engineers (RSEs) and Software Engineering Researchers (SERs) often use different terminologies for similar concepts, creating communication challenges. To better understand these divergences, we have started investigating how SE fundamentals from the SER community are interpreted within the RSE community, identifying aligned concepts, knowledge gaps, and areas for potential adaptation. Our preliminary findings reveal opportunities for mutual learning and collaboration, and our systematic methodology for terminology mapping provides a foundation for a crowd-sourced extension and validation in the future.
To address the lack of energy-carbon efficiency evaluation and the underutilization of low-temperature waste heat in traditional direct reduction iron (DRI) production, this paper proposes a novel zero-carbon hydrogen metallurgy system that integrates the recovery and utilization of low-temperature and high-temperature waste heat, internal energy, and cold energy during hydrogen production, storage, reaction and circulation. First, detailed mathematical models are developed to describe the energy and exergy characteristics of the operational components of the proposed system. Energy efficiency, exergy efficiency, and energy-carbon efficiency indices are then introduced from a full life-cycle perspective of energy flow, avoiding overlaps in energy inputs and outputs. Subsequently, the efficiency metrics of the proposed system are compared with those of traditional DRI production systems with H$_2$/CO ratios of 6:4 and 8:2; the comparative results demonstrate the superiority and advancement of the proposed zero-carbon hydrogen metallurgy system. Finally, a sensitivity analysis reveals that the overall electricity generated by incorporating the ORC and expander equipment exceeds the heat energy recovered from the furnace top gas, highlighting the potential of waste energy utilization.
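To make the exergy bookkeeping concrete, the sketch below applies the standard Carnot-factor expression for the exergy of a heat stream, Ex = Q(1 - T0/T). The stream temperatures, heat duties, and recovered outputs are placeholder numbers, not results from the paper's component models.

```python
# Standard exergy of a heat flow Q at temperature T with ambient (dead-state)
# temperature T0: Ex = Q * (1 - T0 / T). All stream values below are placeholders.
T0 = 298.15  # ambient temperature, K

def heat_exergy(q_kw, t_kelvin):
    """Exergy content (kW) of a heat flow q_kw delivered at t_kelvin."""
    return q_kw * (1.0 - T0 / t_kelvin)

# Illustrative streams: high-temperature furnace top gas vs. low-temperature waste heat.
streams = {"furnace_top_gas": (500.0, 1073.15), "low_T_waste_heat": (300.0, 393.15)}
exergy_in = sum(heat_exergy(q, t) for q, t in streams.values())
energy_in = sum(q for q, _ in streams.values())

recovered_energy, recovered_exergy = 600.0, 300.0  # hypothetical useful outputs, kW
print(f"energy efficiency: {recovered_energy / energy_in:.2f}")
print(f"exergy efficiency: {recovered_exergy / exergy_in:.2f}")
```

Note how the low-temperature stream contributes a large share of the heat but a small share of the exergy, which is precisely why low-temperature waste heat tends to be underutilized without dedicated recovery equipment such as an ORC.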
Lekshmi Murali Rani, Richard Berntsson Svensson, Robert Feldt
The integration of AI for Requirements Engineering (RE) presents significant benefits but also poses real challenges. Although RE is fundamental to software engineering, limited research has examined AI adoption in RE. We surveyed 55 software practitioners to map AI usage across four RE phases -- Elicitation, Analysis, Specification, and Validation -- and four approaches to decision making: human-only decisions, AI validation, Human-AI Collaboration (HAIC), and full AI automation. Participants also shared their perceptions, challenges, and opportunities when applying AI to RE tasks. Our data show that 58.2% of respondents already use AI in RE, and 69.1% view its impact as positive or very positive. HAIC dominates practice, accounting for 54.4% of all RE techniques, while full AI automation remains minimal at 5.4%. Passive AI validation (4.4 to 6.2%) lags even further behind, indicating that practitioners value AI's active support over passive oversight. These findings suggest that AI is most effective when positioned as a collaborative partner rather than a replacement for human expertise. They also highlight the need for RE-specific HAIC frameworks along with robust and responsible AI governance as AI adoption in RE grows.
Daniel Mendez, Paris Avgeriou, Marcos Kalinowski
et al.
Empirical Software Engineering has received much attention in recent years and has become a de facto standard for scientific practice in Software Engineering. However, while extensive guidelines are nowadays available for designing, conducting, reporting, and reviewing empirical studies, similar attention has not yet been paid to teaching empirical software engineering. Closing this gap is the scope of this edited book. In the following editorial introduction, we, the editors, set the foundation by laying out the larger context of the discipline to position the remainder of this book.
Kevin Hermann, Sven Peldszus, Jan-Philipp Steghöfer
et al.
Software security is of utmost importance for most software systems. Developers must systematically select, plan, design, implement, and, especially, maintain and evolve security features -- functionalities such as cryptography or access control that mitigate attacks or protect personal data -- to ensure the security of their software. Although security features are usually available in libraries, integrating them requires writing and maintaining additional security-critical code. While there have been studies on the use of such libraries, surprisingly little is known about how developers engineer security features, how they select which security features to implement, which ones may require custom implementation, and what the implications for maintenance are. As a result, we currently rely on assumptions that are largely based on common sense or individual examples. However, to provide practitioners with effective solutions, researchers need hard empirical data to understand what practitioners need and how they view security -- data that we currently lack. To fill this gap, we contribute an exploratory study with 26 knowledgeable industrial participants. We study how security features of software systems are selected and engineered in practice, what their code-level characteristics are, and what challenges practitioners face. Based on the empirical data gathered, we provide insights into engineering practices and validate four common assumptions.
A.A. Rotkovich, D.I. Tishkevich, I.U. Razanau
et al.
Composite materials based on a polymer matrix of linear low-density polyethylene (LLDPE) and W were produced by thermal pressing. The content of W in the samples varied from 0 to 70 %. The recycling properties of LLDPE are demonstrated in this study, which significantly helped to reduce the defects. The microstructure of the composites consists of well-defined W grains covered by elastic LLDPE fibers. A homogeneous distribution of tungsten in the polymer matrix was observed for sample W70. The EDX analysis showed the presence of tungsten and carbon (from the polymer component). The XRD analysis confirms the increase in W content in the samples. The FTIR spectra of the composites showed an increase in the content of terminal methyl groups, a decrease in the molecular weight, and a decrease in the degree of crystallinity of the polyethylene matrix with an increase in the W content in the composite. The sample with 70 % W content has the highest effective density (2.61 g/cm3). The sample relative density ranges from 93.3 to 97.7 %. The porosity of LLDPE-W composites does not exceed 7 %. Gamma radiation shielding efficiency parameters such as LAC, HVL, and MFP were calculated using Phy-X/PSD. The radiation source was Co60, with an emission range of 0.8–2.5 MeV. As the gamma energy increases, it is observed that the values of all the parameters deteriorate. However, the sample with a maximum W content of 70 % has the best values of LAC, HVL, and MFP among the other samples.
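The HVL and MFP quoted in the abstract below follow directly from the linear attenuation coefficient: HVL = ln 2 / LAC and MFP = 1 / LAC. The short sketch after the abstract applies these relations; the LAC values used there are placeholders, not the Phy-X/PSD results for the LLDPE-W samples.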
Tobias Rudolph, Peter Goerke-Mallet, Andre Homölle
et al.
Integrated geo- and environmental monitoring in mining is a high-dimensional challenge (location, altitude/depth, time and sensors). It is demanding even for experts and makes it difficult for the many participants and stakeholders to build a complete understanding of the processes involved. The Epe research cooperation aims to elucidate the ground movement at the Epe cavern storage facility through a public participation process. The cooperation was founded by the city of Gronau, the citizens' initiative cavern field Epe, the company EFTAS, Münster, and the Research Center of Post-Mining at the Technische Hochschule Georg Agricola, Bochum, and is the first in Germany to involve direct collaboration between science and the public. In the cavern field, which has been in operation since the 1970s, brine is extracted, while natural gas, crude oil and helium, and in the future hydrogen, are stored in the subsurface. The technical focus of this work was the development of a high-resolution spatiotemporal analysis of ground movements. The area is monitored annually by the mining company's mine surveyor. The monitoring is complicated by the fact that the western part is a bog area and a former bog area, while the soils in the eastern part are very humus-rich and show strong fluctuations in the groundwater, and therefore complex hydraulic conditions. At the same time, there are few fixed scatterers or prominent points in the area that would allow high-resolution spatiotemporal monitoring with simple radar interferometry methods. Therefore, the Small Baseline Subset (SBAS) method, an area-based approach, was used to analyze the radar interferometric datasets. Using an SBAS analysis, it was possible to evaluate a time series of 760 scenes over the period from 2015 to 2023. The results were integrated with the mine surveyor's maps of ground movement and other open geodata on the surface, the soil layers and the overburden. The results show complex forms of ground movement. The main influence is that of mining. Nevertheless, the influence of the organic soils, which dry out in drought years and swell in wet years, is considerable: in dry years ground subsidence accelerates, while in wet years it not only slows down but in some cases turns into uplift. This complexity of the ground movements, and the understanding of the processes required to interpret them, was communicated to the interested public at several public information events as part of the research cooperation. In this way, an understanding of the mining processes was built up and transparency about the use of the subsurface, also as part of the energy transition, was created. In technical terms, the research cooperation also provides a workflow for developing the annual mine survey maps into an integrated geo- and environmental monitoring system, with a transparent participatory geomonitoring process that supports resilience management at a mining location.
Participatory citizen platforms are innovative solutions for digitally engaging citizens more effectively in policy-making and deliberative democracy in general. Although these platforms have also been used in an engineering context, thus far there is no existing work connecting the platforms to requirements engineering. The present paper fills this notable gap. In addition to discussing the platforms in conjunction with requirements engineering, the paper elaborates on potential advantages and disadvantages, thus paving the way for a future pilot study in a software engineering context. With these engineering tenets, the paper also contributes to the research on large socio-technical software systems in a public sector context, including their implementation and governance.
Mehil B Shah, Mohammad Masudur Rahman, Foutse Khomh
Deep learning (DL) techniques have achieved significant success in various software engineering tasks (e.g., code completion by Copilot). However, DL systems are prone to bugs from many sources, including training data. Existing literature suggests that bugs in training data are highly prevalent, but little research has focused on understanding their impact on the models used in software engineering tasks. In this paper, we address this research gap through a comprehensive empirical investigation of three types of data prevalent in software engineering tasks: code-based, text-based, and metric-based. Using state-of-the-art baselines, we compare models trained on clean datasets with those trained on datasets with quality issues and without proper preprocessing. By analysing the gradients, weights, and biases of neural networks during training, we identify the symptoms of data quality and preprocessing issues. Our analysis reveals that quality issues in code data cause biased learning and gradient instability, whereas problems in text data lead to overfitting and poor generalisation of models. On the other hand, quality issues in metric data result in exploding gradients and model overfitting, and inadequate preprocessing exacerbates these effects across all three data types. Finally, we demonstrate the validity and generalisability of our findings using six new datasets. Our research provides a better understanding of the impact and symptoms of data bugs in software engineering datasets. Practitioners and researchers can leverage these findings to develop better monitoring systems and data-cleaning methods to help detect and resolve data bugs in deep learning systems.
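As a sketch of the kind of training-time instrumentation described above (tracking gradients to spot symptoms such as exploding or vanishing gradients), the PyTorch snippet below logs per-layer gradient norms during a training step. The thresholds and the generic model/loss/optimizer interface are illustrative assumptions, not the study's actual detection criteria.

```python
# Illustrative training-step instrumentation for data-bug symptoms;
# thresholds below are placeholders, not validated detection criteria.
import torch

def gradient_norms(model):
    """Per-layer gradient L2 norms, available after loss.backward()."""
    return {name: p.grad.norm().item()
            for name, p in model.named_parameters() if p.grad is not None}

def train_step(model, batch, targets, loss_fn, optimizer):
    optimizer.zero_grad()
    loss = loss_fn(model(batch), targets)
    loss.backward()
    norms = gradient_norms(model)
    # Symptom checks: extreme gradient norms often hint at data quality
    # or preprocessing issues rather than at the model architecture.
    if max(norms.values()) > 1e3:
        print("warning: possible exploding gradients", norms)
    if min(norms.values()) < 1e-8:
        print("warning: possible vanishing gradients / biased learning")
    optimizer.step()
    return loss.item()
```

Logging these norms (together with weight and bias statistics) per step makes it possible to correlate training instabilities with specific quality issues in the input data.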
Chang-Yang Hsieh, Shih-Yen Huang, Yu-Ren Chu
et al.
The presence of a second phase in the Mg–8Al–4Ca (at. %) alloy plays a significant role in both its corrosion behavior and the chemical conversion coating process. Using scanning Kelvin probe force microscopy (SKPFM), a lower Volta potential was measured for the second phase present on the surface. The β-Al-Ca phase has a higher electrochemical activity than the α-Mg matrix and may act as the micro-galvanic anode in a local electrochemical corrosion process. Transmission electron microscopy (TEM) examinations reveal that the β-Al-Ca phase is more susceptible to corrosion than the α-Mg matrix in aqueous solution, and its higher activity and corrosion rate accelerate hydrogen evolution on the α-Mg matrix during the cerium (Ce) conversion coating process. It was also found that, by immersing the bare Mg–Al–Ca alloy in deionized (DI) water, the β-Al-Ca phase exposed on the surface can be dissolved and converted in situ into aluminum hydroxide (Al(OH)3), after which the Ce conversion coating can be deposited via replacement reactions in the subsequent conversion coating process. A thicker Ce coating with smaller blisters was thus produced on the DI-treated Ce-coated Mg specimen, which indeed improves the corrosion resistance.
Denis Manuel Roa-García, Mario Enrique Uribe-Macías, Juan José Forero-Ortíz
In the music industry, with regard to project management, there is little information validating the application of good management practices. For this reason, a prospective analysis of the music industry was carried out in the artistic context of music project management in Ibagué. The study methodology followed a qualitative perspective with an exploratory and descriptive approach, a conceptual design, a strategic vision, and scenario construction. The research results helped artists recognise the importance of acquiring managerial skills to strengthen project management in the music industry, up to the point of establishing a music project management office, among other goals for the next 10 years (2022 to 2032). It is vitally important that the social actors bring about this change and follow the strategies proposed in this research.
Stop words, which are considered non-predictive, are often eliminated in natural language processing tasks. However, the definition of uninformative vocabulary is vague, so most algorithms use general knowledge-based stop lists to remove stop words. There is an ongoing debate among academics about the usefulness of stop word elimination, especially in domain-specific settings. In this work, we investigate the usefulness of stop word removal in a software engineering context. To do this, we replicate and experiment with three software engineering research tools from related work. Additionally, we construct a corpus of software engineering domain-related text from 10,000 Stack Overflow questions and identify 200 domain-specific stop words using traditional information-theoretic methods. Our results show that using domain-specific stop words significantly improved the performance of the research tools compared to a general stop list, with 17 out of 19 evaluation measures showing better performance. Online appendix: https://zenodo.org/record/7865748
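The abstract does not fix the exact information-theoretic criterion, so the sketch below uses one common choice: ranking terms by the normalised Shannon entropy of their distribution across documents, weighted by frequency, so that frequent, uniformly spread terms surface as candidate domain stop words. The scoring, the rare-term cut-off, and the tiny demo corpus are illustrative, not the paper's method.

```python
# Illustrative entropy-based scorer: frequent terms spread uniformly across
# documents carry little discriminative information and are stop-word candidates.
import math
import re
from collections import Counter, defaultdict

def candidate_stop_words(documents, top_n=200):
    term_doc_counts = defaultdict(Counter)  # term -> {doc_id: occurrences}
    totals = Counter()
    for doc_id, text in enumerate(documents):
        for token in re.findall(r"[a-z]+", text.lower()):
            term_doc_counts[token][doc_id] += 1
            totals[token] += 1

    max_entropy = math.log2(len(documents))  # entropy of a perfectly uniform spread
    scores = {}
    for term, doc_counts in term_doc_counts.items():
        if totals[term] < 2:                 # illustrative rare-term cut-off
            continue
        probs = [c / totals[term] for c in doc_counts.values()]
        entropy = -sum(p * math.log2(p) for p in probs)
        scores[term] = (entropy / max_entropy) * math.log1p(totals[term])

    return sorted(scores, key=scores.get, reverse=True)[:top_n]

docs = ["how do i override equals in java", "override tostring method java"]
print(candidate_stop_words(docs, top_n=5))  # e.g. ['override', 'java']
```

On a real corpus such as the paper's 10,000 Stack Overflow questions, this kind of scoring tends to surface domain terms (e.g., "java", "code") that are ubiquitous within the domain yet absent from general stop lists.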