Qualitative research gives rich insights into the quintessentially human aspects of software engineering as a socio-technical system. Qualitative research spans diverse strategies and methods, from interpretivist, in situ observational field studies, to deductive coding of data from mining studies. Advances in large language models and generative AI (GenAI) have prompted claims that artificial intelligence could automate qualitative analysis. Such claims overgeneralize from narrow successes. GenAI support must be carefully adapted not only to the data of interest but also to the characteristics of a particular research strategy. In this Frontiers of SE paper, we discuss the emerging use of GenAI in relation to the broad spectrum of qualitative research in software engineering. We outline the dimensions of qualitative work in software engineering, review emerging empirical evidence for GenAI assistance, examine the pros and cons of GenAI-mediated qualitative research practices, and revisit qualitative research quality factors in light of GenAI. Our goal is to inform researchers about the promises and pitfalls of GenAI-assisted qualitative research. We conclude with future plans to advance understanding of its use in software engineering.
Juan M. Murillo, Ignacio García Rodríguez de Guzmán, Enrique Moguel
et al.
The first edition of the QuantumX track, held within the XXIX Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2025), brought together leading Spanish research groups working at the intersection of Quantum Computing and Software Engineering. The event served as a pioneering forum to explore how principles of software quality, governance, testing, orchestration, and abstraction can be adapted to the quantum paradigm. The presented works spanned diverse areas (from quantum service engineering and hybrid architectures to quality models, circuit optimization, and quantum machine learning), reflecting the interdisciplinary nature and growing maturity of Quantum Computing and Quantum Software Engineering. The track also fostered community building and collaboration through the presentation of national and Ibero-American research networks such as RIPAISC and QSpain, and through dedicated networking sessions that encouraged joint initiatives. Beyond reporting on the event, this article provides a structured synthesis of the contributions presented at QuantumX, identifies common research themes and engineering concerns, and outlines a set of open challenges and future directions for the advancement of Quantum Software Engineering. This first QuantumX track established the foundation for a sustained research community and positioned Spain as an emerging contributor to the European and global quantum software ecosystem.
Much of the exotic functionality of prime interest in quantum materials emerges from structural and electronic ground states that can only be accessed at cryogenic temperatures. Understanding device operation therefore requires structural characterization under the same low-temperature conditions at which these functional phases exist, as room-temperature measurements often probe a different structural state. Achieving atomic resolution in scanning transmission electron microscopy (STEM) imaging, and particularly in 4D-STEM electron ptychography, at liquid helium temperature has remained extremely challenging because even small amounts of drift, vibration, and thermal instability associated with the cryogen can disrupt the stringent stability requirements of atomic-resolution STEM. In this work we demonstrate atomic-resolution STEM and multislice electron ptychography at temperatures as low as 20 K using a commercial helium-cooled holder. We find that rapid scans and a multi-stage registration workflow are critical to reducing artifacts associated with cryogenic instability for atomic-resolution imaging, while for ptychography, scan position correction, including compensation for coupling between probe aberrations and position refinement, is necessary for successful reconstructions. Together these results establish a pathway for reliable atomic-resolution STEM and ptychography at low temperature, enabling direct visualization of structural ground states relevant to quantum technology.
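The drift-correction idea described above (rapid scans plus a multi-stage registration workflow) can be illustrated in miniature: register each rapid scan to a reference by maximizing cross-correlation, then average the aligned frames. The sketch below is a toy 1-D, integer-shift version in pure Python, not the workflow used in the paper; all signal values are invented for illustration, and a real workflow would use 2-D sub-pixel registration over many frames.

```python
# Toy drift correction: rigid registration of each frame to a reference
# followed by shift-and-average. 1-D signals, integer shifts only.

def best_shift(reference, frame, max_shift=5):
    """Integer shift of `frame` that maximizes correlation with `reference`."""
    def score(s):
        # Cross-correlation restricted to indices where both signals overlap.
        return sum(reference[i] * frame[i - s]
                   for i in range(len(reference))
                   if 0 <= i - s < len(frame))
    return max(range(-max_shift, max_shift + 1), key=score)

def align_and_average(frames):
    """Register every frame to the first frame, then average the stack."""
    ref = frames[0]
    n = len(ref)
    acc = [0.0] * n
    counts = [0] * n
    for frame in frames:
        s = best_shift(ref, frame)
        for i in range(n):
            j = i - s
            if 0 <= j < len(frame):
                acc[i] += frame[j]
                counts[i] += 1
    # Average only where at least one frame contributed.
    return [a / c if c else 0.0 for a, c in zip(acc, counts)]
```

Averaging many rapid, individually noisy scans after registration trades single-scan dose for stability, which is the same rationale the paper gives for preferring fast scans under cryogenic drift.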
BACKGROUND: Currently, programmed controllers are used as a means of regulating low-temperature installations. Controllers have limited operational memory and require simple mathematical models for their application. The use of an approximate solution to complex systems of equations describing non-stationary operating modes of heat exchangers allows for a significant reduction in the requirements for the operational memory of controllers. AIMS: Obtaining an approximate solution to systems of equations describing non-stationary operating modes of heat exchangers. METHODS: Using the coordinate-based parameter concentration method, a system of partial differential equations with respect to coordinate and time is reduced to a system of ordinary differential equations with respect to time. This system of equations has an analytical solution when the thermophysical properties of the flows and the heat-transfer wall are averaged over the temperature range under consideration, or a solution using the Runge-Kutta method when the dependence of the thermophysical properties on temperature is taken into account. RESULTS: Systems of equations have been obtained that describe non-stationary operating modes of heat exchangers, making it possible to simulate the operation of a low-temperature installation and program the controllers used in this installation. CONCLUSIONS: A new method for obtaining approximate solutions to systems of equations describing the transient operating conditions of heat exchangers is proposed. Using these solutions, one can easily obtain an analytical or simple numerical solution describing the transient operating conditions of a low-temperature system comprising several heat exchangers. The resulting time-dependent temperature profiles for heat exchanger flows can be used in programming the controllers that ensure the safe and efficient operation of processes in low-temperature systems.
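As a schematic illustration of the numerical route described in METHODS, the sketch below lumps a heat exchanger into two time-dependent temperatures (hot and cold side) and integrates the resulting pair of ODEs with a fixed-step fourth-order Runge-Kutta scheme. The model, all parameter values, and the constant-property assumption are illustrative choices, not the system of equations derived in the paper.

```python
# Lumped-parameter ("parameter concentration") transient model of a heat
# exchanger: the PDEs in coordinate and time are replaced by two ODEs in
# time only, then integrated with classic fixed-step RK4.
# All numbers below are assumed for illustration.

def heat_exchanger_rhs(Th, Tc):
    """Time derivatives of the lumped hot- and cold-side temperatures."""
    m_h = m_c = 0.5              # mass flow rates, kg/s (assumed)
    M_h = M_c = 10.0             # fluid mass held in each side, kg (assumed)
    cp = 4180.0                  # specific heat, J/(kg K) (water, constant)
    UA = 2000.0                  # overall conductance, W/K (assumed)
    Th_in, Tc_in = 80.0, 20.0    # inlet temperatures, deg C (assumed)
    dTh = (m_h / M_h) * (Th_in - Th) - UA / (M_h * cp) * (Th - Tc)
    dTc = (m_c / M_c) * (Tc_in - Tc) + UA / (M_c * cp) * (Th - Tc)
    return dTh, dTc

def rk4_step(Th, Tc, dt):
    """One fourth-order Runge-Kutta step for the coupled pair of ODEs."""
    k1 = heat_exchanger_rhs(Th, Tc)
    k2 = heat_exchanger_rhs(Th + dt / 2 * k1[0], Tc + dt / 2 * k1[1])
    k3 = heat_exchanger_rhs(Th + dt / 2 * k2[0], Tc + dt / 2 * k2[1])
    k4 = heat_exchanger_rhs(Th + dt * k3[0], Tc + dt * k3[1])
    Th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    Tc += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return Th, Tc

def simulate(t_end=600.0, dt=1.0):
    """March the transient from start-up to (near) steady state."""
    Th, Tc = 80.0, 20.0          # initial condition: each side at its inlet temp
    t = 0.0
    while t < t_end:
        Th, Tc = rk4_step(Th, Tc, dt)
        t += dt
    return Th, Tc
```

A model of this size easily fits in a controller's limited memory, which is the practical point of reducing the PDE description to a few ODEs.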
Peripheral blood mononuclear cells (PBMCs) are important immune cells. However, traditional slow-freezing methods delay the proliferation of PBMCs and damage T-cell subsets. Therefore, there is an urgent need to develop an alternative cooling procedure that can effectively preserve the viability and function of PBMCs. In this study, we optimized the cryopreservation of PBMCs using ultrasonic ice seeding and analyzed post-cryopreservation T-cell subtypes using flow cytometry. An ultrasonic ice-seeding apparatus was constructed to achieve contactless ice seeding by combining an ultrasonic generating device and a controlled-rate freezer. The results showed that the cooling procedure involving ultrasonic ice seeding exhibited superior efficacy compared to the conventional slow-freezing approach. Following cryopreservation, the viability and cumulative proliferation of PBMCs were 94.97% and 204.47%, respectively. After cryopreservation and thawing, naive T cells (Tn) accounted for up to 18.35%. By incorporating ultrasonic ice seeding, the optimized cryopreservation procedure enhanced the post-thaw viability, cumulative proliferative capacity, and proportion of T-cell subtypes in PBMCs, providing a novel and effective approach for PBMC cryopreservation.
Heating and ventilation. Air conditioning. Low temperature engineering. Cryogenic engineering. Refrigeration
Nitin S. Aher, Umesh Gurnani, Laxmikant S. Dhamande
et al.
This study focuses on reducing energy consumption in vapor compression refrigeration-based ice plants by incorporating nanoparticles into secondary refrigerants and applying an evaporative cooling technique. Experimental evaluations were performed on a test rig using brine and ethylene glycol as secondary refrigerants. Results showed that brine outperformed glycol, reducing ice formation time by 15% when combined with evaporative cooling. Further enhancement was achieved by incorporating Al2O3 nanoparticles into the brine, which, combined with direct evaporative cooling, reduced ice formation time by 20% compared to glycol. The improved system demonstrated significant performance gains, lowering energy costs and enhancing the Coefficient of Performance (COP).
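To make the reported figures concrete, the toy calculation below shows how a 15-20% reduction in ice-formation time translates into compressor energy per batch, together with the textbook definition of COP (useful refrigeration effect divided by compressor work input). All numbers (batch time, compressor power, loads) are hypothetical and only illustrate the arithmetic; they are not measurements from the study.

```python
# Illustrative energy arithmetic for a vapor-compression ice plant.
# Every value here is an assumption chosen for demonstration.

def cop(refrigeration_effect_kw, compressor_power_kw):
    """Coefficient of Performance = useful cooling / work input."""
    return refrigeration_effect_kw / compressor_power_kw

def energy_per_batch(compressor_power_kw, batch_hours):
    """Compressor energy (kWh) consumed to freeze one batch of ice."""
    return compressor_power_kw * batch_hours

baseline_hours = 10.0                      # glycol secondary refrigerant (assumed)
brine_evap_hours = baseline_hours * 0.85   # 15% faster: brine + evaporative cooling
nano_brine_hours = baseline_hours * 0.80   # 20% faster: Al2O3 nanoparticles in brine

power_kw = 5.0                             # assumed constant compressor draw
savings = (energy_per_batch(power_kw, baseline_hours)
           - energy_per_batch(power_kw, nano_brine_hours))   # kWh saved per batch
```

Under the constant-power assumption, shorter freezing time maps directly to proportionally lower energy per batch; in practice evaporative cooling also lowers condensing temperature, which raises COP and compounds the saving.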
Low temperature engineering. Cryogenic engineering. Refrigeration
Energy research software (ERS) is a cornerstone of energy research. However, ERS is developed by researchers who, in many cases, lack formal training in software engineering. This reduces the quality of ERS, leading to limited reproducibility and reusability. To address these issues, we developed ten central recommendations for the development of ERS, covering areas such as conceptualization, development, testing, and publication of ERS. The recommendations are based on the outcomes of two workshops with a diverse group of energy researchers and aim to improve the awareness of research software engineering in the energy domain. The recommendations should enhance the quality of ERS and, therefore, the reproducibility of energy research.
This paper introduces ModeliHub, a Web-based, federated analytics platform designed specifically for model-based systems engineering with Modelica. ModeliHub's key innovation lies in its Modelica-centric, hub-and-spoke federation architecture that provides systems engineers with a Modelica-based, unified system model of repositories containing heterogeneous engineering artifacts. From this unified system model, ModeliHub's Virtual Twin engine provides a real-time, interactive simulation environment for deploying Modelica simulation models that represent digital twins of the virtual prototype of the system under development at a particular iteration of the iterative systems engineering life cycle. The implementation of ModeliHub is centered around its extensible, Modelica compiler frontend developed in Isomorphic TypeScript that can run seamlessly across browser, desktop and server environments. This architecture aims to strike a balance between rigor and agility, enabling seamless integration and analysis across various engineering domains.
Dyslexia is a common learning disorder that primarily impairs an individual's reading and writing abilities. In adults, dyslexia can affect both professional and personal lives, often leading to mental challenges and difficulties acquiring and keeping work. In Software Engineering (SE), reading and writing difficulties appear to pose substantial challenges for core tasks such as programming. However, initial studies indicate that these challenges may not significantly affect the performance of dyslexic software engineers compared to their non-dyslexic colleagues. Conversely, strengths associated with dyslexia could be particularly valuable in areas like programming and design. However, there is currently no work that explores the experiences of dyslexic software engineers and relates their strengths to their difficulties. To address this, we present a qualitative study of the experiences of dyslexic individuals in SE. We followed the basic stage of the Socio-Technical Grounded Theory method and base our findings on data collected through 10 interviews with dyslexic software engineers, 3 blog posts, and 153 posts on the social media platform Reddit. We find that dyslexic software engineers especially struggle at the programming learning stage, but can succeed and indeed excel at many SE tasks once they master this step. Common SE-specific support tools, such as code completion and linters, are especially useful to these individuals and mitigate many of the experienced difficulties. Finally, dyslexic software engineers exhibit strengths in areas such as visual thinking and creativity. Our findings have implications for SE practice and motivate several areas of future research in SE, such as investigating what makes code less/more understandable to dyslexic individuals.
Sayyed Nagulmeera, Nagul Shareef Shaik, G. Minni
et al.
Alzheimer’s disease (AD) is a progressive neurological disorder characterised by cognitive decline and memory loss. Early diagnosis is essential for effective treatment, although the complexity of the initial symptoms sometimes delays it. This review addresses the development of a deep learning-based model to aid in the early diagnosis of Alzheimer’s disease using neuroimaging data. Using MRI and PET scans from public datasets such as the Alzheimer’s Disease Neuroimaging Initiative, the proposed model employs convolutional neural networks (CNNs) to extract features from brain structure and classify patterns associated with early Alzheimer’s disease. Performance is measured using key metrics such as sensitivity, specificity, accuracy, and area under the curve. The purpose is to develop a predictive tool that could help physicians diagnose Alzheimer’s disease earlier and potentially improve patient outcomes through timely intervention.
For a long time, it has been recognized that the software industry has a demand for students who are well grounded in design competencies and who are ready to contribute to a project with little additional training. In response to these industry needs, an engineering design course has been developed for senior-level students enrolled in a software engineering program in Canada. The goals of the course are to provide a realistic design experience, introduce students to industry culture, improve their time management skills, challenge them technically and intellectually, improve their communication skills, raise their level of professionalism, hone their soft skills, and raise awareness of human factors in software engineering. This work discusses the details of how this design course has been developed and delivered, and the learning outcomes that have been obtained.
Requirements specification patterns have received much attention as they promise to guide the structured specification of natural language requirements. By using them, the intention is to reduce quality problems related to requirements artifacts. Patterns may need to vary in their syntax (e.g., incorporation of domain details and parameters) and semantics according to the particularities of the application domain. However, pattern-based approaches, such as EARS, are designed domain-independently to facilitate their wide adoption across several domains. Little is yet known about how to adapt the principal idea of pattern-based requirements engineering to cover domain-specificity in requirements engineering and, ideally, integrate requirements engineering activities into quality assurance tasks. In this paper, we propose the Pattern-based Domain-specific Requirements Engineering Approach for the specification of functional and performance requirements in a holistic manner. This approach emerges from an academia-industry collaboration and is our first attempt to frame an approach that allows for analyzing domain knowledge and incorporating it into the requirements engineering process, enabling automated checks for requirements quality assurance and computer-aided support for system verification. Our contribution is two-fold: First, we present a solution to pattern-based domain-specific requirements engineering and its exemplary integration into quality assurance techniques. Second, we showcase a proof of concept using a tool implementation for the domain of flight controllers for Unmanned Aerial Vehicles. Both shall allow us to outline the next steps in our research agenda and foster discussions in this direction.
Ronnie de Souza Santos, Brody Stuart-Verner, Cleyton de Magalhaes
Technology plays a crucial role in people's lives. However, software engineering discriminates against individuals from underrepresented groups in several ways, whether through algorithms that produce biased outcomes or through the lack of diversity and inclusion in software development environments and academic courses focused on technology. This reality contradicts the history of software engineering, which is filled with outstanding scientists from underrepresented groups who changed the world with their contributions to the field. Ada Lovelace, Alan Turing, and Clarence Ellis are only some of the individuals who made significant breakthroughs in the area and belonged to the populations that are so underrepresented in undergraduate courses and the software industry. Previous research discusses that women, LGBTQIA+ people, and non-white individuals are examples of students who often feel unwelcome and ostracized in software engineering. However, do they know about the remarkable scientists who came before them and who share background similarities with them? Can we use these scientists as role models to motivate these students to continue pursuing a career in software engineering? In this study, we present the preliminary results of a survey with 128 undergraduate students about this topic. Our findings demonstrate that students' knowledge of computer scientists from underrepresented groups is limited. This creates opportunities for investigations into fostering diversity in software engineering courses using strategies that explore computer science's history.
Aditya Shankar Narayanan, Dheeraj Vagavolu, Nancy A Day
et al.
Diversity with respect to ethnicity and gender has been studied in open-source and industrial settings for software development. Publication avenues such as academic conferences and journals contribute to the growing technology industry. However, there have been very few diversity-related studies conducted in the context of academia. In this paper, we study the ethnic, gender, and geographical diversity of the authors published in Software Engineering conferences and journals. We provide a systematic quantitative analysis of the diversity of publications and of the organizing and program committees of three top conferences and two top journals in Software Engineering, which indicates the existence of bias and entry barriers towards authors and committee members belonging to certain ethnicities, genders, and/or geographical locations in Software Engineering conference and journal publications. For our study, we analyse publication data (accepted authors) and committee data (Program and Organizing Committees / Journal Editorial Boards) from the conferences ICSE, FSE, and ASE and the journals IEEE TSE and ACM TOSEM from 2010 to 2022. The analysis of the data shows that, across participants and committee members, some communities are consistently significantly lower in representation, for example, publications from countries in Africa, South America, and Oceania. However, a correlation study between the diversity of the committees and the participants did not yield any conclusive evidence. Furthermore, there is no conclusive evidence that papers with White authors or male authors were more likely to be cited. Finally, we see an improvement in the ethnic diversity of the authors over the years 2010-2022, but not in gender or geographical diversity.
Blockchain technologies for rewards in education are gaining traction as a promising approach to motivate student learning and promote academic achievement. By providing tangible rewards for educational attainment and engagement, such as digital tokens, educators can motivate learners to take a more active role in their learning and increase their sense of ownership and responsibility for their academic outcomes. In this context, this work proposes the Software Engineering Skill (SES) token as a way of rewarding students in order to improve their experiences in Software Engineering Education (SEE). We performed a proof of concept and conclude that the SES token can be deployed on a platform to support SEE.
Software engineering researchers and practitioners have pursued ways to reduce the amount of time and effort required to develop code and increase productivity since the emergence of the discipline. Generative language models are just another step in this journey, but probably not the last one. In this chapter, we propose DAnTE, a Degree of Automation Taxonomy for software Engineering, describing several levels of automation based on the idiosyncrasies of the field. Based on the taxonomy, we evaluated several tools used in the past and in the present for software engineering practices. We then give particular attention to AI-based tools, including generative language models, discussing how they are located within the proposed taxonomy and reasoning about the limitations they currently have. Based on this analysis, we discuss what novel tools could emerge in the middle and long term.
Context: Software development is human-centric and vulnerable to human error. Human errors are errors in the human thought process. To ensure software quality, practitioners must understand how to manage these human errors. Organizations often change the requirements engineering process to prevent human errors from occurring or to mitigate the harm caused when those errors do occur. While there are studies on human error management in other disciplines, research on the prevention and mitigation of human errors in software engineering, and requirements engineering specifically, is limited. The software engineering studies do not provide strong results about the types of changes that are most effective in requirements engineering. Objective: The goal of this paper is to develop a taxonomy of human error prevention and mitigation strategies based on data from requirements engineering professionals. Method: We performed a qualitative analysis of two practitioner surveys on requirements engineering practices to identify and classify strategies for the prevention and mitigation of human errors. Results: We organized the human error management strategies into a taxonomy based on whether they primarily affect People, Processes, or the Environment. Inside each high-level category, we further organized the strategies into low-level classes. More than 50% of the reported strategies require a change in Process, 23% require a change in Environment, 21% require a change in People, with the remaining 5% too ambiguous to classify. In addition, more than 50% of the strategies focus on Management activities. Conclusions: The Human Error Management Taxonomy provides a systematic classification and organization of strategies for the prevention and mitigation of human errors in requirements engineering. This systematic organization provides a foundation upon which research can build.
In this paper, the adoption patterns of Generative Artificial Intelligence (AI) tools within software engineering are investigated. Influencing factors at the individual, technological, and societal levels are analyzed using a mixed-methods approach for an extensive comprehension of AI adoption. Initial structured interviews were conducted with 100 software engineers, employing the Technology Acceptance Model (TAM), the Diffusion of Innovations theory (DOI), and the Social Cognitive Theory (SCT) as guiding theories. A theoretical model named the Human-AI Collaboration and Adaptation Framework (HACAF) was deduced using the Gioia Methodology, characterizing AI adoption in software engineering. This model's validity was subsequently tested through Partial Least Squares - Structural Equation Modeling (PLS-SEM), using data collected from 183 software professionals. The results indicate that the adoption of AI tools in these early integration stages is primarily driven by their compatibility with existing development workflows. This finding counters the traditional theories of technology acceptance. Contrary to expectations, the influence of perceived usefulness, social aspects, and personal innovativeness on adoption appeared to be less significant. This paper yields significant insights for the design of future AI tools and supplies a structure for devising effective strategies for organizational implementation.
This paper introduces prompted software engineering (PSE), which integrates prompt engineering to build effective prompts for language-based AI models, to enhance the software development process. PSE enables the use of AI models in software development to produce high-quality software with fewer resources, automating tedious tasks and allowing developers to focus on more innovative aspects. However, effective prompts are necessary to guide software development in generating accurate, relevant, and useful responses, while mitigating risks of misleading outputs. This paper describes how productive prompts should be built throughout the software development cycle.