As a natural refrigerant, CO<sub>2</sub> shows significant potential in sustainable thermal engineering due to its environmental safety and economic viability. While the transcritical CO<sub>2</sub> cycle demonstrates strong performance in heating, low-temperature applications, and integration with renewable energy sources, its widespread adoption is hindered by key challenges at the application level. These include: high sensitivity of system efficiency to operating conditions, which creates an “efficiency hump” and narrows the optimal operating window; increased component costs and technical challenges for key devices such as multi-channel valves due to high-pressure requirements; and complex system control with limited intelligent solutions currently integrated. Despite these challenges, the transcritical CO<sub>2</sub> cycle holds unique value in enabling synergistic energy conversion. Its ability to efficiently match and cascade different energy grades makes it particularly suitable for data center cooling, industrial combined cooling and heating, and solar–thermal hybrid systems, positioning it as an indispensable technology in future low-carbon energy systems. To fully realize its potential, development efforts must focus on high-value applications and key technological breakthroughs. Priority should be given to demonstrating its use in fields where it holds a distinct advantage, such as low-temperature refrigeration and high-temperature industrial heat pumps, to establish commercially viable models. Concurrently, core technologies—including adaptive intelligent control algorithms, high-efficiency expanders, and cost-effective pressure-resistant components—must be advanced. Supportive policies, encompassing energy efficiency standards, safety regulations, and fiscal incentives, will be essential to facilitate the transition from demonstration projects to widespread industrial adoption.
Pradeep Raja C, G. Sridevi, Suman Pandipati et al.
Additive manufacturing using fused deposition modelling (FDM) has emerged as a versatile and resource-efficient route for producing complex polymer and composite structures. However, the quality and sustainability of FDM-printed components are strongly governed by process parameters, nozzle design, and post-processing methods. This review provides a systematic analysis of these factors and their combined influence on mechanical integrity, surface finish, and dimensional accuracy. The study highlights how optimized layer thickness, build orientation, and extrusion temperature enhance interlayer adhesion and structural performance, while advanced nozzle geometries improve melt flow and minimize material waste. Post-processing techniques such as annealing, chemical smoothing, and surface finishing are evaluated for their roles in extending product life cycles and enabling recycled or bio-based polymer feedstocks. By linking process optimization to energy efficiency and material utilization, this review positions FDM as a pathway for sustainable, waste-to-value additive manufacturing. The insights presented support the development of eco-efficient design frameworks for next-generation polymer and composite processing within circular engineering systems.
This chapter serves as an introduction to systems engineering focused on the broad issues surrounding realizing complex integrated systems. What is a system? We pose a number of possible definitions and perspectives, but leave open the opportunity to consider the system from the target context where it will be used. Once we have a system in mind, we acknowledge the fact that this system needs to integrate a variety of pieces, components, and subsystems in order for it to accomplish its task. Therefore, we concern ourselves with the boundaries and interfaces of different technologies and disciplines to determine how best to achieve that integration. Next we raise the specter that this integrated system is complex. Complexity can be defined in a number of ways. For one, the sheer number of subsystems or components can be a measure of complexity. We could also consider the functions being performed by the system and how those functions interact with one another. Further, we could consider computational aspects such as the time or memory that may be needed to accomplish one or more tasks. The extent to which new behaviors might emerge from the system can also be regarded as an element of complexity. In the end, complexity is that characteristic of a system that defines the associated challenges along the life of the system, so we are concerned with how to manage that complexity. Finally, realization refers to the process by which our complex integrated system moves from concept to deployment and subsequent support. It refers to the entire design, development, manufacture, deployment, operation, and support life cycle. Of particular note here, however, is that we focus on systems that, by their very nature, are complex. In other words, we are interested in large, complicated, interacting beasts that are intended to perform difficult tasks and meet a wide variety of end-user needs.
Our research explores the development and application of musical agents, human-in-the-loop generative AI systems designed to support music performance and improvisation within co-creative spaces. We introduce MACAT and MACataRT, two distinct musical agent systems crafted to enhance interactive music-making between human musicians and AI. MACAT is optimized for agent-led performance, employing real-time synthesis and self-listening to shape its output autonomously, while MACataRT provides a flexible environment for collaborative improvisation through audio mosaicing and sequence-based learning. Both systems emphasize training on personalized, small datasets, fostering ethical and transparent AI engagement that respects artistic integrity. This research highlights how interactive, artist-centred generative AI can expand creative possibilities, empowering musicians to explore new forms of artistic expression in real-time, performance-driven music improvisation contexts.
Hashini Gunatilake, John Grundy, Rashina Hoda et al.
Empathy plays a critical role in software engineering (SE), influencing collaboration, communication, and user-centred design. Although SE research has increasingly recognised empathy as a key human aspect, there remains no validated instrument specifically designed to measure it within the unique socio-technical contexts of SE. Existing generic empathy scales, while well-established in psychology and healthcare, often rely on language, scenarios, and assumptions that are not meaningful or interpretable for software practitioners. These scales fail to account for the diverse, role-specific, and domain-bound expressions of empathy in SE, such as understanding a non-technical user's frustrations or another practitioner's technical constraints, which differ substantially from empathy in clinical or everyday contexts. To address this gap, we developed and validated two domain-specific empathy scales: EmpathiSEr-P, assessing empathy among practitioners, and EmpathiSEr-U, capturing practitioner empathy towards users. Grounded in a practitioner-informed conceptual framework, the scales encompass three dimensions of empathy: cognitive empathy, affective empathy, and empathic responses. We followed a rigorous, multi-phase methodology, including expert evaluation, cognitive interviews, and two practitioner surveys. The resulting instruments represent the first psychometrically validated empathy scales tailored to SE, offering researchers and practitioners a tool for assessing empathy and designing empathy-enhancing interventions in software teams and user interactions.
This short paper explores how a maritime company develops and integrates large language models (LLMs), specifically by examining the requirements engineering for Retrieval-Augmented Generation (RAG) systems in expert settings. Through a case study at a maritime service provider, we demonstrate how data scientists face a fundamental tension between user expectations of AI perfection and the correctness of the generated outputs. Our findings reveal that data scientists must identify context-specific "retrieval requirements" through iterative experimentation together with users, because users are the ones who can determine correctness. We present an empirical process model describing how data scientists practically elicited these "retrieval requirements" and managed system limitations. This work advances software engineering knowledge by providing insights into the specialized requirements engineering processes for implementing RAG systems in complex domain-specific applications.
The automotive industry generates vast amounts of data from sensors, telemetry, diagnostics, and real-time operations. Efficient data engineering is critical to handle challenges of latency, scalability, and consistency. Modern data lakehouse formats (Delta Parquet, Apache Iceberg, and Apache Hudi) offer features such as ACID transactions, schema enforcement, and real-time ingestion, combining the strengths of data lakes and warehouses to support complex use cases. This study presents a comparative analysis of Delta Parquet, Iceberg, and Hudi using real-world time-series automotive telemetry data with fields such as vehicle ID, timestamp, location, and event metrics. The evaluation considers modeling strategies, partitioning, CDC support, query performance, scalability, data consistency, and ecosystem maturity. Key findings show Delta Parquet provides strong ML readiness and governance, Iceberg delivers high performance for batch analytics and cloud-native workloads, while Hudi is optimized for real-time ingestion and incremental processing. Each format exhibits tradeoffs in query efficiency, time-travel, and update semantics. The study offers insights for selecting or combining formats to support fleet management, predictive maintenance, and route optimization. Using structured datasets and realistic queries, the results provide practical guidance for scaling data pipelines and integrating machine learning models in automotive applications.
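As a library-free illustration of the time-travel semantics all three formats share (each commit produces an immutable snapshot recorded in a transaction log, readable as of an earlier version), the following toy class is a sketch only; the class and method names are hypothetical and not drawn from any of the three APIs:

```python
from copy import deepcopy

class VersionedTable:
    """Toy illustration of snapshot-based time travel, the mechanism
    lakehouse table formats implement via transaction/metadata logs."""
    def __init__(self):
        self._snapshots = [[]]  # version 0 is the empty table

    def append(self, rows):
        # Each commit stores a full immutable snapshot (real formats
        # store deltas/manifests; the read semantics are the same).
        new = deepcopy(self._snapshots[-1]) + list(rows)
        self._snapshots.append(new)
        return len(self._snapshots) - 1  # committed version id

    def read(self, as_of=None):
        version = len(self._snapshots) - 1 if as_of is None else as_of
        return self._snapshots[version]

table = VersionedTable()
v1 = table.append([{"vehicle_id": "V1", "speed_kph": 62}])
v2 = table.append([{"vehicle_id": "V2", "speed_kph": 75}])
print(len(table.read()))          # latest snapshot holds both rows
print(len(table.read(as_of=v1)))  # time travel back to version 1
```

In the real systems, the same idea underlies queries such as reading a table "as of" a version or timestamp, which is what enables reproducible ML training sets and auditability.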
This study explores the application of chaos engineering to enhance the robustness of Large Language Model-Based Multi-Agent Systems (LLM-MAS) in production-like environments under real-world conditions. LLM-MAS can potentially improve a wide range of tasks, from answering questions and generating content to automating customer support and improving decision-making processes. However, LLM-MAS in production or preproduction environments can be vulnerable to emergent errors or disruptions, such as hallucinations, agent failures, and agent communication failures. This study proposes a chaos engineering framework to proactively identify such vulnerabilities in LLM-MAS, assess and build resilience against them, and ensure reliable performance in critical applications.
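A minimal sketch of the chaos-engineering idea applied to agent communication: a hypothetical fault injector randomly drops messages between agents, and a retry-with-fallback pattern is exercised to check for graceful degradation. All names, rates, and behaviors are illustrative, not part of the proposed framework:

```python
import random

def flaky_channel(send, drop_rate=0.5, rng=None):
    """Chaos-style fault injector: wraps an agent-to-agent send
    function and randomly drops messages to probe resilience."""
    rng = rng or random.Random(0)  # seeded for reproducible chaos
    def wrapped(msg):
        if rng.random() < drop_rate:
            return None  # simulated communication failure
        return send(msg)
    return wrapped

def send_with_retry(channel, msg, retries=3):
    """Resilience pattern under test: retry, then degrade gracefully."""
    for _ in range(retries):
        reply = channel(msg)
        if reply is not None:
            return reply
    return "FALLBACK"  # controlled degradation instead of a crash

echo_agent = lambda msg: f"ack: {msg}"       # stand-in for an LLM agent
chaotic = flaky_channel(echo_agent)
replies = [send_with_retry(chaotic, "status?") for _ in range(20)]
print(all(r.startswith("ack") or r == "FALLBACK" for r in replies))
```

The experiment succeeds if every interaction ends in either a valid reply or an explicit fallback, never an unhandled failure, which is the kind of invariant a chaos experiment on an LLM-MAS would assert.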
A computational methodology is delineated to examine fluid concentration, velocity, and temperature as flow characteristics on the nanofluid boundary layer flowing through a porous stretchable subsurface along with permeable media and activation energy. An analysis of the movement, magnetohydrodynamics, and activation energy within the boundary layer is elucidated. Ordinary differential equations are obtained from the fundamental governing flow equations by employing similarity transformations. The resultant system of equations was solved numerically using the MATLAB bvp4c computational package. The computational outcomes are illustrated for dimensionless parameters with respect to the characteristics of fluid flow, concentration, and temperature. The computational findings indicate that an increase in the porosity and magnetic field parameters reduces the flow, while simultaneously enhancing temperature and concentration in this domain. The inclusion of nanofluid results in an augmentation of the thermal conductivity of the fluid flow. The conclusions drawn corroborate a notable concordance with prior research documented in the extant literature. The computational results and insights derived are extensively utilized in various engineering processes such as polymer drawing and extrusion, casting, hot rolling, and metal cooling. The incorporation of magnetic nanoparticles enhances the efficiency of thermal and mass transfer, thereby facilitating precision in oncological treatments, wastewater remediation, and processes that promote energy efficiency. The current investigation has broad implications across various sectors. These include optimizing energy efficiency in industrial processes, developing advanced cooling solutions, energy sectors, and renewable energy systems, and creating novel materials with tailored properties.
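The abstract's governing equations are not reproduced here, but the numerical approach can be illustrated with the classical Blasius boundary-layer equation f''' + (1/2) f f'' = 0, a stand-in for a similarity-transformed flow ODE, solved with a shooting method (an RK4-plus-bisection analogue of the collocation solve that bvp4c performs):

```python
def blasius_rhs(y):
    # State y = (f, f', f''); Blasius: f''' = -0.5 * f * f''
    f, fp, fpp = y
    return (fp, fpp, -0.5 * f * fpp)

def shoot(s, eta_max=10.0, n=2000):
    """RK4 march from eta=0 with f(0)=0, f'(0)=0, f''(0)=s;
    returns f'(eta_max), which should approach 1 at the far boundary."""
    h = eta_max / n
    y = (0.0, 0.0, s)
    for _ in range(n):
        k1 = blasius_rhs(y)
        k2 = blasius_rhs(tuple(y[i] + 0.5*h*k1[i] for i in range(3)))
        k3 = blasius_rhs(tuple(y[i] + 0.5*h*k2[i] for i in range(3)))
        k4 = blasius_rhs(tuple(y[i] + h*k3[i] for i in range(3)))
        y = tuple(y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                  for i in range(3))
    return y[1]

# Bisect on the unknown wall shear f''(0) until f'(eta_max) = 1.
lo, hi = 0.1, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(round(0.5 * (lo + hi), 4))  # classical Blasius value ~ 0.3321
```

bvp4c itself uses a collocation scheme with automatic mesh refinement rather than shooting, but both reduce the transformed boundary-value problem to a root-finding task on the unknown wall conditions.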
Microbial genome editing is crucial for studying and optimizing enzyme functions, yet multiplex chromosomal editing remains challenging. In this study, we employed two Cas9 nickase (Cas9n) mutants, carrying D10A or H840A mutations, and systematically compared their editing efficiencies with wild-type (WT) Cas9 in Erwinia billingiae QL-Z3. While suicide plasmid-mediated knockout showed 1–4 % efficiency and WT Cas9 achieved 35–50 %, both Cas9n systems reached 100 % efficiency for single-gene deletions (0.6–7.4 kb) and insertions (0.7 kb). The dual-gene mutation approach maintained 100 % efficiency, and in triple-gene edits, pCas9n-H840A reached 75 % editing efficiency, whereas pCas9n-D10A consistently achieved 100 % efficiency and stability across knockouts (0.6–25 kb) in diverse bacteria. We further applied this system to delete three ligninolytic genes (EDYP_48, ELAC_205, ESOD_1236), revealing that their disruption significantly reduced enzyme activity involved in the bioconversion of alkaline lignin. Further studies revealed that multiple gene deletions of EDYP_48, ELAC_205 inhibited ferulic acid consumption, while vanillic acid and protocatechuic acid accumulation suggested synergistic interactions among these enzymes and pathway components. Overall, the pCas9n-D10A mediated multiple gene editing emerged as an efficient and streamlined genome engineering strategy to reveal metabolic pathways, poised to accelerate the metabolic engineering for strain modification and cell factory construction in many bacteria.
Jinqi Luo, Tianjiao Ding, Kwan Ho Ryan Chan et al.
Large Language Models (LLMs) are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. Alignment methods are designed to reduce such undesirable outputs via techniques such as fine-tuning, prompt engineering, and representation engineering. However, existing methods face several challenges: some require costly fine-tuning for every alignment task; some do not adequately remove undesirable concepts, failing alignment; some remove benign concepts, lowering the linguistic capabilities of LLMs. To address these issues, we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the activation space, in which each atom corresponds to a semantic concept. Given any alignment task, we instruct a concept partitioner to efficiently annotate the concepts as benign or undesirable. Then, at inference time, we decompose the LLM activations along the concept dictionary via sparse coding, to accurately represent the activations as linear combinations of benign and undesirable components. By removing the latter ones from the activations, we reorient the behavior of the LLM towards the alignment goal. We conduct experiments on tasks such as response detoxification, faithfulness enhancement, and sentiment revising, and show that PaCE achieves state-of-the-art alignment performance while maintaining linguistic capabilities.
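PaCE performs sparse coding over a large, overcomplete concept dictionary; the exactly-determined two-atom toy below only illustrates the core step of decomposing an activation over benign and undesirable atoms and rebuilding it from the benign part. All vectors are invented for illustration:

```python
# Two-dimensional toy "concept dictionary": one benign atom and one
# undesirable atom (real dictionaries are large and overcomplete,
# requiring sparse coding rather than an exact solve).
benign = (1.0, 0.0)
undesirable = (0.6, 0.8)

def decompose(x):
    # Solve x = a*benign + b*undesirable via Cramer's rule (2x2 system).
    det = benign[0]*undesirable[1] - benign[1]*undesirable[0]
    a = (x[0]*undesirable[1] - x[1]*undesirable[0]) / det
    b = (benign[0]*x[1] - benign[1]*x[0]) / det
    return a, b

activation = (2.2, 1.6)            # mixes both concepts
a, b = decompose(activation)
cleaned = (a*benign[0], a*benign[1])  # drop the undesirable component
print(a, b, cleaned)
```

The cleaned vector retains exactly the benign contribution, which is the sense in which removing undesirable components "reorients" the activation without discarding benign content.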
Pressure and residual chlorine concentration are among the key parameters in urban water distribution networks that require continuous monitoring and control. These networks must ensure that consumer water demands are met with adequate pressure while optimizing water quality parameters, such as residual chlorine concentration, to maximize service satisfaction. In this study, the Najaf Abad urban water distribution network was selected as a real large-scale case study. A simultaneous optimization model was developed to determine nodal average pressure, residual chlorine concentration, and network combined reliability. The resulting three-objective optimization problem was solved using the NSGA-II multi-objective optimization algorithm under two extreme water consumption scenarios: maximum and minimum water withdrawal during warm and cold seasons. A Pressure-Driven Analysis approach was employed to calculate network parameters, and the optimal solution was selected from the Pareto front using the TOPSIS method. The network under study includes four operational pressure-reducing valves; after determining their optimal set pressure values, the average network pressure was reduced by 2.9% during warm days and 13.5% during cold days. The average residual chlorine concentration did not undergo significant changes; however, its further reduction was prevented through optimization, effectively achieving this objective as well. Lastly, the combined reliability increased by 1.7% and 1.3% for warm and cold days, respectively.
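The TOPSIS step can be sketched as follows: each Pareto solution is scored by its closeness to the ideal point and its distance from the anti-ideal point across the criteria. The candidate matrix and weights below are hypothetical, not the Najaf Abad results:

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS: rank alternatives by relative closeness to the
    ideal solution. benefit[j] is True when criterion j is maximized
    (e.g. reliability), False when minimized (e.g. pressure)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion, then apply weights.
    norms = [math.sqrt(sum(row[j]**2 for row in matrix)) for j in range(n)]
    v = [[weights[j]*matrix[i][j]/norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((row[j]-ideal[j])**2 for j in range(n)))
        d_neg = math.sqrt(sum((row[j]-anti[j])**2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return max(range(m), key=lambda i: scores[i])

# Hypothetical Pareto solutions:
# [avg pressure (minimize), chlorine deficit (minimize), reliability (maximize)]
pareto = [[32.0, 0.12, 0.95], [28.5, 0.15, 0.93], [30.0, 0.10, 0.96]]
best = topsis(pareto, weights=[1/3]*3, benefit=[False, False, True])
print(best)
```

With equal weights, the third solution wins here because it is best on two of the three criteria and close to best on the third.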
Accurate extraction of polarization resistance is crucial in the application of proton exchange membrane fuel cells. It is generally assumed that the steady-state resistance obtained from the polarization curve model is equivalent to the AC impedance obtained from the electrochemical impedance spectroscopy (EIS) when the frequency approaches zero. However, due to the low-frequency stability and nonlinearity issues of the EIS method, this dynamic process leads to an additional rise in polarization resistance compared to the steady-state method. In this paper, a semi-empirical model and equivalent circuit models are developed to extract the steady-state and dynamic polarization resistances, respectively, while a static internal resistance correction method is proposed to represent the systematic error between the two. With the correction, the root mean square error of the steady-state resistance relative to the dynamic polarization resistance decreases from 26.12% to 7.42%, indicating that the weighted sum of the static internal resistance and the steady-state resistance can better correspond to the dynamic polarization resistance. The correction method can also simplify the EIS procedure by directly generating an estimate of the dynamic polarization resistance in the full current interval.
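A sketch of the correction idea, assuming hypothetical resistance data: the weight on the static internal resistance is fitted by one-parameter least squares so that the corrected steady-state resistance tracks the dynamic polarization resistance from EIS:

```python
import math

# Hypothetical data (ohm*cm^2): steady-state resistance from a
# polarization-curve model, static internal (ohmic) resistance, and
# dynamic polarization resistance from EIS, at five current points.
R_ss  = [0.42, 0.35, 0.30, 0.27, 0.25]
R_ohm = [0.10, 0.10, 0.09, 0.09, 0.08]
R_dyn = [0.50, 0.43, 0.37, 0.34, 0.31]

# One-parameter least squares for the correction weight w in
# R_dyn ~ R_ss + w * R_ohm (closed-form normal-equation solution).
w = sum(o*(d - s) for s, o, d in zip(R_ss, R_ohm, R_dyn)) \
    / sum(o*o for o in R_ohm)

def rmse_pct(pred, true):
    # Relative root mean square error, in percent.
    return 100*math.sqrt(sum(((p-t)/t)**2
                             for p, t in zip(pred, true)) / len(true))

before = rmse_pct(R_ss, R_dyn)
after = rmse_pct([s + w*o for s, o in zip(R_ss, R_ohm)], R_dyn)
print(round(w, 3), round(before, 1), round(after, 1))
```

The paper's reported improvement (26.12% to 7.42% RMSE) comes from its own measured data and model; the numbers above merely show the shape of the calculation.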
Howard Kleinwaks, Ann Batchelor, Thomas H. Bradley
The metaphor of “technical debt” is used in software engineering to describe technical solutions that may be pragmatic in the near‐term but may have a negative long‐term impact. Similar decisions and similar dynamics are present in the field of systems engineering. This work investigates the current body of knowledge to identify if, and how, the technical debt metaphor is used within the systems engineering field and which systems engineering lifecycle stages are most susceptible to technical debt. A systematic literature review was conducted on 354 papers in February 2022, of which 18 were deemed relevant for inclusion in the study. The results of the systematic literature review show that the technical debt metaphor is not prevalent within systems engineering research and that existing research is limited to specific fields and theoretical discussions. This paper concludes with recommendations for future work to establish a research agenda on the identification and management of technical debt within systems engineering.
Rudrajit Choudhuri, Dylan Liu, Igor Steinmacher et al.
Conversational Generative AI (convo-genAI) is revolutionizing Software Engineering (SE) as engineers and academics embrace this technology in their work. However, there is a gap in understanding the current potential and pitfalls of this technology, specifically in supporting students in SE tasks. In this work, we evaluate through a between-subjects study (N=22) the effectiveness of ChatGPT, a convo-genAI platform, in assisting students in SE tasks. Our study did not find statistical differences in participants' productivity or self-efficacy when using ChatGPT as compared to traditional resources, but we found significantly increased frustration levels. Our study also revealed 5 distinct faults arising from violations of Human-AI interaction guidelines, which led to 7 different (negative) consequences on participants.
Rebeca C. Motta, Káthia M. de Oliveira, Guilherme H. Travassos
Context: The Internet of Things (IoT) has brought expectations for software inclusion in everyday objects. However, it has challenges and requires multidisciplinary technical knowledge involving different areas that should be combined to enable IoT software systems engineering. Goal: To present an evidence-based roadmap for IoT development to support developers in specifying, designing, and implementing IoT systems. Method: An iterative approach based on experimental studies to acquire evidence to define the IoT Roadmap. Next, the Systems Engineering Body of Knowledge life cycle was used to organize the roadmap and set temporal dimensions for IoT software systems engineering. Results: The studies revealed seven IoT Facets influencing IoT development. The IoT Roadmap comprises 117 items organized into 29 categories representing different concerns for each Facet. In addition, an experimental study was conducted observing a real case of a healthcare IoT project, indicating the roadmap applicability. Conclusions: The IoT Roadmap can be a feasible instrument to assist IoT software systems engineering because it can (a) support researchers and practitioners in understanding and characterizing the IoT and (b) provide a checklist to identify the applicable recommendations for engineering IoT software systems.
There has been some controversy over the use of radiobiological models when modeling the dose-response curves of ionizing radiation (IR)-induced chromosome aberration and tumor prevalence, as those curves usually show obvious non-targeted effects (NTEs) at low doses of high linear energy transfer (LET) radiation. A limited understanding of the contribution of NTEs to IR-induced carcinogenesis can lead to marked deviations in relative biological effectiveness (RBE) estimates of carcinogenic potential, which are widely used in radiation risk assessment and radiation protection. In this work, based on the initial pattern of two classes of IR-induced DNA double-strand breaks (DSBs) clustering in chromatin domains and the subsequent incorrect repair processes, we proposed a novel radiobiological model to describe the dose-response curves of two carcinogenesis-related endpoints within the same theoretical framework. Representative experimental data were used to verify the consistency and validity of the present model. The fitting results indicated that, compared with targeted effect (TE) and NTE models, the current model better fits the experimental data of chromosome aberration and tumor prevalence induced by multiple types of IR with different LETs. Notably, the present model without introducing an NTE term was adequate to describe the dose-response curves of IR-induced chromosome aberration and tumor prevalence with NTEs in low-dose regions. Based on the fitting parameters, the LET-dependent RBE values were calculated for three given low doses. Our results showed that the RBE values predicted by the current model gradually decrease with the increase of doses for the endpoints of chromosome aberration and tumor prevalence. In addition, the calculated RBE was also compared with those evaluated from other models.
These analyses show that the proposed model can serve as an alternative tool for accurately describing dose-response curves of multiple carcinogenesis-related endpoints and for effectively estimating RBE in low-dose regions.
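The model itself is not reproduced in the abstract; for orientation, the conventional targeted-effect (linear-quadratic) dose response and the iso-effect definition of RBE, both standard forms rather than this paper's formulation, are:

```latex
% Conventional linear-quadratic (targeted-effect) dose response:
E(D) = \alpha D + \beta D^{2}
% Iso-effect definition of relative biological effectiveness:
\mathrm{RBE} = \frac{D_{\text{ref}}}{D_{\text{test}}}
\quad \text{(doses of reference and test radiation giving equal effect } E\text{)}
```

NTE models typically add a dose-independent (or saturating) term to E(D) at low doses, which is the behavior the proposed model captures without an explicit NTE term.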
Stoica Dorel, Mohammed Gmal Osman, Cristian-Valentin Strejoiu et al.
This paper presents a comparative analysis of battery charging strategies for off-grid solar PV systems: constant voltage charging, constant current charging, PWM charging, and hybrid charging. The performance of each strategy is evaluated based on factors such as battery capacity, cycle life, depth of discharge (DOD), and charging efficiency, as well as the impact of environmental conditions such as temperature and sunlight. The results show that each charging strategy has its advantages and limitations; the optimal approach depends on the specific requirements and constraints of the off-grid solar PV system, and a careful analysis of the factors that affect performance is necessary to identify the most appropriate choice. The main needs of off-grid solar photovoltaic systems include efficient energy storage, reliable battery charging strategies, environmental adaptability, cost-effectiveness, and user-friendly operation. The primary limitations affecting these systems encompass intermittent energy supply, battery degradation, environmental variability, initial investment costs, fluctuations in energy demand, and maintenance challenges, emphasizing the importance of careful strategy selection and system design. These insights can inform the design and optimization of off-grid solar PV systems, improving their efficiency, reliability, and cost-effectiveness.
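A toy simulation of the hybrid constant-current/constant-voltage profile discussed above, with an exponential current taper standing in for the voltage-limited phase; every number (capacity, currents, thresholds) is illustrative:

```python
def cc_cv_charge(capacity_ah=100.0, i_cc=20.0, i_cutoff=2.0, dt_h=0.01):
    """Toy CC-CV charge: constant current until ~80% state of charge
    (a proxy for hitting the voltage limit), then a taper stage where
    current falls off as the battery approaches full charge."""
    soc = 0.2          # start at 20% state of charge
    current = i_cc
    hours = 0.0
    while current > i_cutoff:
        if soc < 0.8:
            current = i_cc                              # CC stage
        else:
            current = max(i_cc * (1.0 - soc) / 0.2, 0.0)  # taper stage
        soc = min(soc + current * dt_h / capacity_ah, 1.0)
        hours += dt_h
    return soc, hours

soc, hours = cc_cv_charge()
print(round(soc, 2), round(hours, 1))
```

The taper is what distinguishes the hybrid strategy from pure constant-current charging: it trades a longer tail for reduced stress near full charge, one of the cycle-life factors the comparison weighs.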