The adoption of Generative AI (GenAI) suggests major changes for software engineering, affecting not only technical aspects but also the human aspects of the professionals involved. One of these aspects is how individuals perceive themselves in relation to their work, i.e., their work identity, and the processes they follow to form, adapt, and reject these identities, i.e., identity work. Existing studies provide evidence of such identity work among software professionals triggered by the adoption of GenAI; however, they do not consider differences among diverse roles, such as developers and testers. In this paper, we argue that role must be considered a factor shaping the identity work of software professionals. To support our claim, we review studies on different roles as well as recent studies on adopting GenAI in software engineering. We then propose a research agenda to better understand how role influences the identity work of software professionals triggered by the adoption of GenAI and, based on that, to propose new artifacts to support this adoption. We also discuss the potential implications for practice of the results to be obtained.
This paper presents a low-cost, fully on-premise Edge Artificial Intelligence (AI) system designed to support real-time pest and disease detection in open-field chili pepper cultivation. The proposed architecture integrates AI-Thinker ESP32-CAM module (ESP32-CAM) image acquisition nodes (“Sticks”) with a Raspberry Pi 5–based edge server (“Module”), forming a plug-and-play Internet of Things (IoT) pipeline that enables autonomous operation upon simple power-up, making it suitable for aging farmers and resource-limited environments. A Leaf-First 2-Stage vision model was developed by combining YOLOv8n-based leaf detection with a lightweight ResNet-18 classifier to improve the diagnostic accuracy for small lesions commonly occurring in dense pepper foliage. To address network instability, which is a major challenge in open-field agriculture, the system adopted a dual-protocol communication design using Hyper Text Transfer Protocol (HTTP) for Joint Photographic Experts Group (JPEG) transmission and Message Queuing Telemetry Transport (MQTT) for event-driven feedback, enhanced by Redis-based asynchronous buffering and state recovery. Deployment-oriented experiments under controlled conditions demonstrated an average end-to-end latency of 0.86 s from image capture to Light Emitting Diode (LED) alert, validating the system’s suitability for real-time decision support in crop management. Compared to heavier models (e.g., YOLOv11 and ResNet-50), the lightweight architecture reduced the computational cost by more than 60%, with minimal loss in detection accuracy. This study highlights the practical feasibility of resource-constrained Edge AI systems for open-field smart farming by emphasizing system-level integration, robustness, and real-time operability, and provides a deployment-oriented framework for future extension to other crops.
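The Redis-based asynchronous buffering described above can be illustrated with a minimal, dependency-free sketch: events that cannot be delivered are queued locally and flushed in arrival order once the link recovers. The `EventBuffer` and `FlakyLink` classes below are illustrative stand-ins, not the paper's actual Redis/MQTT implementation.

```python
from collections import deque

class EventBuffer:
    """Store-and-forward buffer: events are queued locally and flushed
    to the link when connectivity returns (illustrative stand-in for
    the paper's Redis-backed asynchronous buffering)."""

    def __init__(self, link):
        self.link = link          # object with .send(event) -> bool
        self.pending = deque()    # events awaiting delivery

    def publish(self, event):
        # Always enqueue first so nothing is lost if the send fails.
        self.pending.append(event)
        self.flush()

    def flush(self):
        # Deliver in arrival order; stop at the first failure so
        # ordering is preserved for the next recovery attempt.
        while self.pending:
            if not self.link.send(self.pending[0]):
                break
            self.pending.popleft()

class FlakyLink:
    """Hypothetical link that is down until `up` is set True."""
    def __init__(self):
        self.up = False
        self.delivered = []
    def send(self, event):
        if self.up:
            self.delivered.append(event)
            return True
        return False
```

During an outage the Sticks' detections accumulate in `pending`; on reconnection a single `flush()` drains them in order, which is the state-recovery behavior the dual-protocol design relies on.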
The transition to renewable energy sources is accelerating due to growing concerns about the harm that fossil fuels cause to the environment. Solar energy stands out among them because of its availability, affordability, and minimal environmental impact. However, conventional photovoltaic (PV) systems suffer efficiency reductions at high operating temperatures. This limitation has increased interest in hybrid photovoltaic/thermal (PVT) systems, which improve PV performance while producing thermal and electrical energy simultaneously. This study provides an extensive overview of recent advancements in PVT technologies, focusing on system configurations, innovative cooling strategies, and thermal storage materials. Studies published since 2021, including experimental, numerical, and simulation-based works, are examined and classified by climatic adaptability, working fluid, and application. The analysis concludes that, compared to conventional PV, some PVT configurations achieve total efficiencies of up to 76%, with numerical models showing electrical gains of 3–5% when validated against experimental data. System performance and application versatility are further improved with the addition of nanofluids (NFs), phase change materials (PCMs), and thermoelectric generators (TEGs). To facilitate the design and implementation of PVT systems in a variety of settings, this document provides researchers and practitioners with an updated roadmap.
Abstract Robust and trusted digital human representations are necessary to successfully account for human considerations in model-based systems engineering (MBSE). Multiple domains and modeling frameworks leverage verification, validation, and accreditation (VV&A) processes to characterize when and under what conditions a model is valid, thereby establishing credibility. A literature review was completed on mathematical, physics-based, software development, discrete event simulation, agent-based, system dynamics, and MBSE models with the goal of proposing a process for performing VV&A on digital engineering (DE) and MBSE models for sociotechnical systems. However, this research also revealed the need for a broader framework to characterize the risk associated with using these models for making high-consequence decisions. While conducting the literature review, another approach to building credibility was identified that is used heavily in the financial industry: model risk management (MRM). We extend this process by leveraging MRM approaches from the financial community to propose a framework with which sociotechnical model users can characterize the risk of using MBSE models to make programmatic decisions. The primary contribution of this work is a meta-analysis of model VV&A together with an alternative approach, discovered during that analysis, for characterizing and communicating credibility. This approach could be a viable option for ensuring the credibility of human systems integration in MBSE models.
With the advent of large language models (LLMs) in the artificial intelligence (AI) area, the field of software engineering (SE) has also witnessed a paradigm shift. These models, by leveraging the power of deep learning and massive amounts of data, have demonstrated an unprecedented capacity to understand, generate, and operate on programming languages. They can assist developers across a broad spectrum of software development activities, encompassing software design, automated programming, and maintenance, potentially saving substantial human effort. Integrating LLMs into the SE landscape (LLM4SE) has become a burgeoning trend, necessitating exploration of this emergent landscape's challenges and opportunities. This paper aims to revisit the software development life cycle (SDLC) under LLMs and to highlight challenges and opportunities of the new paradigm. The paper first summarizes the overall process of LLM4SE and then elaborates on the current challenges based on a thorough discussion held among more than 20 participants from academia and industry, specializing in fields such as software engineering and artificial intelligence. Specifically, we identify 26 key challenges across seven aspects: software requirements & design, coding assistance, testing code generation, code review, code maintenance, software vulnerability management, and data, training, and evaluation. We hope these challenges will inform future research in the LLM4SE field.
Global Navigation Satellite Systems (GNSS) aided Inertial Navigation Systems (INS) are a fundamental approach for attaining continuously available absolute vehicle position and full state estimates at high bandwidth. For transportation applications, stated accuracy specifications must be achieved, or the navigation system must detect when they are violated. In urban environments, GNSS measurements are susceptible to outliers, which motivates the important problem of accommodating outliers while either achieving a performance specification or communicating that it is not feasible. The Risk-Averse Performance-Specified (RAPS) approach is designed to select measurements optimally to address this problem. Existing RAPS approaches lack a method applicable to carrier phase measurements, which offer measurement errors at the centimeter level but come with the challenge of being biased by integer ambiguities. This paper proposes a RAPS framework that combines Real-Time Kinematic (RTK) positioning with a tightly coupled INS for urban navigation applications. Experimental results demonstrate the effectiveness of this RAPS-INS-RTK framework: 85.84% of horizontal errors fall below 1.5 meters and 92.07% of vertical errors below 3 meters, using a smartphone-grade Inertial Measurement Unit (IMU) on a deep-urban dataset. This performance not only surpasses the Society of Automotive Engineers (SAE) requirements, but also shows a 10% improvement over traditional methods.
Atta Ur Rahman, Yousef Alsenani, Adeel Zafar
et al.
Abstract Cardiovascular diseases (CVDs) remain the leading cause of death worldwide, responsible for more than 17 million fatalities. Early detection of heart failure with high accuracy is crucial for clinical trials and therapy. Patients are categorized into various types of heart disease based on characteristics such as blood pressure, cholesterol level, and heart rate. With an automatic system, we can provide early diagnoses for those who are prone to heart failure by analyzing these characteristics. In this work, we deploy a novel self-attention-based transformer model that combines self-attention mechanisms and transformer networks to predict CVD risk. The self-attention layers capture contextual information and generate representations that effectively model complex patterns in the data. Self-attention mechanisms also provide interpretability by assigning each component of the input sequence an attention weight, which makes it possible for physicians to understand which features of the data contributed to the model's predictions. The model is adapted by adjusting the input and output layers, incorporating additional layers, and modifying the attention mechanisms to capture relevant information. The proposed model is tested on the Cleveland dataset, a benchmark dataset from the University of California Irvine (UCI) machine learning (ML) repository. Compared to several baseline approaches, the proposed model achieved the highest accuracy, 96.51%. Furthermore, our experiments demonstrate that the prediction rate of our model exceeds that of other cutting-edge approaches used for heart disease prediction.
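A minimal sketch of the scaled dot-product self-attention underlying such a model, in pure Python: each feature token attends to all others, and the returned weight rows are the per-component attention scores that support the interpretability claim. No learned projections are used here (Q = K = V = input), an assumption made only to keep the example self-contained; a real model learns W_q, W_k, and W_v.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention over a list of feature vectors.
    Returns (outputs, weights); weights[i][j] is how much token i
    attends to token j, and each row sums to 1."""
    d = len(tokens[0])
    weights, outputs = [], []
    for q in tokens:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        weights.append(w)
        # Output is the attention-weighted average of the values.
        outputs.append([sum(wj * vj[i] for wj, vj in zip(w, tokens))
                        for i in range(d)])
    return outputs, weights
```

The weight rows are exactly the quantity a clinician would inspect: a large entry says the corresponding input component dominated that position's representation.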
Babangida Modu, Md Pauzi Abdullah, Abdulrahman Alkassem
et al.
The study addresses the integration of hybrid hydrogen (H2) and battery (BT) energy storage systems into a renewable energy microgrid comprising solar photovoltaic (PV) and wind turbine (WT) systems. The research problem focuses on improving the effectiveness and computational efficiency of energy management systems (EMS) while ensuring high system reliability. Despite existing optimization methods for hybrid microgrids, challenges remain in optimizing energy storage and capacity planning in grid-connected microgrids. To solve this, we propose the Levy Flight Algorithm (LFA) to optimize the capacities of PV, WT, H2 tanks, electrolyzers (EL), fuel cells (FC), and BT, which presents a complex nonlinear optimization challenge. The novelty of this study lies in integrating the LFA with a rule-based EMS, enhancing system reliability and efficiency. The proposed approach significantly reduces the annualized system cost (ASC) and the levelized cost of energy (LCOE). The results demonstrate that the LFA outperforms methods such as the Salp Swarm Algorithm (SSA), Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and Genetic Algorithm (GA), yielding cost savings of $3,309, $5,297, $4,484, and $5,129, respectively. The LFA achieves the lowest LCOE at $0.275/kWh, compared to $0.278/kWh with SSA, $0.289/kWh with GA, $0.280/kWh with PSO, and $0.283/kWh with GWO. This research contributes to the broader scientific community by providing a more efficient approach to optimizing renewable energy microgrids with hybrid storage systems, thus promoting eco-friendly and cost-effective energy solutions. The proposed system design offers a pathway to future energy systems with high renewable integration, especially as technology advances and costs continue to decrease.
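The Lévy-flight mechanism behind the LFA can be sketched as follows: heavy-tailed step sizes are drawn via Mantegna's algorithm, and candidate capacity vectors are perturbed relative to the best-known design, so most moves are local but occasional long jumps escape local optima. The update rule `levy_move` and its `scale` parameter are illustrative, not the paper's exact formulation.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-distributed step via Mantegna's algorithm;
    beta is the stability index (1 < beta <= 2)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_move(position, best, scale=0.01, rng=random):
    """Perturb a candidate capacity vector (e.g., PV, WT, H2 tank,
    EL, FC, BT sizes) with Levy-sized steps weighted by the distance
    to the best-known design (illustrative update rule)."""
    return [x + scale * levy_step(rng=rng) * (x - b)
            for x, b in zip(position, best)]
```

In a full optimizer each move would be followed by evaluating ASC/LCOE under the rule-based EMS and keeping the better design; that evaluation loop is omitted here.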
Bacteria express a plethora of efflux pumps that can transport structurally varied molecules, including antimicrobial agents and antibiotics, out of cells. Efflux pump systems thus lower intracellular concentrations of antibiotics, allowing phenotypically multidrug-resistant (MDR) bacteria to survive amid higher concentrations of antibiotics. <i>Acinetobacter baumannii</i> is a classic example of a pathogen that can carry multiple efflux pump systems, which allows these bacteria to range from MDR to pan-drug resistant; the species is now considered a public health threat. Therefore, efflux pumps in <i>A. baumannii</i> have gained major attention worldwide, and there has been increased interest in studying their mechanisms of action, their substrates, and potential efflux pump inhibitors (EPIs). EPIs are molecules that inhibit efflux pumps, rendering pathogens susceptible to antimicrobial agents, and are thus considered potential therapeutic agents for use in conjunction with antibiotics. This review focuses on the various types of efflux pumps detected in <i>A. baumannii</i>, their molecular mechanisms of action, the substrates they transport, and the challenges in developing EPIs that can be clinically useful against <i>A. baumannii</i>.
Quadratization refers to a transformation of an arbitrary system of polynomial ordinary differential equations to a system with at most quadratic right-hand side. Such a transformation unveils new variables and model structures that facilitate model analysis, simulation, and control and offers a convenient parameterization for data-driven approaches. Quadratization techniques have found applications in diverse fields, including systems theory, fluid mechanics, chemical reaction modeling, and mathematical analysis. In this study, we focus on quadratizations that preserve the stability properties of the original model, specifically dissipativity at given equilibria. This preservation is desirable in many applications of quadratization including reachability analysis and synthetic biology. We establish the existence of dissipativity-preserving quadratizations, develop an algorithm for their computation, and demonstrate it in several case studies.
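A minimal worked example of quadratization, using the standard auxiliary-variable construction: the cubic system x' = -x³ (dissipative at the origin) becomes quadratic after introducing y = x², since then x' = -xy and y' = 2xx' = -2y². The sketch below integrates both systems with explicit Euler and checks they agree; the specific ODE and integrator are illustrative, not taken from the paper.

```python
def euler(f, state, h, steps):
    """Fixed-step explicit Euler integrator over tuple-valued states."""
    for _ in range(steps):
        deriv = f(state)
        state = tuple(s + h * d for s, d in zip(state, deriv))
    return state

def f_orig(state):
    # Original scalar ODE with cubic right-hand side: x' = -x**3.
    (x,) = state
    return (-x ** 3,)

def f_quad(state):
    # Quadratized system with new variable y = x**2:
    #   x' = -x*y,   y' = 2*x*x' = -2*y**2
    # Both right-hand sides are now at most quadratic.
    x, y = state
    return (-x * y, -2.0 * y * y)
```

Starting from a consistent initial condition (y0 = x0²), the x-component of the quadratic system tracks the original trajectory, and the algebraic invariant y = x² is preserved up to discretization error.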
Traffic sensors play a pivotal role in monitoring and assessing network-wide traffic conditions. However, the substantial cost of deploying an extensive sensor network across real-world highway systems can often prove prohibitive. Thus, the strategic selection of optimal sensor locations within budget and resource constraints becomes imperative, leading to the well-known Traffic Sensor Location Problem (TSLP). In this study, we introduce a novel framework to address the TSLP for large-scale highway networks, focusing on maximizing information gain in a joint vector space that comprehensively captures both network topology and segment-level features. To solve this optimization problem, we devised a genetic algorithm (GA) with penalty handling. Additionally, we developed a physics-guided random walk algorithm, which not only significantly reduces the search space but also offers remarkable flexibility in striking a practical balance between computational load and confidence in achieving global optimality. For illustration, the proposed framework was applied to the Savannah highway network in Georgia. The results from our GA method align well with those from exhaustive search, but with significantly reduced computational time. By leveraging information theory and maximizing information gain in a low-dimensional vector space, the proposed framework permits parallel, scalable computation and offers considerable potential in the strategic planning and deployment of various sensors for expansive, real-world highway networks.
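Penalty handling in a GA of this kind can be sketched as a fitness function that keeps budget-violating placements in the population but ranks them below every feasible one, so crossover can still exploit their good genes. The additive per-segment gain model and the penalty weight below are illustrative simplifications of the paper's information-gain objective.

```python
def penalized_fitness(selection, gains, costs, budget, penalty=1e3):
    """Fitness for a candidate sensor placement.

    selection : bit-vector, 1 if a sensor is placed on that segment
    gains     : per-segment information gain (illustrative; the paper
                uses an information-theoretic objective in a joint
                topology/feature vector space)
    costs     : per-segment deployment cost
    budget    : total deployment budget

    The fitness is total gain minus a penalty proportional to the
    budget overrun, so infeasible candidates survive but never
    outrank feasible ones when the penalty weight is large.
    """
    gain = sum(g for g, bit in zip(gains, selection) if bit)
    cost = sum(c for c, bit in zip(costs, selection) if bit)
    overrun = max(0.0, cost - budget)
    return gain - penalty * overrun
```

A GA would then apply standard selection, crossover, and mutation to the bit-vectors, using this value directly as the selection score.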
Most existing chaotic systems have drawbacks in engineering applications, such as a discontinuous range of chaotic parameters, weak chaotic properties, uneven chaotic sequence outputs, and dynamical degradation. To address these issues, this paper proposes a new method for designing three-dimensional chaotic maps with which one can obtain both the desired number of positive Lyapunov exponents and the desired values of those exponents. Simulation results show that the proposed system exhibits complex chaotic behavior and high complexity. Finally, the method is applied in an image encryption and transmission scheme, and experimental results show that the proposed scheme can resist brute-force, correlation, and differential attacks, and thus offers higher security.
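The role of positive Lyapunov exponents as the certificate of chaos can be illustrated on the classic one-dimensional logistic map, where the exponent is estimated by averaging log|f'(x)| along an orbit. This sketch only shows how the exponent is measured; prescribing the number and values of positive exponents for three-dimensional maps, as the paper does, is not reproduced here.

```python
import math

def lyapunov_logistic(r, x0=0.3, n=20000, burn=1000):
    """Estimate the Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along
    an orbit. A positive value indicates chaos; a negative value
    indicates a stable periodic regime."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n
```

At r = 4 the map is fully chaotic with exponent ln 2 ≈ 0.693, while at r = 3.2 the orbit settles onto a stable period-2 cycle and the estimate is negative.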
Gunnar Kudrjavets, Jeff Thomas, Nachiappan Nagappan
Satisfactory software performance is essential for the adoption and success of a product. In organizations that follow traditional software development models (e.g., waterfall), Software Performance Engineering (SPE) involves time-consuming experimental modeling and performance testing outside the actual production environment. Such existing SPE methods, however, are not optimized for environments utilizing Continuous Integration (CI) and Continuous Delivery (CD), which result in a high frequency and high volume of code changes. We present a summary of lessons learned and propose improvements to the SPE process in the context of CI/CD. Our findings are based on SPE work on products A and B conducted over 5 years at an online services company X. We find that (a) SPE has largely become a post hoc activity based on data from the production environment, (b) successful application of SPE techniques requires frequent re-evaluation of priorities, and (c) engineers working on SPE require a broader skill set than that traditionally possessed by performance engineers.
It is essential to discuss the roles, difficulties, and opportunities of people of different genders in software engineering research, education, and industry. Although some literature reviews address software engineering and gender, it remains unclear what research and practices exist in Asia for handling gender aspects in software development and engineering. We conducted a systematic literature review to obtain a comprehensive view of gender research and practices in Asia. From 463 publications, we identified 32 papers and analyzed them by country and publication year. Researchers and practitioners from various organizations actively work on gender research and practices in several countries, including China, India, and Turkey. We identified topics and classified them into seven categories, ranging from personal mental health and team building to organization. Future research directions include investigating the synergy between (regional) gender aspects and cultural concerns, and considering possible contributions and dependencies among different topics, to build a solid foundation for accelerating further research and deriving actionable practices.
The need for robotic systems to be verified grows as robots are increasingly used in complex applications with safety implications. Model-driven engineering and domain-specific languages (DSLs) have proven useful in the development of complex systems. RoboChart is a DSL for modelling robot software controllers using state machines and a simple component model. It is distinctive in that it has a formal semantics and support for automated verification. Our work enriches RoboChart with support for modelling architectures and architectural patterns used in the robotics domain. Support is in the shape of an additional DSL, RoboArch, whose primitive concepts encapsulate the notion of a layered architecture and architectural patterns for use in the design of the layers that are only informally described in the literature. A RoboArch model can be used to generate automatically a sketch of a RoboChart model, and the rules for automatic generation define a semantics for RoboArch. Additional patterns can be formalised by extending RoboArch. In this paper, we present RoboArch, and give a perspective of how it can be used in conjunction with CorteX, a software framework developed for the nuclear industry.
Nestor Jonguitud-Borrego, Koray Malcı
et al.
The COVID-19 pandemic has become a global challenge for the healthcare systems of many countries, with 6 million people having lost their lives and 530 million more having tested positive for the virus. Robust testing and a comprehensive track-and-trace process for positive patients are essential for effective pandemic control, leading to high demand for diagnostic testing. To meet demand and increase testing capacity worldwide, automated workflows have come into prominence, as they enable high-throughput screening, faster processing, exclusion of human error, repeatability, reproducibility, and diagnostic precision. The gold standard for COVID-19 testing so far has been RT-qPCR; however, different SARS-CoV-2 testing methods have been developed to be combined with high-throughput testing to improve diagnosis. Case studies in China, Spain, and the United Kingdom have been reviewed, and automation has proven promising for mass testing. Free and Open Source scientific and medical Hardware (FOSH) plays a vital role in this matter, but some challenges must be overcome before automation can be fully implemented. This review discusses the importance of automated high-throughput testing, the different equipment available, the bottlenecks of its implementation, and selected key case studies that, owing to their high effectiveness, are already in use in hospitals and research centres.
Mahboubeh Nabavinia, Baishali Kanjilal, Manoj Pandey
et al.
A heterogeneous palladium-anchored resorcinol-formaldehyde-hyperbranched PEI mesoporous catalyst, prepared by one-pot synthesis, was successfully used for in situ Suzuki-Miyaura cross-coupling synthesis of the anticancer prodrug PP-121 from iodoprazole and boronic ester precursors. In a 2D in vitro model, the mesoporous catalyst with the non-cytotoxic precursors showed excellent cytocompatibility and strongly suppressed PC3 cancer cell proliferation: viability fell by 50%, metabolic activity by 55%, and flow cytometry showed an enhanced rate of early and late apoptosis, induced only by successful in situ synthesis of PP-121 from the precursors. 3D gelatin methacrylate hydrogel-encapsulated in vitro cell models confirmed these results, with a 52% reduction in cell metabolism and marked apoptosis of PC3 cells when the Pd-anchored catalyst was combined with the precursors. In situ Suzuki-Miyaura cross-coupling of non-cytotoxic precursors into a cancer drug, together with their encapsulation in an injectable hydrogel, could support tumor-site drug delivery strategies that circumvent the deleterious side effects and poor bioavailability of conventional chemotherapy routes while enhancing efficacy.
Current cyber-physical systems (CPS) are expected to accomplish complex tasks. To achieve this goal, high-performance but unverified controllers (e.g., deep neural networks, black-box controllers from third parties) are applied, which makes it very challenging to keep the overall CPS safe. By sandboxing these controllers, we are able not only to use them but also to enforce safety properties over the controlled physical systems at the same time. However, currently available solutions for sandboxing controllers are applicable only to deterministic (a.k.a. non-stochastic) systems, possibly affected by bounded disturbances. In this paper, we propose, for the first time, a novel solution for sandboxing unverified complex controllers for CPS operating in noisy environments (a.k.a. stochastic CPS), and we provide probabilistic guarantees on their safety. The unverified control input is observed at each time instant and checked to determine whether it violates the maximal tolerable probability of reaching the unsafe set. If this probability exceeds a given threshold, the unverified control input is rejected, and the advisory input provided by the optimal safety controller is used to maintain the probabilistic safety guarantee. The proposed approach is illustrated empirically, and the results indicate that the expected safety probability is guaranteed.
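The per-step accept/reject logic of the sandbox can be sketched as follows: the unverified input is used only while the estimated probability of reaching the unsafe set stays within the tolerable threshold; otherwise the advisory safety input takes over. The one-dimensional risk proxy and safety controller below are toy stand-ins for the paper's formal probabilistic machinery.

```python
def sandbox_step(state, u_unverified, u_safe, reach_unsafe_prob, threshold):
    """One step of the safety sandbox: accept the unverified
    controller's input only if the estimated probability of reaching
    the unsafe set stays within the tolerable threshold; otherwise
    fall back to the verified safety controller's advisory input."""
    if reach_unsafe_prob(state, u_unverified) <= threshold:
        return u_unverified, True     # accepted as-is
    return u_safe(state), False       # rejected; advisory input used

# Toy 1-D model (illustrative): the unsafe set is |x| >= 1, and the
# risk of reaching it in one step is crudely proxied by |x + u|.
def toy_risk(x, u):
    return min(1.0, abs(x + u))

def toy_safe_controller(x):
    return -0.5 * x    # pushes the state back toward the origin
```

Running this check at every time instant is what yields the overall probabilistic safety guarantee: the unverified controller drives the plant whenever it is cheap to certify, and the safety controller intervenes only on risky inputs.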
In empirical software engineering, benchmarks can be used for comparing different methods, techniques and tools. However, the recent ACM SIGSOFT Empirical Standards for Software Engineering Research do not include an explicit checklist for benchmarking. In this paper, we discuss benchmarks for software performance and scalability evaluation as example research areas in software engineering, relate benchmarks to some other empirical research methods, and discuss the requirements on benchmarks that may constitute the basis for a checklist of a benchmarking standard for empirical software engineering research.