The Oracle Problem in Software Testing: A Survey
Earl T. Barr, M. Harman, Phil McMinn
et al.
Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behaviour is called the “test oracle problem”. Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test oracle automation, the human has to determine whether observed behaviour is correct. The literature on test oracles has introduced techniques for oracle automation, including modelling, specifications, contract-driven development and metamorphic testing. When none of these is completely adequate, the final source of test oracle information remains the human, who may be aware of informal specifications, expectations, norms and domain-specific information that provide informal oracle guidance. All forms of test oracles, even the humble human, involve challenges of reducing cost and increasing benefit. This paper provides a comprehensive survey of current approaches to the test oracle problem and an analysis of trends in this important area of software testing research and practice.
1053 citations
en
Computer Science
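The metamorphic testing named in the abstract above can be illustrated with a minimal sketch (the function and relation below are illustrative choices, not taken from the survey): instead of an exact expected value for each input, a metamorphic relation between outputs serves as a partial oracle.

```python
import math
import random

def metamorphic_test_sin(trials=1000, tol=1e-9):
    """Check the metamorphic relation sin(x) == sin(pi - x) on random
    inputs, without needing an exact expected value for any input."""
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        # The relation itself acts as a partial test oracle.
        if abs(math.sin(x) - math.sin(math.pi - x)) > tol:
            return False
    return True
```

This sidesteps the oracle problem for inputs where no reference output exists, at the cost of detecting only faults that violate the chosen relation.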
Robotics
F. Osório, D. Wolf, K. Branco
et al.
1352 citations
en
Engineering
Help or hindrance? The travel, energy and carbon impacts of highly automated vehicles
Z. Wadud, D. MacKenzie, P. Leiby
Experts predict that new automobiles will be capable of driving themselves under limited conditions within 5–10 years, and under most conditions within 10–20 years. Automation may affect road vehicle energy consumption and greenhouse gas (GHG) emissions in a host of ways, positive and negative, by causing changes in travel demand, vehicle design, vehicle operating profiles, and choices of fuels. In this paper, we identify specific mechanisms through which automation may affect travel and energy demand and resulting GHG emissions and bring them together using a coherent energy decomposition framework. We review the literature for estimates of the energy impacts of each mechanism and, where the literature is lacking, develop our own estimates using engineering and economic analysis. We consider how widely applicable each mechanism is, and quantify the potential impact of each mechanism on a common basis: the percentage change it is expected to cause in total GHG emissions from light-duty or heavy-duty vehicles in the U.S. Our primary focus is travel-related energy consumption and emissions, since potential lifecycle impacts are generally smaller in magnitude. We explore the net effects of automation on emissions through several illustrative scenarios, finding that automation might plausibly reduce road transport GHG emissions and energy use by nearly half – or nearly double them – depending on which effects come to dominate. We also find that many potential energy-reduction benefits may be realized through partial automation, while the major energy/emission downside risks appear more likely at full automation. We close by presenting some implications for policymakers and identifying priority areas for further research.
866 citations
en
Engineering
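The "coherent energy decomposition framework" in the abstract above combines per-mechanism effects; a toy sketch of how multiplicative percentage changes compound toward "nearly half – or nearly double" (the mechanism names and numbers below are made up for illustration, not the paper's estimates):

```python
def net_ghg_change(effects):
    """Combine per-mechanism fractional changes multiplicatively,
    returning the net fractional change in GHG emissions."""
    net = 1.0
    for change in effects.values():
        net *= 1.0 + change
    return net - 1.0

# Illustrative scenarios (numbers invented for the sketch):
optimistic = {"eco-driving": -0.15, "platooning": -0.10, "right-sizing": -0.20}
pessimistic = {"induced travel": 0.30, "higher speeds": 0.25, "empty running": 0.15}
```

With these invented inputs the optimistic scenario nets roughly a 39% reduction and the pessimistic one roughly an 87% increase, showing how the same framework can span outcomes from large savings to a near-doubling.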
An Industry-Based Survey of Reliability in Power Electronic Converters
Shaoyong Yang, A. Bryant, P. Mawby
et al.
1928 citations
en
Engineering
Rise of the Robots: Technology and the Threat of a Jobless Future
M. Ford
859 citations
en
Business, Engineering
Industrial Wireless Sensor Networks: Challenges, Design Principles, and Technical Approaches
V. C. Gungor, G. Hancke
1785 citations
en
Computer Science, Engineering
Holistic Approach for Human Resource Management in Industry 4.0
Fabian Hecklau, M. Galeitzke, Sebastian Flachs
et al.
To cope with the knowledge and competence challenges related to new technologies and processes of Industry 4.0, manufacturing companies need new strategic approaches for holistic human resource management. Due to the continuous automation of simple manufacturing processes, the number of workspaces with a high level of complexity will increase, which results in the need for a highly educated staff. The challenge is to qualify employees to shift their capacities to workspaces with more complex processes and to ensure the retention of jobs in changing working environments. A strategic approach for employee qualification is described in this contribution.
755 citations
en
Engineering
A High Resolution Optical Satellite Image Dataset for Ship Recognition and Some New Baselines
Zikun Liu, Liu Yuan, Lubin Weng
et al.
Institute of Automation Chinese Academy of Sciences, 95 Zhongguancun East Road, 100190, Beijing, China
656 citations
en
Computer Science
Reverse engineering and design recovery: a taxonomy
E. Chikofsky, J. Cross
2403 citations
en
Engineering, Computer Science
AnalogAgent: Self-Improving Analog Circuit Design Automation with LLM Agents
Zhixuan Bao, Zhuoyi Lin, Jiageng Wang
et al.
Recent advances in large language models (LLMs) suggest strong potential for automating analog circuit design. Yet most LLM-based approaches rely on a single-model loop of generation, diagnosis, and correction, which favors succinct summaries over domain-specific insight and suffers from context attrition that erases critical technical details. To address these limitations, we propose AnalogAgent, a training-free agentic framework that integrates an LLM-based multi-agent system (MAS) with self-evolving memory (SEM) for analog circuit design automation. AnalogAgent coordinates a Code Generator, Design Optimizer, and Knowledge Curator to distill execution feedback into an adaptive playbook in SEM and retrieve targeted guidance for subsequent generation, enabling cross-task transfer without additional expert feedback, databases, or libraries. Across established benchmarks, AnalogAgent achieves 92% Pass@1 with Gemini and 97.4% Pass@1 with GPT-5. Moreover, with compact models (e.g., Qwen-8B), it yields a +48.8% average Pass@1 gain across tasks and reaches 72.1% Pass@1 overall, indicating that AnalogAgent substantially strengthens open-weight models for high-quality analog circuit design automation.
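The Pass@1 scores quoted above are conventionally computed with the standard unbiased pass@k estimator from the code-generation literature (a general definition; the paper's exact evaluation protocol is not given here):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k), the probability
    that at least one of k samples drawn without replacement from n
    generations is correct, given that c of the n are correct."""
    if n - c < k:
        return 1.0  # too few failures to fill a sample of size k
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For k = 1 this reduces to the fraction of correct generations, c / n.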
Utilizing LLMs for Industrial Process Automation
Salim Fares
A growing number of publications in recent years address best practices for using Large Language Models (LLMs) in software engineering. However, most of this work focuses on widely used general-purpose programming languages such as Python, owing to the abundance of training data that their widespread usage provides. The utility of LLMs for software within the industrial process automation domain, with highly specialized languages that are typically used only in proprietary contexts, remains underexplored. This research aims to utilize and integrate LLMs in the industrial development process, solving real-life programming tasks (e.g., generating a movement routine for a robotic arm) and accelerating the development cycles of manufacturing systems.
Chatbots in Multivariable Calculus Exams: Innovative Tool or Academic Risk?
Gustavo Navas, Julio Proaño-Orellana, Rogelio Orizondo
et al.
The integration of AI tools like ChatGPT into educational assessments, particularly in the context of Multivariable Calculus, represents a transformative approach to personalized and scalable learning. This study examines the Exams as a Service (EaaS)-Flipped Chatbot Test (FCT) framework, implemented through the AIQuest platform, to explore how chatbots can support assessment processes while addressing risks related to automation and academic integrity. The methodology combines static and dynamic assessment modes within a cloud-based environment that generates, evaluates, and provides feedback on student responses. Quantitative survey data and qualitative written reflections were analyzed using a mixed-methods approach, incorporating Grounded Theory to identify emerging cognitive patterns. The results reveal differences in students’ engagement, performance, and reasoning patterns between AI-assisted and non-AI assessment conditions, highlighting the role of structured AI-generated feedback in supporting reflective and metacognitive processes. Quantitative results indicate higher and more homogeneous performance under the reverse evaluation, while survey responses show generally positive perceptions of feedback usefulness and task appropriateness. This study contributes integrated quantitative and qualitative evidence on the design of AI-assisted evaluation frameworks as formative and diagnostic tools, offering guidance for educators to implement AI-based evaluation systems.
Flight-deck automation: promises and problems
E. Wiener, R. Curry
639 citations
en
Computer Science
Transforming Evidence Synthesis: A Systematic Review of the Evolution of Automated Meta-Analysis in the Age of AI
Lingbo Li, Anuradha Mathrani, Teo Susnjak
Exponential growth in scientific literature has heightened the demand for efficient evidence-based synthesis, driving the rise of the field of Automated Meta-analysis (AMA) powered by natural language processing and machine learning. This PRISMA systematic review introduces a structured framework for assessing the current state of AMA, based on screening 978 papers from 2006 to 2024, and analyzing 54 studies across diverse domains. Findings reveal a predominant focus on automating data processing (57%), such as extraction and statistical modeling, while only 17% address advanced synthesis stages. Just one study (2%) explored preliminary full-process automation, highlighting a critical gap that limits AMA's capacity for comprehensive synthesis. Despite recent breakthroughs in large language models (LLMs) and advanced AI, their integration into statistical modeling and higher-order synthesis, such as heterogeneity assessment and bias evaluation, remains underdeveloped. This has constrained AMA's potential for fully autonomous meta-analysis. From our dataset spanning medical (67%) and non-medical (33%) applications, we found that AMA has exhibited distinct implementation patterns and varying degrees of effectiveness in actually improving efficiency, scalability, and reproducibility. While automation has enhanced specific meta-analytic tasks, achieving seamless, end-to-end automation remains an open challenge. As AI systems advance in reasoning and contextual understanding, addressing these gaps is now imperative. Future efforts must focus on bridging automation across all meta-analysis stages, refining interpretability, and ensuring methodological robustness to fully realize AMA's potential for scalable, domain-agnostic synthesis.
AutoEDA: Enabling EDA Flow Automation through Microservice-Based LLM Agents
Yiyi Lu, Hoi Ian Au, Junyao Zhang
et al.
Electronic Design Automation (EDA) remains heavily reliant on tool command language (Tcl) scripting to drive complex RTL-to-GDSII flows. This scripting-based paradigm is labor-intensive, error-prone, and difficult to scale across large design projects. Recent advances in large language models (LLMs) suggest a new paradigm of natural language-driven automation. However, existing EDA efforts remain limited and face key challenges, including the absence of standardized interaction protocols and dependence on external APIs that introduce privacy risks. We present AutoEDA, a framework that leverages the Model Context Protocol (MCP) to enable end-to-end natural language control of RTL-to-GDSII design flows. AutoEDA introduces MCP-based servers for task decomposition, tool selection, and automated error handling, ensuring robust interaction between LLM agents and EDA tools. To enhance reliability and confidentiality, we integrate locally fine-tuned LLM agents. We further contribute a benchmark generation pipeline for diverse EDA scenarios and extend CodeBLEU with Tcl-specific enhancements for domain-aware evaluation. Together, these contributions establish a comprehensive framework for LLM-driven EDA automation, bridging natural language interfaces with modern chip design flows. Empirical results show that AutoEDA achieves up to 9.9 times higher accuracy than naive approaches while reducing token usage by approximately 97% compared to in-context learning.
Deep Representation Learning for Electronic Design Automation
Pratik Shrestha, Saran Phatharodom, Alec Aversa
et al.
Representation learning has become an effective technique utilized by electronic design automation (EDA) algorithms, which leverage the natural representation of workflow elements as images, grids, and graphs. By addressing challenges related to the increasing complexity of circuits and stringent power, performance, and area (PPA) requirements, representation learning facilitates the automatic extraction of meaningful features from complex data formats, including images, grids, and graphs. This paper examines the application of representation learning in EDA, covering foundational concepts and analyzing prior work and case studies on tasks that include timing prediction, routability analysis, and automated placement. Key techniques, including image-based methods, graph-based approaches, and hybrid multimodal solutions, are presented to illustrate the improvements provided in routing, timing, and parasitic prediction. These advancements demonstrate the potential of representation learning to enhance efficiency, accuracy, and scalability in current integrated circuit design flows.
Toward an Intent-Based and Ontology-Driven Autonomic Security Response in Security Orchestration Automation and Response
Zequan Huang, Jacques Robin, Nicolas Herbaut
et al.
Modern Security Orchestration, Automation, and Response (SOAR) platforms must rapidly adapt to continuously evolving cyber attacks. Intent-Based Networking has emerged as a promising paradigm for cyber attack mitigation through high-level declarative intents, which offer greater flexibility and persistency than procedural actions. In this paper, we bridge the gap between two active research directions: Intent-Based Cyber Defense and Autonomic Cyber Defense, by proposing a unified, ontology-driven security intent definition leveraging the MITRE-D3FEND cybersecurity ontology. We also propose a general two-tiered methodology for integrating such security intents into decision-theoretic Autonomic Cyber Defense systems, enabling hierarchical and context-aware automated response capabilities. The practicality of our approach is demonstrated through a concrete use case, showcasing its integration within next-generation Security Orchestration, Automation, and Response platforms.
Taming Uncertainty via Automation: Observing, Analyzing, and Optimizing Agentic AI Systems
Dany Moshkovich, Sergey Zeltyn
Large Language Models (LLMs) are increasingly deployed within agentic systems - collections of interacting, LLM-powered agents that execute complex, adaptive workflows using memory, tools, and dynamic planning. While enabling powerful new capabilities, these systems also introduce unique forms of uncertainty stemming from probabilistic reasoning, evolving memory states, and fluid execution paths. Traditional software observability and operations practices fall short in addressing these challenges. This paper presents our vision of AgentOps: a comprehensive framework for observing, analyzing, optimizing, and automating operation of agentic AI systems. We identify distinct needs across four key roles - developers, testers, site reliability engineers (SREs), and business users - each of whom engages with the system at different points in its lifecycle. We present the AgentOps Automation Pipeline, a six-stage process encompassing behavior observation, metric collection, issue detection, root cause analysis, optimized recommendations, and runtime automation. Throughout, we emphasize the critical role of automation in managing uncertainty and enabling self-improving AI systems - not by eliminating uncertainty, but by taming it to ensure safe, adaptive, and effective operation.
High-Resolution Interferometric Temperature Sensor Based on Two DFB Fiber Lasers with High-Temperature Monitoring Potential
Mikhail I. Skvortsov, Kseniya V. Kolosova, Alexander V. Dostovalov
et al.
A high-resolution temperature sensor using the beat frequency measurement between the modes of two DFB fiber lasers is presented. The laser cavities are formed by the femtosecond inscription technique in a highly Er/Yb co-doped phosphosilicate fiber with low optical losses and compact design. The experimental results show a sensitivity of 1 GHz/°C, leading to a temperature resolution of 0.02 °C restricted by the thermistor used in the experiment. The maximum possible resolution determined by the laser linewidth is estimated as 2 × 10<sup>−6</sup> °C. The operation of such a sensor at high temperatures (≈750 °C) with the possibility of further temperature increase is demonstrated. The combination of high resolution and broad temperature range makes the sensor attractive for various applications, especially in high-temperature monitoring.
Applied optics. Photonics
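The resolution figures in the abstract above follow from the 1 GHz/°C sensitivity by simple division; a sketch of the arithmetic (assuming resolution = minimum detectable beat-frequency shift divided by sensitivity):

```python
SENSITIVITY_HZ_PER_C = 1e9  # 1 GHz/°C, as reported in the abstract

def temperature_resolution(min_detectable_shift_hz):
    """Smallest resolvable temperature step (deg C) for a given
    minimum detectable beat-frequency shift (Hz)."""
    return min_detectable_shift_hz / SENSITIVITY_HZ_PER_C

# Thermistor-limited case: the reported 0.02 deg C resolution
# corresponds to resolving a 20 MHz beat-frequency shift.
thermistor_limited = temperature_resolution(20e6)

# Linewidth-limited case: the quoted 2e-6 deg C implies a ~2 kHz
# resolvable shift (inferred from the abstract's numbers, not stated).
linewidth_limited = temperature_resolution(2e3)
```

The same division also shows why a narrower laser linewidth translates directly into finer temperature resolution at fixed sensitivity.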
Emerging Technologies in Pretreatment and Hydrolysis for High-Solid-Loading Bioethanol Production from Lignocellulosic Biomass
Nida Arshad, Elizabeth Jayex Panakkal, Palani Bharathy Kalivarathan
et al.
The global reliance on fossil fuels has caused severe environmental challenges, emphasizing the urgent need for sustainable and renewable energy sources. Bioethanol production from lignocellulosic biomass has emerged as a promising alternative due to its abundance, renewability, and carbon-neutral footprint. However, its economic feasibility remains a major obstacle owing to high production costs, particularly those associated with low ethanol titers and the energy-intensive distillation they entail. High-solid loading processes (≥15% <i>w</i>/<i>w</i> or <i>w</i>/<i>v</i>) have demonstrated potential to overcome these limitations by minimizing water and solvent consumption, enhancing sugar concentrations, increasing ethanol titers, and lowering downstream processing costs. Nevertheless, high-solid loading also introduces operational bottlenecks, such as elevated viscosity, poor mixing, and limited mass and heat transfer, which hinder enzymatic hydrolysis efficiency. This review critically examines emerging pretreatment and enzymatic hydrolysis strategies tailored for high-solid loading conditions. It also explores techniques that improve sugar yields and conversion efficiency while addressing key technical barriers, including enzyme engineering, process integration, and optimization. By evaluating these challenges and potential mitigation strategies, this review provides actionable insights to intensify lignocellulosic ethanol production and advance the development of scalable, cost-effective biorefinery platforms.
Fermentation industries. Beverages. Alcohol