Graham Cooper, J. Sweller
Results for "Automation"
Showing 20 of ~850,688 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
L. Noldus, A. Spink, R. Tegelenbosch
F. Daerden, D. Lefeber
Louise Davison, Zoë Alice Bell, Hong Gao
Cloning large biosynthetic gene clusters (BGCs) is fundamental to unlocking microbial natural product potential for drug discovery and biotechnology. These clusters encode diverse bioactive compounds, but their size, high GC content, and complex architecture pose significant technical challenges. This review scrutinises recent advances in BGC cloning strategies, categorising them into three major groups: (1) direct release-and-capture methods, (2) genome-integrated preconditioning systems, and (3) CRISPR-assisted hybrid platforms. It compares the strengths, limitations, and reported efficiencies of these strategies, highlighting trade-offs in precision, scalability, and workflow complexity. Emerging trends, such as AI-driven genome mining, modular synthetic biology toolkits, and high-throughput automation, are reshaping the cloning landscape, enabling predictive design and streamlined assembly of clusters exceeding 100 kb. By integrating comparative analysis with future perspectives, this review outlines how next-generation strategies will accelerate heterologous expression, natural product discovery, and sustainable biomanufacturing.
Ming Du, Yanqi Luo, Srutarshi Banerjee et al.
We present Experiment Automation Agents (EAA), a vision-language-model-driven agentic system designed to automate complex experimental microscopy workflows. EAA integrates multimodal reasoning, tool-augmented action, and optional long-term memory to support both autonomous procedures and interactive user-guided measurements. Built on a flexible task-manager architecture, the system enables workflows ranging from fully agent-driven automation to logic-defined routines that embed localized LLM queries. EAA further provides a modern tool ecosystem with two-way compatibility for Model Context Protocol (MCP), allowing instrument-control tools to be consumed or served across applications. We demonstrate EAA at an imaging beamline at the Advanced Photon Source, including automated zone plate focusing, natural language-described feature search, and interactive data acquisition. These results illustrate how vision-capable agents can enhance beamline efficiency, reduce operational burden, and lower the expertise barrier for users.
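The tool-registration-and-dispatch pattern such an agentic system rests on can be sketched in a few lines. Everything below (the `move_stage`/`acquire_image` tools, the JSON call format, the canned model reply) is invented for illustration and is not the EAA or MCP API.

```python
import json

# Hypothetical sketch of a tool-augmented agent loop in the style the
# abstract describes; none of these names come from the EAA codebase.
TOOLS = {}

def tool(fn):
    """Register a callable as an instrument-control tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def move_stage(x_um: float, y_um: float) -> str:
    return f"stage moved to ({x_um}, {y_um}) um"

@tool
def acquire_image(exposure_s: float) -> str:
    return f"image acquired with {exposure_s}s exposure"

def run_action(model_reply: str) -> str:
    """Parse a JSON tool call emitted by the model and dispatch it."""
    call = json.loads(model_reply)
    return TOOLS[call["name"]](**call["args"])

# A canned "model reply" stands in for the vision-language model.
print(run_action('{"name": "move_stage", "args": {"x_um": 10.0, "y_um": -5.0}}'))
```

In a real system the model reply would come from the vision-language model, and the registry would be exposed over a protocol such as MCP rather than a local dict.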
Ilya Levin
The emergence of generative artificial intelligence (GenAI) represents not an incremental technological advance but a qualitative epistemological shift that challenges foundational assumptions of computer science. Whereas machine learning has been described as the automation of automation, generative AI operates by navigating contextual, semantic, and stylistic coherence rather than optimizing predefined objective metrics. This paper introduces the concept of Vibe-Automation to characterize this transition. The central claim is that the significance of GenAI lies in its functional access to operationalized tacit regularities: context-sensitive patterns embedded in practice that cannot be fully specified through explicit algorithmic rules. Although generative systems do not possess tacit knowledge in a phenomenological sense, they operationalize sensitivities to tone, intent, and situated judgment encoded in high-dimensional latent representations. On this basis, the human role shifts from algorithmic problem specification toward Vibe-Engineering, understood as the orchestration of alignment and contextual judgment in generative systems. The paper connects this epistemological shift to educational and institutional transformation by proposing a conceptual framework structured across three analytical levels and three domains of action: faculty worldview, industry relations, and curriculum design. The risks of mode collapse and cultural homogenization are briefly discussed, emphasizing the need for deliberate engagement with generative systems to avoid regression toward synthetic uniformity.
Automation Surprises, N. Sarter, D. Woods et al.
Gary Klein, David D. Woods, J. Bradshaw et al.
Roly Gutarra Romero, Alma Gabriela Valente Mercado, Luis Ramírez Sirgo
In recent years, the concept of dynamic capabilities has acquired new content. As higher-order competencies, they allow organizations to continually absorb new knowledge, flexibly recombine resources, and adapt to a rapidly changing environment. A key part of dynamic capabilities is working with the future, starting with a basic skill: futures literacy (FL). Since this competence is central to an organization's human resources, its development matters, beginning with university programs. For a long time, there were no objective tools for measuring the degree of its mastery. The authors of this article attempt to fill this gap by offering an innovative approach to identifying and standardizing the assessment of FL competence. Six theoretical dimensions of FL are proposed as a basis for grouping assessment criteria and for compiling and interpreting final assessments. The corresponding dimensions, FL sub-competencies that include foresight, the assessment of future scenarios, and decision-making under uncertainty, can be assessed independently of each other. The ability to measure the initial level of FL will allow for the development of more effective educational programs for this competence.
Wu Zhihua, Peng Chen, Tian Engang et al.
Cyber-physical microgrids, as a representative industrial system, seamlessly integrate computation, communication, control, and physical devices, making them vulnerable to cyber-attacks that can trigger cascading failures and potentially lead to the collapse of the entire grid. This paper aims to develop an efficient detection framework for collusive stealthy attacks in cyber-physical microgrids. First, an ℒ∞ unknown input observer (UIO) is deployed at the control center to monitor communication links between distributed generation units (DGUs). By treating interconnections and secondary control as unknown inputs, the observer gain is designed using only local information. Then, the vulnerability of the ℒ∞ UIO-based monitoring unit is analyzed, and a collusive stealthy attack is devised that disrupts grid operations without alerting the ℒ∞ UIO-based detection mechanism. To counteract this novel attack, a dynamic encoding mechanism is developed for the communication links between DGUs. This mechanism encodes control signals prior to transmission and decodes them at the control center. Furthermore, an in-depth analysis of the feasibility criteria for the encoding matrix is conducted. Finally, the efficacy of the proposed detection framework is validated through a series of simulation experiments.
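The encode-before-transmit idea can be illustrated with a toy 2x2 example: the signal is multiplied by an invertible matrix before it travels over the link and recovered by the inverse at the control center. The matrix `E` and the signal values are arbitrary; the paper's actual feasibility criteria for the encoding matrix are not reproduced here.

```python
# Minimal sketch of encoding control signals before transmission and
# decoding them at the control center, using an assumed-invertible 2x2
# encoding matrix. Values are illustrative only.
def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inverse_2x2(M):
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    assert det != 0, "encoding matrix must be invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

E = [[2.0, 1.0], [1.0, 1.0]]                 # encoding matrix
signal = [0.5, -1.5]                          # control signal from a DGU
encoded = mat_vec(E, signal)                  # what travels over the link
decoded = mat_vec(inverse_2x2(E), encoded)    # recovered at the control center
print(decoded)  # → [0.5, -1.5]
```

A stealthy attacker who does not know `E` can no longer inject a perturbation that stays consistent with the observer's expectations, which is the intuition behind the defense.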
Guoxin Ma, Kang Tian, Hongbo Sun et al.
Coevolutionary spreading, the interdependent propagation of multiple types of information (or epidemics, or social behaviors), has attracted both scientific and industrial attention due to its complex dynamics. While agent-based models (ABMs) are well-suited for modeling single-type contagion dynamics, they struggle to represent the microscopic interdependencies of co-evolving information types within different network topologies. This paper proposes a multi-information co-evolution propagation model based on self-organizing multi-agents, breaking through the limitations of traditional threshold spreading models and agent-based models. The model, validated by its consistency with traditional SIR models under well-mixed agent conditions, can be used to uncover the spreading mechanisms on different network topologies (such as ER, BA, and WS) through a series of transmitting and recovering rules that act on each agent with social contagion behaviors and attributes. Furthermore, sophisticated spreading patterns, such as active counterattack and cooperative operation, are also explored with this model to simulate the multi-information propagation process. These complex propagation simulations reveal some interesting phenomena: (1) When counterattacking the spread of a specific source information, blindly increasing the proportion of counterattackers or the information exclusion coefficient may not be the best choice, even without considering costs. (2) In networks with long-short loop structures, compared to single-information dissemination, the coevolutionary spread of two types of information is more prone to avalanche phenomena, with the S (susceptible) state of information dropping sharply from a steady state of 60% to a steady state of 20% by the 10th generation.
These findings provide actionable insights for controlling misinformation in social networks and optimizing public health interventions, emphasizing that "more intervention" does not always equate to "better control" in coevolutionary systems.
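The transmitting-and-recovering rule structure that such agent-based models build on can be sketched, in the single-information case, as an SIR process where each infected agent applies local rules every generation. The ring topology, parameters, and seed below are illustrative stand-ins, not the paper's model.

```python
import random

# Toy agent-based SIR on a ring network: per-agent transmission and
# recovery rules, applied synchronously each generation. Parameters
# are arbitrary and only illustrate the rule structure.
random.seed(0)
N, beta, gamma = 200, 0.6, 0.2   # agents, infection prob., recovery prob.
state = ["S"] * N
state[0] = "I"                   # one initially infected agent

def neighbors(i):
    return [(i - 1) % N, (i + 1) % N]

for step in range(200):
    nxt = state[:]
    for i, s in enumerate(state):
        if s == "I":
            for j in neighbors(i):
                if state[j] == "S" and random.random() < beta:
                    nxt[j] = "I"          # transmission rule
            if random.random() < gamma:
                nxt[i] = "R"              # recovery rule
    state = nxt

print(state.count("S"), state.count("I"), state.count("R"))
```

The coevolutionary model in the paper layers multiple information types, exclusion coefficients, and counterattack behaviors on top of this per-agent rule skeleton.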
Qingxi Hu, Xiaoyang Hou, Hekai Shi et al.
Tension-free hernioplasty has effectively reduced postoperative recurrence and mitigated complications by employing polymer patches. However, clinically used polymer patches often fall short in anti-deformation, anti-adhesion, and tissue-integration performance, which can result in visceral adhesions and foreign body reactions after implantation. In this study, a Janus three-layer composite patch was developed for abdominal wall defect repair using a combination of 3D printing, electrospraying, and electrospinning technologies. On the visceral side, a dense electrospun polyvinyl alcohol/sodium hyaluronate (PVA/HA) scaffold was fabricated to inhibit cell adhesion. The middle layer, composed of polycaprolactone (PCL), provided mechanical support. On the muscle-facing side, a loose and porous electrospun nanofiber scaffold was created through electrospraying and electrospinning, promoting cell adhesion and migration to facilitate tissue regeneration. Mechanical testing demonstrated that the composite patch possessed excellent tensile strength (23.58 N/cm), surpassing the clinical standard (16 N/cm). Both in vitro and in vivo evaluations confirmed the patch's outstanding biocompatibility. Compared with the control PCL patch, the Janus composite patch significantly reduced visceral adhesion and enhanced tissue repair in animal models. Collectively, this Janus composite patch integrates anti-deformation, anti-adhesion, and tissue-regenerative properties, providing a promising solution for effective abdominal wall defect repair.
Chao Li, Peilin Li, Chang-Bing Zheng et al.
This paper proposes a composite control method that integrates an active fault-tolerant predictive control scheme and an event-triggered mechanism for networked multi-agent systems. The approach considers random communication constraints in the forward and feedback channels as well as actuator faults. At each time instant, the event trigger determines whether to send system outputs based on the current system state. A Kalman filter is then utilized to estimate both the system state and potential faults by incorporating system output information transmitted through the feedback channel. Concurrently, iterative predictions are performed according to the established system model. Furthermore, a predictive sequence of control inputs is generated through the designed control protocol. Leveraging timestamping technology, the system precisely applies the appropriate control commands to the actuator at designated moments. As a result, the proposed control method compensates for both random communication constraints and actuator faults while effectively reducing data transmission over the communication network. Finally, the proposed method is validated through numerical simulations.
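The two ingredients the abstract combines, an event trigger that gates transmissions and a Kalman filter that fuses whatever measurement does arrive, can be sketched for a scalar system. The dynamics, noise levels, and threshold below are invented, and the paper's fault estimation and predictive control protocol are omitted.

```python
import random

# Scalar sketch: transmit the measurement only when the innovation
# exceeds a threshold (event trigger); otherwise propagate the
# prediction. All constants are illustrative.
random.seed(1)
a, q, r, delta = 0.9, 0.01, 0.04, 0.3   # dynamics, noises, trigger threshold

x_true, x_hat, p = 1.0, 0.0, 1.0
sent = 0
for k in range(50):
    x_true = a * x_true + random.gauss(0, q ** 0.5)    # plant
    y = x_true + random.gauss(0, r ** 0.5)             # sensor
    x_pred, p_pred = a * x_hat, a * a * p + q          # time update
    if abs(y - x_pred) > delta:                        # event trigger fires
        sent += 1
        K = p_pred / (p_pred + r)                      # Kalman gain
        x_hat, p = x_pred + K * (y - x_pred), (1 - K) * p_pred
    else:
        x_hat, p = x_pred, p_pred                      # no transmission

print(f"transmitted {sent} of 50 samples")
```

The point of the trigger is visible in the count: only a fraction of the 50 samples are sent, yet the filter keeps tracking between transmissions by propagating its prediction.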
Abdelmoneim Ahmed Eltohamy, Shereen Aly Hussien Aly Abdou
This study investigates the moderating role of perceived utility on the relationship between the advertising mix and mobile app adoption in Egypt, an emerging market. Drawing on the Unified Theory of Acceptance and Use of Technology (UTAUT), the research explores how perceived utility, defined as the extent to which consumers believe a mobile app enhances their performance or provides value, influences the effectiveness of various advertising mixes in mobile app adoption. Using a quantitative research design, data were collected from 418 Egyptian consumers exposed to mobile app advertisements. The findings reveal that the advertising mix and perceived utility significantly impact mobile app adoption, with perceived utility as a positive moderator. Specifically, the study demonstrates that when consumers perceive higher utility in a mobile app, the effectiveness of the advertising mix in mobile adoption increases. This research contributes to the marketing literature by extending UTAUT to include the advertising mix as a determinant of technology adoption in emerging markets. It also provides actionable insights for marketers and policymakers, emphasising the importance of tailoring advertising strategies to enhance perceived utility and improve adoption rates in culturally and economically diverse contexts.
Víctor Mayoral-Vilches
The cybersecurity industry conflates "automated" and "autonomous" AI, creating dangerous misconceptions about system capabilities. Recent milestones like XBOW topping HackerOne's leaderboard showcase impressive progress, yet these systems remain fundamentally semi-autonomous, requiring human oversight. Drawing from robotics principles, where the distinction between automation and autonomy is well-established, I take inspiration from prior work and establish a 6-level taxonomy (Levels 0-5) distinguishing automation from autonomy in Cybersecurity AI. Current "autonomous" pentesters operate at Levels 3-4: they execute complex attack sequences but need human review for edge cases and strategic decisions. True Level 5 autonomy remains aspirational. Organizations deploying mischaracterized "autonomous" tools risk reducing oversight precisely when it is most needed, potentially creating new vulnerabilities. The path forward requires precise terminology, transparent capabilities disclosure, and human-AI partnership, not replacement.
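A 6-level scale like this maps naturally onto an ordered enumeration. The level names below are paraphrases invented for illustration, not the author's exact taxonomy labels.

```python
from enum import IntEnum

# Illustrative encoding of a 6-level automation-vs-autonomy scale in the
# spirit of the abstract; level names are paraphrases, not the paper's.
class AutonomyLevel(IntEnum):
    MANUAL = 0       # human performs all security tasks
    ASSISTED = 1     # tool suggests, human executes
    PARTIAL = 2      # tool executes scripted steps under supervision
    CONDITIONAL = 3  # tool runs attack sequences, human handles edge cases
    HIGH = 4         # human reviews only strategic decisions
    FULL = 5         # no human oversight required (aspirational)

def requires_human_oversight(level: AutonomyLevel) -> bool:
    """Per the abstract, everything below Level 5 still needs a human."""
    return level < AutonomyLevel.FULL

print(requires_human_oversight(AutonomyLevel.CONDITIONAL))  # → True
```

Encoding the levels as an ordered type makes the abstract's core claim checkable in tooling: any deployment declared below `FULL` must have a human review path.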
Ilya Kurinov, Miroslav Ivanov, Grzegorz Orzechowski et al.
Forestry forwarders play a central role in mechanized timber harvesting by picking up and moving logs from the felling site to a processing area or a secondary transport vehicle. Forwarder operation is challenging and physically and mentally exhausting for the operator, who must control the machine in remote areas for prolonged periods. Therefore, even partial automation of the process may reduce stress on the operator. This study continues previous research on applying reinforcement learning agents to automate the log handling process, extending the task from the grasping studied in earlier work to the full log loading operation. The resulting agent can automate the full loading procedure, from locating and grappling a log to transporting and delivering it to the forestry forwarder bed. To train the agent, a trailer-type forestry forwarder simulation model in NVIDIA's Isaac Gym and a virtual environment for a typical log loading scenario were developed. Trained with a curriculum learning approach, the agent may be a stepping stone towards broader application of reinforcement learning in forestry forwarder automation. The best-performing agent learned to grasp a log at a random position, starting from a random grapple position, and transport it to the bed with a 94% success rate.
Matouš Jelínek, Nadine Schlicker, Ewart de Visser
Calibrated trust in automated systems (Lee and See 2004) is critical for their safe and seamless integration into society. Users should only rely on a system recommendation when it is actually correct and reject it when it is factually wrong. One requirement to achieve this goal is an accurate trustworthiness assessment, ensuring that the user's perception of the system's trustworthiness aligns with its actual trustworthiness, allowing users to make informed decisions about the extent to which they can rely on the system (Schlicker et al. 2022). We propose six design guidelines to help designers optimize for accurate trustworthiness assessments, thus fostering ethical and responsible human-automation interactions. The proposed guidelines are derived from existing literature in various fields, such as human-computer interaction, cognitive psychology, automation research, user-experience design, and ethics. We are incorporating key principles from the field of pragmatics, specifically the cultivation of common ground (H. H. Clark 1996) and Gricean communication maxims (Grice 1975). These principles are essential for the design of automated systems because the user's perception of the system's trustworthiness is shaped by both environmental contexts, such as organizational culture or societal norms, and by situational context, including the specific circumstances or scenarios in which the interaction occurs (Hoff and Bashir 2015). Our proposed guidelines provide actionable insights for designers to create automated systems that make relevant trustworthiness cues available. This would ideally foster calibrated trust and more satisfactory, productive, and safe interactions between humans and automated systems. Furthermore, the proposed heuristics might work as a tool for evaluating to what extent existing systems enable users to accurately assess a system's trustworthiness.
Yutong Xin, Jimmy Xin, Gabriel Poesia et al.
Enabling more concise and modular proofs is essential for advancing formal reasoning using interactive theorem provers (ITPs). Since many ITPs, such as Rocq and Lean, use tactic-style proofs, learning higher-level custom tactics is crucial for proof modularity and automation. This paper presents a novel approach to tactic discovery, which leverages Tactic Dependence Graphs (TDGs) to identify reusable proof strategies across multiple proofs. TDGs capture logical dependencies between tactic applications while abstracting away irrelevant syntactic details, allowing for both the discovery of new tactics and the refactoring of existing proofs into more modular forms. We have implemented this technique in a tool called TacMiner and compare it against Peano, an anti-unification-based approach to tactic discovery. Our evaluation demonstrates that TacMiner learns 3x as many tactics as Peano and reduces the size of proofs by 26% across all benchmarks. Furthermore, our evaluation demonstrates the benefits of learning custom tactics for proof automation, allowing a state-of-the-art proof automation tool to achieve a 172% relative increase in success rate.
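The core idea of a tactic dependence graph, an edge from the tactic application that produces a hypothesis to the one that consumes it, can be sketched on a small invented proof script. The script and representation below are illustrative; TacMiner's real TDG construction abstracts far more syntactic detail.

```python
# Each step records (tactic name, hypotheses consumed, hypotheses produced).
# The proof script is invented for illustration.
proof = [
    ("intros",   [],             ["H1", "H2"]),
    ("destruct", ["H1"],         ["Ha", "Hb"]),
    ("rewrite",  ["H2"],         ["H2'"]),
    ("apply",    ["Ha", "H2'"],  []),
]

def build_tdg(steps):
    """Edge (i, j) means step j consumes a hypothesis produced by step i."""
    produced_by, edges = {}, set()
    for idx, (_, uses, makes) in enumerate(steps):
        for h in uses:
            edges.add((produced_by[h], idx))   # dependency edge
        for h in makes:
            produced_by[h] = idx
    return edges

print(sorted(build_tdg(proof)))  # → [(0, 1), (0, 2), (1, 3), (2, 3)]
```

Once dependencies are explicit, syntactically different proofs with the same dependency shape can be matched, which is what makes recurring subgraphs candidates for extraction as custom tactics.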
Lukas Laakmann, Seyyid A. Ciftci, Christian Janiesch
Robotic process automation (RPA) is a lightweight approach to automating business processes using software robots that emulate user actions at the graphical user interface level. While RPA has gained popularity for its cost-effective and timely automation of rule-based, well-structured tasks, its symbolic nature has inherent limitations when approaching more complex tasks currently performed by human agents. Machine learning concepts enabling intelligent RPA provide an opportunity to broaden the range of automatable tasks. In this paper, we conduct a literature review to explore the connections between RPA and machine learning and organize the joint concept of intelligent RPA into a taxonomy. Our taxonomy comprises two meta-characteristics, RPA-ML integration and RPA-ML interaction. Together, they span eight dimensions: architecture and ecosystem, capabilities, data basis, intelligence level, and technical depth of integration, as well as deployment environment, lifecycle phase, and user-robot relation.
Ren Qian, Xin Xiong, Jianhua Zhou et al.
EEG-based emotion recognition technology has made progress in recent years, but low model efficiency and loss of emotional information remain problems, and there is still room for improvement in recognition accuracy. To fully utilize EEG's emotional information and improve recognition accuracy while reducing computational costs, this paper proposes a Convolutional-Recurrent Hybrid Network with a dual-stream adaptive approach and an attention mechanism (CSA-SA-CRTNN). First, the model utilizes a CSAM module to assign corresponding weights to EEG channels. Then, an adaptive dual-stream convolutional-recurrent network (SA-CRNN and MHSA-CRNN) is applied to extract local spatial-temporal features. The extracted local features are then concatenated and fed into a temporal convolutional network with a multi-head self-attention mechanism (MHSA-TCN) to capture global information. Finally, the extracted EEG information is used for emotion classification. We conducted binary and ternary classification experiments on the DEAP dataset, achieving 99.26% and 99.15% accuracy for arousal and valence in binary classification and 97.69% and 98.05% in ternary classification; on the SEED dataset, we achieved an accuracy of 98.63%, surpassing related algorithms. Additionally, the model's efficiency is significantly higher than that of other models, achieving better accuracy with lower resource consumption.
Page 18 of 42,535