Boston University, David Castañón
Stochastic processes are probabilistic models of data streams such as speech, audio and video signals, stock market prices, and measurements of physical phenomena by digital sensors such as medical instruments, GPS receivers, or seismographs. A solid understanding of the mathematical basis of these models is essential for understanding phenomena and processing information in many branches of science and engineering including physics, communications, signal processing, automation, and structural dynamics.
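The notion of a stochastic process as a model of a data stream can be made concrete with a minimal sketch. Assuming nothing beyond a standard Gaussian random walk, one of the simplest stochastic-process models of a drifting sensor measurement:

```python
import numpy as np

# A Gaussian random walk: each sample is the previous sample plus an
# independent normal increment. The step size sigma and length n are
# illustrative choices, not values from the text.
rng = np.random.default_rng(seed=0)

def random_walk(n_steps: int, sigma: float = 1.0) -> np.ndarray:
    """Return a sample path X_t = X_{t-1} + N(0, sigma^2), with X_0 drawn too."""
    steps = rng.normal(0.0, sigma, size=n_steps)
    return np.cumsum(steps)

path = random_walk(1000)
# Because the increments are independent, the variance of X_t grows
# linearly in t -- the hallmark of diffusion-like measurement drift.
```

Each call produces a different sample path from the same probabilistic model, which is exactly the distinction between a stochastic process and a single deterministic signal.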
M. Ouzzani, Hossam M. Hammady, Z. Fedorowicz
et al.
Background: Synthesis of multiple randomized controlled trials (RCTs) in a systematic review can summarize the effects of individual outcomes and provide numerical answers about the effectiveness of interventions. Filtering of searches is time-consuming, and no single method fulfills the principal requirements of speed with accuracy. Automation of systematic reviews is driven by the need to expedite the availability of current best evidence for policy and clinical decision-making. We developed Rayyan (http://rayyan.qcri.org), a free web and mobile app that helps expedite the initial screening of abstracts and titles through semi-automation while incorporating a high level of usability. For the beta-testing phase, we used two published Cochrane reviews in which the included studies had been selected manually. Their searches, with 1030 and 273 records respectively, were uploaded to Rayyan, and different features of Rayyan were tested on these two reviews. We also surveyed Rayyan's users and collected feedback through a built-in feature.
Results: Pilot testing of Rayyan focused on usability, accuracy against manual methods, and the added value of the prediction feature. The "taster" review (273 records) allowed a quick overview of Rayyan for early comments on usability. The second review (1030 records) required several iterations to identify the previously identified 11 trials. The "suggestions" and "hints," based on the "prediction model," appeared as testing progressed beyond five included studies. Post-rollout user experiences and a reflexive response by the developers enabled real-time modifications and improvements. Survey respondents reported 40% average time savings when using Rayyan compared to other tools, with 34% reporting more than 50% time savings. In addition, around 75% of respondents identified screening and labeling studies, together with collaborating on reviews, as the two most important features of Rayyan. As of November 2016, Rayyan had more than 2000 users from over 60 countries conducting hundreds of reviews totaling more than 1.6M citations. Feedback from users, obtained mostly through the app web site and a recent survey, has highlighted the ease of exploring searches, the time saved, and the simplicity of sharing and comparing include-exclude decisions. The strongest features of the app, identified and reported in user feedback, were its ability to help with screening and collaboration, as well as the time savings it affords users.
Conclusions: Rayyan is responsive and intuitive to use, with significant potential to lighten the load of reviewers.
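As an illustration of what such a "prediction model" might do in principle, consider the sketch below. Rayyan's actual algorithm is not public; this word log-odds scorer, and its toy titles, are purely an assumption about how labeled include/exclude decisions could rank the remaining records.

```python
from collections import Counter
import math

# Learn per-word log-odds from the reviewer's first include/exclude
# decisions, then score unlabeled titles: higher = more "include"-like.
# Smoothing constant alpha avoids log(0) for unseen words.
def train_log_odds(included, excluded, alpha=1.0):
    inc = Counter(w for t in included for w in t.lower().split())
    exc = Counter(w for t in excluded for w in t.lower().split())
    vocab = set(inc) | set(exc)
    n_inc, n_exc = sum(inc.values()), sum(exc.values())
    return {w: math.log((inc[w] + alpha) / (n_inc + alpha * len(vocab)))
             - math.log((exc[w] + alpha) / (n_exc + alpha * len(vocab)))
            for w in vocab}

def score(title, log_odds):
    return sum(log_odds.get(w, 0.0) for w in title.lower().split())

# Toy labels standing in for a reviewer's early decisions.
included = ["randomized trial of drug A", "controlled trial outcomes"]
excluded = ["mouse model pharmacokinetics", "in vitro assay of compound B"]
lo = train_log_odds(included, excluded)
ranked = sorted(["trial of drug C", "in vitro screen"],
                key=lambda t: score(t, lo), reverse=True)
```

The point is only the workflow: a handful of decisions train a scorer that surfaces likely includes first, which is where the reported time savings would come from.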
Abstract: We discuss the theoretical bases that underpin the automation of the computation of tree-level and next-to-leading order cross sections, of their matching to parton shower simulations, and of the merging of matched samples that differ by light-parton multiplicities. We present a computer program, MadGraph5_aMC@NLO, capable of handling all these computations (parton-level fixed order, shower-matched, merged) in a unified framework whose defining features are flexibility, a high level of parallelisation, and human intervention limited to input physics quantities. We demonstrate the potential of the program by presenting selected phenomenological applications relevant to the LHC and to a 1-TeV e+e− collider. While next-to-leading order results are restricted to QCD corrections to SM processes in the first public version, we show that, from the user's viewpoint, no changes are to be expected in the case of corrections due to any given renormalisable Lagrangian, and that their implementation is well under way.
Ranjan Sapkota, Konstantinos I. Roumeliotis, Manoj Karkee
This review critically distinguishes between AI Agents and Agentic AI, offering a structured conceptual taxonomy, application mapping, and analysis of opportunities and challenges to clarify their divergent design philosophies and capabilities. We begin by outlining the search strategy and foundational definitions, characterizing AI Agents as modular systems driven and enabled by large language models (LLMs) and large image models (LIMs) for task-specific automation. Generative AI is positioned as a precursor providing the foundation, with AI agents advancing through tool integration, prompt engineering, and reasoning enhancements. We then characterize Agentic AI systems, which, in contrast to AI Agents, represent a paradigm shift marked by multi-agent collaboration, dynamic task decomposition, persistent memory, and coordinated autonomy. Through a chronological evaluation of architectural evolution, operational mechanisms, interaction styles, and autonomy levels, we present a comparative analysis across both AI agent and agentic AI paradigms. Application domains enabled by AI Agents, such as customer support, scheduling, and data summarization, are then contrasted with Agentic AI deployments in research automation, robotic coordination, and medical decision support. We further examine unique challenges in each paradigm, including hallucination, brittleness, emergent behavior, and coordination failure, and propose targeted solutions such as ReAct loops, retrieval-augmented generation (RAG), automation coordination layers, and causal modeling. This work aims to provide a roadmap for developing robust, scalable, and explainable AI-driven systems.
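Of the mitigation patterns named above, the ReAct loop is the most mechanical to illustrate. The sketch below is a minimal, assumption-laden rendering in which the LLM is replaced by a hard-coded stub so the reason-act-observe control flow is runnable; the tool registry and prompt format are inventions for illustration only.

```python
# A ReAct-style loop: the model alternates between emitting an Action
# (a tool call) and receiving an Observation, until it emits an Answer.
def fake_llm(history):
    # Stub policy: look something up once, then answer. A real agent
    # would call an LLM over the accumulated history instead.
    if not any(step.startswith("Observation") for step in history):
        return "Action: lookup[capital of France]"
    return "Answer: Paris"

TOOLS = {"lookup": lambda q: "Paris" if "France" in q else "unknown"}

def react(question, llm, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        out = llm(history)
        if out.startswith("Answer:"):
            return out.removeprefix("Answer:").strip()
        # Parse "Action: tool[argument]" and append the tool's observation.
        tool, arg = out.removeprefix("Action: ").rstrip("]").split("[", 1)
        history.append(f"Observation: {TOOLS[tool](arg)}")
    return None  # step budget exhausted without an answer

result = react("What is the capital of France?", fake_llm)
```

Grounding each answer in an explicit Observation is what makes the pattern a hallucination mitigation: the model's claims are tied to tool outputs rather than free generation.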
Autonomous Industrial Cyber-Physical Systems (ICPS) represent a future vision in which industrial systems achieve full autonomy, integrating physical processes seamlessly with communication, computing, and control technologies while holistically embedding intelligence. Cloud-Fog Automation is a recently proposed, digitalized industrial automation reference architecture. This architecture is a fundamental paradigm shift from the traditional International Society of Automation (ISA)-95 model, intended to accelerate the convergence and synergy of communication, computing, and control towards a fully autonomous ICPS. With the deployment of new wireless technologies that enable almost-deterministic ultra-reliable low-latency communications, the joint design of optimal control and computing has become increasingly important in modern ICPS. It is also imperative that system-wide cyber-physical security is critically enforced. Despite recent advancements in the field, there are still significant research gaps and open technical challenges. Therefore, a deliberate rethink in co-designing and synergizing communications, computing, and control (which we term "3C co-design") is required. In this paper, we position Cloud-Fog Automation with 3C co-design as the new paradigm to realize the vision of autonomous ICPS. We articulate the state of the art and future directions in the field, and specifically discuss how goal-oriented communication, virtualization-empowered computing, and Quality of Service (QoS)-aware control can drive Cloud-Fog Automation towards a fully autonomous ICPS, while accounting for system-wide cyber-physical security.
Cesar U. Solis, Jorge Morales, Carlos M. Montelongo
This work establishes a simple algorithm to recover an information vector from a predefined database that is available at all times. The information analyzed may be incomplete, damaged, or corrupted. The algorithm is inspired by Hopfield Neural Networks (HNN), which allow the recursive reconstruction of an information vector through an energy-minimizing optimization process; this paper, however, presents a procedure that generates results in a single iteration. Images were chosen as the information-recovery application used to build the information vectors. In addition, a filter is added to the algorithm to focus on the most important information when reconstructing data, allowing it to work with damaged or incomplete vectors without losing its non-iterative character. A brief theoretical introduction and a numerical validation of information recovery are presented using an example database containing 40 images.
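For context, classical Hopfield recall with Hebbian weights can itself be truncated to a single synchronous update, which conveys the one-iteration idea; the authors' actual procedure and filter differ from this minimal sketch, and the patterns below are toy data.

```python
import numpy as np

def store(patterns):
    """Hebbian weight matrix from +/-1 patterns, with zero diagonal."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall_once(W, probe):
    """A single synchronous update x = sign(W @ probe), no further iteration."""
    x = np.sign(W @ probe)
    x[x == 0] = 1  # break exact ties deterministically
    return x

# Two stored patterns; the probe is the first pattern with one bit flipped.
patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = store(patterns)
corrupted = np.array([1, -1, 1, -1, 1, 1])  # last entry corrupted
restored = recall_once(W, corrupted)        # recovers the first pattern
```

With well-separated patterns and mild corruption, one synchronous step already lands on the stored attractor, which is why a non-iterative variant is plausible at all.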
Gonzalo Carracelas, John Hornbuckle, Carlos Ballester
Remote sensing tools have been proposed to assist with rice crop monitoring but have been developed and validated on ponded rice. This two-year study, conducted on a commercial rice farm with irrigation automation technology, aimed to (i) understand how canopy reflectance differs between high-yielding ponded and aerobic rice, (ii) validate the feasibility of using the squared simplified canopy chlorophyll content index (SCCCI<sup>2</sup>) for N uptake estimates, and (iii) explore the SCCCI<sup>2</sup> and similar chlorophyll-sensitive indices for grain quality monitoring. Multispectral images were collected from an unmanned aerial vehicle during both rice-growing seasons. Above-ground biomass and nitrogen (N) uptake were measured at panicle initiation (PI). The performance of previously published single-vegetation-index models in estimating rice N uptake was assessed. Yield and grain quality were determined at harvest. Results showed that canopy reflectance in the visible and near-infrared regions differed between aerobic and ponded rice early in the growing season. Chlorophyll-sensitive indices showed lower values in aerobic rice than in ponded rice at PI, despite similar yields at harvest. The SCCCI<sup>2</sup> model (RMSE = 20.52, Bias = −6.21 kg N ha<sup>−1</sup>, and MAPE = 11.95%) outperformed the other models assessed. The SCCCI<sup>2</sup>, squared normalized difference red edge index, and chlorophyll green index correlated at PI with the percentage of cracked grain, immature grain, and quality score, suggesting that grain milling quality parameters could be associated with N uptake at PI. This study highlights canopy reflectance differences between high-yielding aerobic (averaging 15 Mg ha<sup>−1</sup>) and ponded rice at key phenological stages and confirms the validity of a single-vegetation-index model based on the SCCCI<sup>2</sup> for N uptake estimates in ponded and non-ponded rice crops.
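As a point of reference, a common formulation of the simplified canopy chlorophyll content index is the ratio of the normalized difference red edge index (NDRE) to the NDVI. Assuming that formulation (the study's exact band definitions are not reproduced here, and the reflectances below are illustrative), the squared index can be computed as:

```python
# SCCCI^2 from multispectral band reflectances, assuming SCCCI = NDRE / NDVI.
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized difference red edge index (chlorophyll-sensitive)."""
    return (nir - red_edge) / (nir + red_edge)

def sccci_squared(nir, red, red_edge):
    return (ndre(nir, red_edge) / ndvi(nir, red)) ** 2

# Illustrative reflectances for a dense-canopy pixel (not study data):
value = sccci_squared(nir=0.45, red=0.05, red_edge=0.30)
```

Normalizing NDRE by NDVI is intended to reduce the influence of canopy cover, so the squared ratio tracks chlorophyll (and hence N) rather than biomass alone.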
The statistical retrieval of atmospheric parameters is strongly affected by the accuracy of the simulated brightness temperatures (BTs) derived from the radiative transfer model. However, it is challenging to further improve a physics-based radiative transfer model (RTM) built on the physical mechanisms of wave transmission through the atmosphere. We develop a deep neural network-based RTM (DNN-based RTM) to calculate simulated BTs for the Microwave Temperature Sounder-II onboard the Fengyun-3D satellite under different weather conditions. The DNN-based RTM is compared in detail with the physics-based RTM for retrieving atmospheric temperature profiles with a statistical retrieval scheme. Compared to the physics-based RTM, the DNN-based RTM achieves higher accuracy in the simulated BTs and enables the statistical retrieval scheme to retrieve temperature profiles more accurately under clear, cloudy, and rainy sky conditions. Owing to its ability to simulate microwave observations more accurately, the DNN-based RTM is valuable for the theoretical study of microwave remote sensing and for the application of passive microwave observations.
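The emulator idea can be sketched as a small fully connected network mapping an atmospheric state vector to per-channel brightness temperatures. The layer sizes, random weights, input levels, and 13-channel output below are placeholders, not the architecture or trained weights from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(x, layers):
    """Forward pass of a fully connected network with tanh hidden layers."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)  # nonlinearity on hidden layers only
    return x

# Placeholder dimensions: 40 vertical levels in, 13 sounder channels out.
n_levels, n_hidden, n_channels = 40, 64, 13
layers = [(rng.normal(0, 0.1, (n_levels, n_hidden)), np.zeros(n_hidden)),
          (rng.normal(0, 0.1, (n_hidden, n_channels)), np.zeros(n_channels))]

profile = rng.normal(size=n_levels)  # standardized atmospheric state vector
bts = mlp_forward(profile, layers)   # simulated brightness temperatures
```

In practice such a network would be trained on pairs of observed (or physically simulated) profiles and BTs; the appeal is that a fitted network can absorb effects that are hard to add to a hand-built physics model.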
Ahnika Kline, Ana G. Cobián Güemes, Jennifer Yore
et al.
The resurgence of phage therapy in Western societies has been a direct response to recent increases in antimicrobial resistance (AMR) that have ravaged many societies. While phage therapy as a concept has been around for over 100 years, it was largely displaced by antibiotics owing to their relative ease of use and the predictability of their spectrum of activity. Now that antibiotics have become less reliable due to greater antibiotic resistance and microbiome disruption, phage therapy has once again become a viable and promising alternative, but it is not without its challenges. Much as with the development of antibiotics, the deployment of phage therapeutics will create a simultaneous need for diagnostics in the clinical laboratory. This review provides an overview of the current challenges to widespread adoption of phage therapy, with a focus on adoption in the clinical diagnostic laboratory. Current barriers include a lack of standard methodology and quality controls for phage susceptibility testing and selection, the absence of phage-antibiotic synergy testing, and the absence of standard methods to assay phage activity on biofilms. Additionally, there are a number of lab-specific administrative and regulatory barriers to widespread phage therapy adoption, including the need for pharmacokinetic (PK) and pharmacodynamic (PD) assays, methods to account for changes in phages after passaging, an absence of regulatory guidance on what will be required for agency approval of phages and how broadly that approval will apply, and the increased need for lab personnel or automation to handle the work of testing large phage libraries against bacterial isolates.
This study introduces an innovative approach to automating Cyber Threat Intelligence (CTI) processes in industrial environments by leveraging Microsoft's AI-powered security technologies. Historically, CTI has relied heavily on manual methods for collecting, analyzing, and interpreting data from various sources such as threat feeds, security logs, and dark web forums, a process prone to inefficiencies, especially when rapid information dissemination is critical. By employing the capabilities of GPT-4o and advanced one-shot fine-tuning techniques for large language models, our research delivers a novel CTI automation solution. The proposed architecture reduces manual effort while maintaining precision in the generated CTI reports. This research highlights the transformative potential of AI-driven technologies to enhance both the speed and accuracy of CTI while reducing demands on experts, offering a vital advantage in today's dynamic threat landscape.
Mutahira Khalid, Raihana Rahman, Asim Abbas
et al.
Knowledge graphs (KGs) serve as powerful tools for organizing and representing structured knowledge. While their utility is widely recognized, challenges persist in their automation and completeness. Despite efforts in automation and the utilization of expert-created ontologies, gaps in connectivity remain prevalent within KGs. In response to these challenges, we propose an innovative approach termed "Medical Knowledge Graph Automation (M-KGA)". M-KGA leverages user-provided medical concepts and enriches them semantically using BioPortal ontologies, thereby enhancing the completeness of knowledge graphs through the integration of pre-trained embeddings. Our approach introduces two distinct methodologies for uncovering hidden connections within the knowledge graph: a cluster-based approach and a node-based approach. Through rigorous testing involving 100 frequently occurring medical concepts in Electronic Health Records (EHRs), our M-KGA framework demonstrates promising results, indicating its potential to address the limitations of existing knowledge graph automation techniques.
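The node-based idea can be illustrated with a minimal sketch: propose a hidden edge whenever two concepts' pre-trained embeddings are close in cosine similarity. The toy three-dimensional vectors and the 0.8 threshold below are assumptions for illustration, not M-KGA's actual embeddings or scoring.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings standing in for pre-trained concept vectors.
embeddings = {
    "hypertension": [0.9, 0.1, 0.2],
    "high blood pressure": [0.85, 0.15, 0.25],
    "fracture": [0.1, 0.9, 0.0],
}

def suggest_edges(embeddings, threshold=0.8):
    """Propose an edge for every concept pair above the similarity threshold."""
    names = sorted(embeddings)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if cosine(embeddings[a], embeddings[b]) >= threshold]

edges = suggest_edges(embeddings)
```

Here the two synonymous concepts are linked while the unrelated one is not, which is the sense in which embedding similarity can fill connectivity gaps that ontology curation missed.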
Teleoperation is a popular solution for remotely supporting highly automated vehicles through a human remote operator whenever a disengagement of the automated driving system occurs. The remote operator wirelessly connects to the vehicle and resolves the disengagement by supporting or substituting automated driving functions, thereby enabling the vehicle to resume automation. There are different approaches to supporting automated driving functions at various levels, commonly known as teleoperation concepts. A variety of teleoperation concepts is described in the literature, yet there has been no comprehensive and structured comparison of these concepts, and it is not clear which subset of teleoperation concepts is suitable for safe and efficient remote support of highly automated vehicles across a broad spectrum of disengagements. The following work establishes a basis for comparing teleoperation concepts through a literature overview of automated vehicle disengagements, of previous studies comparing teleoperation concepts, and of the metrics used to evaluate teleoperation performance. An evaluation of the teleoperation concepts is carried out in an expert workshop, comparing different teleoperation concepts using a selection of automated vehicle disengagement scenarios and metrics. Based on the workshop results, a set of teleoperation concepts is derived that can address a wide variety of automated vehicle disengagements in a safe and efficient way.
Luke Strickland, Simon Farrell, Micah K. Wilson
et al.
Abstract: In a range of settings, human operators make decisions with the assistance of automation, the reliability of which can vary depending upon context. Currently, the processes by which humans track the level of reliability of automation are unclear. In the current study, we test cognitive models of learning that could potentially explain how humans track automation reliability. We fitted several alternative cognitive models to a series of participants' judgements of automation reliability observed in a maritime classification task in which participants were provided with automated advice. We examined three experiments including eight between-subjects conditions and 240 participants in total. Our results favoured a two-kernel delta-rule model of learning, which specifies that humans learn by prediction error, and respond according to a learning rate that is sensitive to environmental volatility. However, we found substantial heterogeneity in learning processes across participants. These outcomes speak to the learning processes underlying how humans estimate automation reliability and thus have implications for practice.
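The delta-rule mechanism favoured by the study can be sketched directly: a reliability estimate is nudged toward each observed outcome (1 = advice correct, 0 = incorrect) by a learning-rate-scaled prediction error, and a two-kernel variant mixes a fast and a slow learner so the estimate can respond to volatility. The rates and mixing weight below are illustrative choices, not fitted parameter values from the paper.

```python
def delta_rule(outcomes, lr, estimate=0.5):
    """Track P(automation correct) by prediction-error updates."""
    trace = []
    for o in outcomes:
        estimate += lr * (o - estimate)  # delta rule: move toward outcome
        trace.append(estimate)
    return trace

def two_kernel(outcomes, lr_fast=0.3, lr_slow=0.02, w_fast=0.5):
    """Mix a fast and a slow delta-rule learner (illustrative two-kernel form)."""
    fast = delta_rule(outcomes, lr_fast)
    slow = delta_rule(outcomes, lr_slow)
    return [w_fast * f + (1 - w_fast) * s for f, s in zip(fast, slow)]

# One correct/incorrect outcome per trial of automated advice.
outcomes = [1, 1, 0, 1, 1, 1, 0, 1]
estimates = two_kernel(outcomes)
```

The fast kernel captures quick revisions after surprising errors, the slow kernel the long-run base rate; their mixture is one simple way a learner can appear volatility-sensitive.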
As the share of renewable energy generation continues to increase, the new-type power system exhibits the characteristics of coordinated operation between the main grid, distribution networks, and microgrids. The microgrid is primarily concerned with achieving self-balancing between power sources, the network, loads, and storage. In decentralized multi-microgrid (MMG) access scenarios, the aggregation of distributed energy within a region enables the unified optimization of scheduling, which improves regional energy self-sufficiency while mitigating the impact and risks of distributed energy on grid operations. However, the cooperative operation of MMGs involves interactions among various stakeholders, and the absence of a reasonable operational mechanism can result in low energy utilization, uneven resource allocation, and other issues. Thus, designing an effective MMG operation strategy that balances the interests of all stakeholders has become a key area of focus in the industry. This paper examines the definition and structure of MMGs, analyzes their current operational challenges, compiles existing research methods and practical experiences, explores synergistic operational mechanisms and strategies for MMGs under different transaction models, and puts forward prospects for future research directions.
The recent popularity of Large Language Models (LLMs) has opened countless possibilities for automating numerous AI tasks by connecting LLMs to various domain-specific models or APIs, where LLMs serve as dispatchers while domain-specific models or APIs are action executors. Despite the vast number of domain-specific models/APIs, they still struggle to comprehensively cover the highly diverse automation demands arising in the interaction between humans and User Interfaces (UIs). In this work, we build a multimodal model to ground natural language instructions in given UI screenshots as a generic UI task automation executor. This metadata-free grounding model, consisting of a visual encoder and a language decoder, is first pretrained on well-studied document understanding tasks and then learns to decode spatial information from UI screenshots in a promptable way. To facilitate the exploitation of image-to-text pretrained knowledge, we follow the pixel-to-sequence paradigm to predict geometric coordinates in a sequence of tokens using a language decoder. We further propose an innovative Reinforcement Learning (RL) based algorithm to supervise the tokens in such sequences jointly with visually semantic metrics, which effectively strengthens the spatial decoding capability of the pixel-to-sequence paradigm. Extensive experiments demonstrate that our proposed reinforced UI instruction grounding model outperforms state-of-the-art methods by a clear margin and shows potential as a generic UI task automation API.
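The pixel-to-sequence idea of emitting geometric coordinates as tokens can be sketched as a simple quantization scheme: a click point on a screenshot is binned into discrete coordinate tokens that a language decoder could emit, and decoded back to approximate pixels. The bin count and token format below are assumptions for illustration, not the paper's actual vocabulary.

```python
N_BINS = 1000  # number of discrete positions per axis (assumed)

def to_tokens(x, y, width, height):
    """Quantize a pixel coordinate into two coordinate-token strings."""
    bx = min(int(x / width * N_BINS), N_BINS - 1)
    by = min(int(y / height * N_BINS), N_BINS - 1)
    return [f"<x_{bx}>", f"<y_{by}>"]

def from_tokens(tokens, width, height):
    """Decode coordinate tokens back to (approximate) pixel coordinates."""
    bx = int(tokens[0].removeprefix("<x_").rstrip(">"))
    by = int(tokens[1].removeprefix("<y_").rstrip(">"))
    # Use the bin center, so the round trip is accurate to half a bin.
    return ((bx + 0.5) / N_BINS * width, (by + 0.5) / N_BINS * height)

tokens = to_tokens(640, 360, width=1280, height=720)
x, y = from_tokens(tokens, width=1280, height=720)
```

Because coordinates become ordinary tokens, the same language decoder that produces text can produce locations, which is what lets image-to-text pretraining transfer to spatial grounding.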
Emerging vehicle automation and communication systems (VACS) may contribute to the improvement of vehicle travel time and the mitigation of motorway traffic congestion on the basis of appropriate control strategies. This work considers the possibility that automated, or semi-automated, vehicles are equipped with devices that perform (or recommend) lane-changing tasks. The lane-changing strategy MOBIL (minimizing overall braking induced by lane changing) has been chosen for its simplicity and versatility, as well as for the reduced number of parameters that need to be specified (namely, the politeness factor and the threshold). A wide set of simulations has been performed in which MOBIL was implemented within the microscopic traffic simulator Aimsun for a calibrated motorway network representing a stretch of motorway A12 in the Netherlands. The simulations revealed the impact that the choice of different parameter values has on the travel times of different vehicles, and also allowed their behaviour to be analysed under different traffic conditions (with and without traffic congestion).
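MOBIL's incentive criterion, as introduced by Kesting, Treiber, and Helbing, can be sketched directly from the two parameters named above: a lane change is advantageous when the candidate's own acceleration gain, plus a politeness-weighted sum of the acceleration changes imposed on the new and old followers, exceeds the switching threshold, subject to a safety limit on the new follower's braking. The accelerations below are illustrative inputs that would normally come from a car-following model such as the IDM, not values computed here.

```python
def mobil_incentive(acc_self_after, acc_self_before,
                    acc_new_follower_after, acc_new_follower_before,
                    acc_old_follower_after, acc_old_follower_before,
                    politeness=0.5, threshold=0.1, b_safe=4.0):
    """Return True if a lane change satisfies MOBIL's safety and incentive tests.

    All accelerations are in m/s^2; "after" means after the hypothetical change.
    """
    # Safety criterion: the new follower must not brake harder than b_safe.
    if acc_new_follower_after < -b_safe:
        return False
    own_gain = acc_self_after - acc_self_before
    imposed = ((acc_new_follower_after - acc_new_follower_before)
               + (acc_old_follower_after - acc_old_follower_before))
    # Incentive criterion: own gain plus politeness-weighted imposed change.
    return own_gain + politeness * imposed > threshold

# Illustrative scenario: the changer gains 1.0 m/s^2, followers lose little.
change = mobil_incentive(1.2, 0.2, -0.3, -0.1, 0.1, 0.0)
```

The politeness factor interpolates between purely egoistic changes (politeness 0) and changes that also weigh the disadvantage to surrounding traffic, which is precisely the parameter sensitivity the simulations explore.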