Results for "Automation"
Showing 20 of ~433475 results · from arXiv, DOAJ, Semantic Scholar
E. Deelman, K. Vahi, G. Juve et al.
David Romero, P. Bernus, O. Noran et al.
Robin Dehler, Michael Buchholz
Function offloading is a promising solution to address limitations in the computational capacity and available energy of Connected Automated Vehicles (CAVs) or other autonomous robots by distributing computational tasks between local and remote computing devices in the form of distributed services. This paper presents a generic function offloading framework that can be used to offload an arbitrary set of computational tasks, with a focus on autonomous driving. To provide flexibility, the function offloading framework is designed to incorporate different offloading decision-making algorithms and quality-of-service (QoS) requirements that can be adjusted to different scenarios or to the objectives of the CAVs. With a focus on applicability, we propose an efficient location-based approach, where the decision whether tasks are processed locally or remotely depends on the location of the CAV. We apply the proposed framework to the use case of service-oriented trajectory planning, where we offload the trajectory planning task of CAVs to a Multi-Access Edge Computing (MEC) server. The evaluation is conducted in both simulation and a real-world application. It demonstrates the potential of the function offloading framework to guarantee the QoS for trajectory planning while improving the computational efficiency of the CAVs. Moreover, the simulation results also show the adaptability of the framework to diverse scenarios involving simultaneous offloading requests from multiple CAVs.
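The location-based policy described in this abstract can be sketched very simply: process a task remotely only while the vehicle is inside a MEC server's coverage zone, otherwise fall back to local computation. The function names, the Euclidean-distance criterion, and the 300 m radius below are illustrative assumptions, not details taken from the paper.

```python
import math

def within_coverage(cav_pos, mec_pos, radius_m):
    """Euclidean distance check between a CAV and a MEC server (positions in meters)."""
    return math.hypot(cav_pos[0] - mec_pos[0], cav_pos[1] - mec_pos[1]) <= radius_m

def offload_decision(cav_pos, mec_pos, radius_m=300.0):
    """Return 'remote' while the CAV is inside the coverage zone, else 'local'."""
    return "remote" if within_coverage(cav_pos, mec_pos, radius_m) else "local"

print(offload_decision((100.0, 50.0), (0.0, 0.0)))   # inside the 300 m radius
print(offload_decision((400.0, 300.0), (0.0, 0.0)))  # outside the 300 m radius
```

A real policy would also fold in the QoS requirements the paper mentions (e.g. latency bounds), but the location check is the core of the decision rule.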
Qiheng He, Yuhan Shang, Yijun Dong et al.
Objective: This study aimed to enhance the Coma Recovery Scale-Revised (CRS-R) for disorders of consciousness (DoC) by developing a two-dimensional model differentiating cognition and motor function. Methods: We analyzed 124 DoC patients retrospectively and validated findings using five multicenter datasets (n = 420). CRS-R subscores were decomposed into Consciousness_x (awareness) and Consciousness_y (arousal/motor function) using Projective Non-negative Matrix Factorization. Logistic regression established diagnostic thresholds, evaluated by accuracy, precision, recall, and F1-score. Results: The model achieved high accuracy (0.94), precision (0.92), and recall (0.99). Patients with minimally conscious state (MCS) or emerged MCS showed significantly higher scores than vegetative state (VS) patients (p < 0.05). The four-quadrant framework revealed distinct clinical profiles: Quadrant I (high awareness/arousal) identified patients for cognitive rehabilitation; Quadrant II (low awareness/high arousal) suggested arousal-enhancing therapies; Quadrant III (low awareness/arousal) indicated VS requiring basic support; Quadrant IV (high awareness/low arousal) highlighted needs for sensorimotor integration. Conclusions: The two-dimensionally reduced representation of CRS-R scores maintains diagnostic accuracy while improving DoC classification. The four-quadrant model enables personalized interventions. Trial registration: Our study has been registered with the Chinese Clinical Trial Registry under registration number ChiCTR2400085855; the registration date is June 19, 2024.
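The abstract reports precision 0.92 and recall 0.99 but not the F1-score itself; the standard harmonic-mean formula lets a reader recover the implied value. This is just the textbook definition, not code from the study.

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.92, 0.99), 3))  # → 0.954
```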
Анастас К.В.
Modern scientific work is characterized by exponential growth in the volume of publications and data, which creates significant difficulties in systematizing, analyzing, and interpreting information. Under these conditions, artificial intelligence (AI) technologies are becoming a key tool for automating the scientific research process. The article examines current approaches to applying machine learning methods, deep neural networks, and natural language processing (NLP) to analyzing scientific literature, uncovering hidden patterns, generating hypotheses, and planning experimental work. Particular attention is paid to practical examples of applying AI in bioinformatics, chemistry, medicine, physics, and computer science, as well as to an analysis of limitations related to model interpretability, reliability of conclusions, and compliance with ethical standards. The article also discusses prospects for developing hybrid systems that support joint human-AI work, and opportunities for strengthening researchers' analytical competencies amid the digitalization of science.
P. Tavares, C. M. Costa, Luís Rocha et al.
The optimization of the information flow from the initial design through the several production stages plays a critical role in ensuring product quality while also reducing manufacturing costs. As such, in this article we present a cooperative welding cell for structural steel fabrication that leverages the Building Information Modeling (BIM) standards to automatically orchestrate the tasks allocated to a human operator and a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment to help the operator tack weld the beam attachments that will later be seam welded by the industrial robot. This ensures maximum flexibility during the beam assembly stage while also improving overall productivity and product quality, since the operator no longer needs to rely on error-prone measurement procedures and receives tasks through an immersive interface, relieving them of the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell, because all the necessary information is extracted from the Industry Foundation Classes (IFC), namely the CAD models and welding sections. This allows our 3D beam perception systems to correct placement errors or beam bending, which, coupled with our motion planning and welding pose optimization system, ensures that the robot performs its tasks without collisions and as efficiently as possible while maximizing welding quality.
Hoang Vu, Henrik Leopold, Han van der Aa
Many organizations strive to increase the level of automation in their business processes. While automation historically was mainly concerned with automating physical labor, current automation efforts mostly focus on automation in a digital manner, thus targeting work that is related to the interaction between humans and computers. This type of automation, commonly referred to as business process automation, has many facets. Yet, academic literature mainly focuses on Robotic Process Automation, a specific automation capability. Recognizing that leading vendors offer automation capabilities going well beyond that, we use this paper to develop a detailed understanding of business process automation in industry. To this end, we conduct a structured market analysis of the 18 predominant vendors of business process automation solutions as identified by Gartner. As a result, we provide a comprehensive overview of the business process automation capabilities currently offered by industrial vendors. We show which types and facets of automation exist and which aspects represent promising directions for the future.
Seth Benzell, Kyle Myers
An increasingly large number of experiments study the labor productivity effects of automation technologies such as generative algorithms. A popular question in these experiments relates to inequality: does the technology increase output more for high- or low-skill workers? The answer is often used to anticipate the distributional effects of the technology as it continues to improve. In this paper, we formalize the theoretical content of this empirical test, focusing on automation experiments as commonly designed. Worker-level output depends on a task-level production function, and workers are heterogeneous in their task-level skills. Workers perform a task themselves, or they delegate it to the automation technology. The inequality effect of improved automation depends on the interaction of two factors: ($i$) the correlation in task-level skills across workers, and ($ii$) workers' skills relative to the technology's capability. Importantly, the sign of the inequality effect is often non-monotonic -- as technologies improve, inequality may decrease then increase, or vice versa. Finally, we use data and theory to highlight cases when skills are likely to be positively or negatively correlated. The model generally suggests that the diversity of automation technologies will play an important role in the evolution of inequality.
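The delegation rule in this abstract has a compact reading: a worker performs each task at their own skill level or delegates it to the technology, so effective task performance is the maximum of skill and capability. The toy numbers below are invented to illustrate the non-monotonic inequality effect; the paper's production function is richer than this sketch.

```python
def worker_output(skills, capability):
    """Total output with optimal delegation: per task, take max(own skill, tech capability)."""
    return sum(max(s, capability) for s in skills)

high_skill = [0.9, 0.8]  # hypothetical task-level skills of a high-skill worker
low_skill = [0.4, 0.3]   # hypothetical task-level skills of a low-skill worker

# As the technology improves, the output gap first persists, then shrinks,
# and vanishes once the capability exceeds everyone's skills.
for cap in (0.2, 0.5, 1.0):
    gap = worker_output(high_skill, cap) - worker_output(low_skill, cap)
    print(f"capability={cap}: output gap = {gap:.1f}")
```

With positively correlated skills as above, improvement compresses the gap; with negatively correlated skills across tasks, the same exercise can widen it first, which is the sign-flipping behavior the paper formalizes.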
Koji Ochiai, Yuya Tahara-Arai, Akari Kato et al.
The automation of experiments in life sciences and chemistry has significantly advanced with the development of various instruments and AI technologies. However, achieving full laboratory automation, where experiments conceived by scientists are seamlessly executed in automated laboratories, remains a challenge. We identify the lack of automation in planning and operational tasks--critical human-managed processes collectively termed "care"--as a major barrier. Automating care is the key enabler for full laboratory automation. To address this, we propose the concept of self-maintainability (SeM): the ability of a laboratory system to autonomously adapt to internal and external disturbances, maintaining operational readiness akin to living cells. A SeM-enabled laboratory features autonomous recognition of its state, dynamic resource and information management, and adaptive responses to unexpected conditions. This shifts the planning and execution of experimental workflows, including scheduling and reagent allocation, from humans to the system. We present a conceptual framework for implementing SeM-enabled laboratories, comprising three modules--Requirement manager, Labware manager, and Device manager--and a Central manager. SeM not only enables scientists to execute envisioned experiments seamlessly but also provides developers with a design concept that drives the technological innovations needed for full automation.
Bo Fu, Mingjie Bi, Shota Umeda et al.
The increasing complexity of modern manufacturing, coupled with demand fluctuation, supply chain uncertainties, and product customization, underscores the need for manufacturing systems that can flexibly update their configurations and swiftly adapt to disturbances. However, current research falls short in providing a holistic reconfigurable manufacturing framework that seamlessly monitors system disturbances, optimizes alternative line configurations based on machine capabilities, and automates simulation evaluation for swift adaptations. This paper presents a dynamic manufacturing line reconfiguration framework to handle disturbances that result in operation time changes. The framework incorporates a system process digital twin for monitoring disturbances and triggering reconfigurations, a capability-based ontology model capturing available agent and resource options, a configuration optimizer generating optimal line configurations, and a simulation generation program initializing simulation setups and evaluating line configurations at approximately 400x real-time speed. A case study of a battery production line has been conducted to evaluate the proposed framework. In two implemented disturbance scenarios, the framework successfully recovers system throughput with limited resources, preventing the 26% and 63% throughput drops that would have occurred without a reconfiguration plan. The reconfiguration optimizer efficiently finds optimal solutions, taking an average of 0.03 seconds to find a reconfiguration plan for a manufacturing line with 51 operations and 40 available agents across 8 agent types.
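The capability-matching idea behind the reconfiguration optimizer can be illustrated with a toy greedy assignment: give each operation the fastest still-available agent whose capability set covers it. The agent names, capability sets, and times below are invented, and the paper's ontology-driven optimizer is far more elaborate than this sketch.

```python
def assign(operations, agents):
    """Greedy assignment of operations to agents; returns a plan dict or None if infeasible.

    agents: name -> (set of capabilities, operation time)
    """
    available = dict(agents)
    plan = {}
    for op in operations:
        candidates = [(t, name) for name, (caps, t) in available.items() if op in caps]
        if not candidates:
            return None  # no capable agent left for this operation
        t, name = min(candidates)  # fastest capable agent
        plan[op] = name
        del available[name]  # each agent handles one operation in this toy model
    return plan

agents = {
    "robot_A": ({"weld", "pick"}, 5.0),
    "robot_B": ({"weld"}, 3.0),
    "arm_C": ({"pick", "inspect"}, 4.0),
}
print(assign(["weld", "pick"], agents))  # → {'weld': 'robot_B', 'pick': 'arm_C'}
```

A real reconfiguration plan would optimize throughput over the whole line rather than greedily per operation, but the feasibility check (does any remaining agent have the capability?) is the same.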
Siyuan Liu, Yingchao Fan, Qi Hu et al.
Hyperspectral image (HSI) has more spectral information than conventional images, which helps to distinguish targets in a complex scene more accurately. However, HSI typically has a low spatial resolution, which limits its application scenarios. To achieve high-resolution HSI, we propose a spectral and spatial multiscale coupling fusion model (SSMSFuse) for hyperspectral and multispectral image (MSI). SSMSFuse couples the spatial information of MSI and the spectral information of HSI at multiple scales by means of a two-branch network structure, thus obtaining fused images with high spatial and spectral resolution. SSMSFuse consists of two branches, namely the spatial embedding network (Spa-Net) and the spectral embedding network (Spe-Net). Spa-Net is constructed using a multiscale convolutional neural network to better mine multilevel spatial features from MSI. Spe-Net is constructed using self-attention, which can model the long-distance spectral dependencies of HSI to better extract spectral information from HSI. Finally, to achieve interactive coupling of dual-branch information, we design a spatial–spectral guidance fusion block to fuse features at different scales and avoid loss of spatial and spectral details. Experiments are carried out on four public datasets, and the results show that the proposed method can effectively improve the objective indicators of the fusion results: on the CAVE dataset, the peak signal-to-noise ratio is increased by 1.36% and the root mean square error is reduced by 9.72%, and satisfactory subjective results are also obtained.
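The two fidelity metrics quoted in this abstract have simple definitions worth keeping in mind when reading such results: RMSE should decrease and PSNR should increase as fusion quality improves. The toy signals below are illustrative only.

```python
import math

def rmse(ref, test):
    """Root mean square error between two equal-length signals."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref))

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB, for signals scaled to [0, peak]."""
    return 10 * math.log10(peak ** 2 / rmse(ref, test) ** 2)

ref = [0.2, 0.4, 0.6, 0.8]
fused = [0.21, 0.39, 0.61, 0.79]  # each sample off by 0.01
print(round(rmse(ref, fused), 3))  # → 0.01
print(round(psnr(ref, fused), 1))  # → 40.0
```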
Joel Victor Dossa, Chiagoziem C. Ukwuoma, Dara Thomas et al.
This study investigates the nexus between ESG disclosure and firm performance using advanced machine learning models (MLs) to capture complex, non-linear interactions. Analyzing data from Chinese A-share firms (2012–2022), it employs Explainable AI (XAI) tools such as SHAP, heat maps, and Williams plots to enhance model transparency and interpretability. Among several models, the Extra Trees model demonstrated the best predictive performance, revealing that ESG disclosure positively correlates with firm performance, with environmental disclosure exerting the strongest influence. Policymakers are urged to promote standardized, transparent ESG disclosures, particularly focusing on environmental practices while addressing greenwashing to enhance credibility. Investors can prioritize firms with strong environmental practices and use predictive models to refine decision-making. Corporate managers are encouraged to embed sustainability into long-term strategies and utilize ML techniques for improved governance. The study contributes by showcasing the utility of MLs in exploring ESG-performance relationships, offering actionable insights for stakeholders, and providing a foundation for future research. Researchers are encouraged to investigate non-linear ESG impacts across diverse contexts, using broader samples and incorporating market-based measures and ESG rating agencies to improve generalizability. This approach advances understanding of ESG's role in driving firm performance while addressing methodological gaps.
Evariste Sindani, Simon Ntumba Badibanga, Pierre Kafunda Katalay et al.
This study proposes an integrated approach to digitalizing human resources (HR) in African public institutions by developing a performance optimization model. Based on five key variables—processing time, operational cost, service quality, degree of automation, and employee satisfaction—this model aims to enhance the overall efficiency of HR processes. The study is applied to the case of the National Office for Population Identification (ONIP) in the Democratic Republic of Congo and highlights substantial improvements in human resource management. Theoretically, the approach contributes to the digital transformation field through modeling, and practically, by offering a reproducible and adaptable framework for other public organizations with limited resources. Keywords: Digitalization, HR process optimization, ONIP, HR performance, HRIS.
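A model over the five variables named in this abstract could, in the simplest case, combine them into one weighted performance score. The indicator values, the equal weights, and the idea of a single weighted sum are all illustrative assumptions; the study's actual optimization model is not specified in the abstract.

```python
def performance_score(indicators, weights):
    """Weighted sum of normalized indicators, each scaled to [0, 1]."""
    assert set(indicators) == set(weights)
    return sum(indicators[k] * weights[k] for k in indicators)

# Hypothetical normalized values: 1.0 = best possible on each dimension.
indicators = {
    "processing_time": 0.7,
    "cost": 0.6,
    "quality": 0.8,
    "automation": 0.5,
    "satisfaction": 0.9,
}
weights = {k: 0.2 for k in indicators}  # equal weights summing to 1
print(round(performance_score(indicators, weights), 2))  # → 0.7
```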
Jan Kellershohn, Sebastian Dickler, Christian Jungbluth
“Bikeability” is a measure of a city’s suitability for a bicycle-based lifestyle. Cities are striving to increase the share of cyclists in their traffic to decrease congestion and increase sustainability, so bikeability is a relevant metric for measuring a city’s progress towards this goal. This study applies a previously developed programmatic bikeability model to calculate and compare bikeability indices for eight different cities. It was found that bikeability varies less between cities than their modal shares would suggest. The computed index correlates more strongly with metrics of city infrastructure quality than with existing bikeability studies. For this reason, this bikeability model is not suited as a replacement for existing indices but has to be evaluated separately. The analysis revealed a disparity between the availability of urban infrastructure, the level of satisfaction with that infrastructure, and its use as reflected in statistics. Possible solutions and options for further developing the model are discussed.
M. Hussein, B. Heijmen, D. Verellen et al.
Radiotherapy treatment planning of complex radiotherapy techniques, such as intensity modulated radiotherapy and volumetric modulated arc therapy, is a resource-intensive process requiring a high level of treatment planner intervention to ensure high plan quality. This can lead to variability in the quality of treatment plans and the efficiency in which plans are produced, depending on the skills and experience of the operator and available planning time. Within the last few years, there has been significant progress in the research and development of intensity modulated radiotherapy treatment planning approaches with automation support, with most commercial manufacturers now offering some form of solution. There is a rapidly growing number of research articles published in the scientific literature on the topic. This paper critically reviews the body of publications up to April 2018. The review describes the different types of automation algorithms, including the advantages and current limitations. Also included is a discussion on the potential issues with routine clinical implementation of such software, and highlights areas for future research.
Eugina Leung, Gabriele Paolacci, S. Puntoni
Automation is transforming many consumption domains, including everyday activities such as cooking or driving, as well as recreational activities like fishing or cycling. Yet little research in marketing examines consumer preferences for automated products. Automation often provides obvious consumption benefits, but six studies spanning a variety of product categories show that automation may not be desirable when identity motives are important drivers of consumption. Using both correlational and experimental designs, these studies demonstrate that people who strongly identify with a particular social category resist automated features that hinder the attribution of identity-relevant consumption outcomes to themselves. The findings have substantial theoretical implications for research on identity and technology, as well as managerial implications for targeting, product innovation, and communication.
D. Kolberg, Joshua Knobloch, D. Zühlke
Valerio De Stefano
Carlos Toxtli
The rapid advancement of Generative Artificial Intelligence (AI), such as Large Language Models (LLMs) and Multimodal Large Language Models (MLLM), has the potential to revolutionize the way we work and interact with digital systems across various industries. However, the current state of software automation, such as Robotic Process Automation (RPA) frameworks, often requires domain expertise and lacks visibility and intuitive interfaces, making it challenging for users to fully leverage these technologies. This position paper argues for the emerging area of Human-Centered Automation (HCA), which prioritizes user needs and preferences in the design and development of automation systems. Drawing on empirical evidence from human-computer interaction research and case studies, we highlight the importance of considering user perspectives in automation and propose a framework for designing human-centric automation solutions. The paper discusses the limitations of existing automation approaches, the challenges in integrating AI and RPA, and the benefits of human-centered automation for productivity, innovation, and democratizing access to these technologies. We emphasize the importance of open-source solutions and provide examples of how HCA can empower individuals and organizations in the era of rapidly progressing AI, helping them remain competitive. The paper also explores pathways to achieve more advanced and context-aware automation solutions. We conclude with a call to action for researchers and practitioners to focus on developing automation technologies that adapt to user needs, provide intuitive interfaces, and leverage the capabilities of high-end AI to create a more accessible and user-friendly future of automation.
Briony Forsberg, Henry Williams, Bruce MacDonald et al.
In this study, state-of-the-art unsupervised detection models were evaluated for the purpose of automated anomaly inspection of wool carpets. A custom dataset of four unique types of carpet textures was created to thoroughly test the models and their robustness in detecting subtle anomalies in complex textures. Due to the requirements of an inline inspection system in a manufacturing use case, the metrics of importance in this study were accuracy in detecting anomalous areas, the number of false detections, and the inference times of each model for real-time performance. Of the evaluated models, the student-teacher network based methods were found on average to yield the highest detection accuracy and lowest false detection rates. When trained on a multi-class dataset, the models were found to yield comparable if not better results than single-class training. Finally, in terms of detection speed, with the exception of the generative model, all other evaluated models were found to have comparable inference times on a GPU, with an average of 0.16 s per image. On a CPU, most of these models typically produced results in between 1.5 and 2 times the respective GPU inference times.
Page 6 of 21674