The effort devoted to hand-crafting neural network image classifiers has motivated the use of architecture search to discover them automatically. Although evolutionary algorithms have been repeatedly applied to neural network topologies, the image classifiers thus discovered have remained inferior to human-crafted ones. Here, we evolve an image classifier—AmoebaNet-A—that surpasses hand-designs for the first time. To do this, we modify the tournament selection evolutionary algorithm by introducing an age property to favor the younger genotypes. Matching size, AmoebaNet-A has comparable accuracy to current state-of-the-art ImageNet models discovered with more complex architecture-search methods. Scaled to larger size, AmoebaNet-A sets a new state-of-the-art 83.9% top-1 / 96.6% top-5 ImageNet accuracy. In a controlled comparison against a well-known reinforcement learning algorithm, we give evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search. This is relevant when fewer compute resources are available. Evolution is, thus, a simple method to effectively discover high-quality architectures.
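The age-based tournament selection described above (often called aging or regularized evolution) can be sketched in a few lines. The function names and toy parameters below are illustrative assumptions, not the paper's implementation; the key point is that each cycle removes the oldest genotype rather than the worst one.

```python
import random

def regularized_evolution(evaluate, mutate, random_arch,
                          cycles=100, population_size=20, sample_size=5):
    """Tournament selection with an age property: instead of removing the
    worst genotype each cycle, always remove the oldest one."""
    population = []  # FIFO list: index 0 is the oldest genotype
    for _ in range(population_size):
        arch = random_arch()
        population.append((arch, evaluate(arch)))
    best = max(population, key=lambda p: p[1])

    for _ in range(cycles):
        sample = random.sample(population, sample_size)
        parent = max(sample, key=lambda p: p[1])      # tournament winner
        child = mutate(parent[0])
        population.append((child, evaluate(child)))
        population.pop(0)                             # retire the oldest, not the worst
        best = max(best, population[-1], key=lambda p: p[1])
    return best
```

Because removal is by age rather than fitness, every genotype is eventually retired, which keeps the population exploring instead of locking onto early winners.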
Alex A. Pollen, A. Bhaduri, Madeline G. Andrews
et al.
Direct comparisons of human and non-human primate brain tissue have the potential to reveal molecular pathways underlying remarkable specializations of the human brain. However, chimpanzee tissue is largely inaccessible during neocortical neurogenesis when differences in brain size first appear. To identify human-specific features of cortical development, we leveraged recent innovations that permit generating pluripotent stem cell-derived cerebral organoids from chimpanzee. First, we systematically evaluated the fidelity of organoid models to primary human and macaque cortex, finding organoid models preserve gene regulatory networks related to cell types and developmental processes but exhibit increased metabolic stress. Second, we identified 261 genes differentially expressed in human compared to chimpanzee organoids and macaque cortex. Many of these genes overlap with human-specific segmental duplications and a subset suggest increased PI3K/AKT/mTOR activation in human outer radial glia. Together, our findings establish a platform for systematic analysis of molecular changes contributing to human brain development and evolution.
Fungal pathogens cause more than a billion human infections every year, resulting in more than 1.6 million deaths annually. Understanding the natural history and evolutionary ecology of fungi is helping us understand how disease-relevant traits have repeatedly evolved. Different types and mechanisms of genetic variation have contributed to the evolution of fungal pathogenicity, and specific genetic differences distinguish pathogens from non-pathogens. Insights into the traits, genetic elements, and genetic and ecological mechanisms that contribute to the evolution of fungal pathogenicity are crucial for developing strategies both to predict the emergence of fungal pathogens and to develop drugs to combat them. Understanding the mechanisms and evolution of pathogenicity in fungi will bring us a step closer to reducing this annual toll.
Heming Jia, Marjan Kordani, Iman Ahmadianfar
et al.
Precise forecasting of water quality indices (WQI) is essential for safeguarding ecosystems, human health, and sustainable water resource management. This study presents an innovative approach for evaluating river Water Quality Indices using advanced machine learning methods. The approach combines the least squares support vector machine (LSSVM) with the Sherman–Morrison–Woodbury (SMW) formula and local weighting techniques to improve the model's capacity to identify local trends and nonlinearities. The hybrid model, SMW-LSSVM-R, integrates the advantages of SMW-LSSVM with ridge regression to provide a balanced and resilient predictive framework. The model parameters are improved by a self-adaptive teaching-learning-based differential evolution (SATLDE) method, attaining optimal performance. Additionally, SATLDE is combined with a ridge feature selection model to identify the key input factors and boost accuracy. The model also employs optimized multivariate variational mode decomposition (OMVMD) using the SATLDE algorithm to more effectively assess complex data patterns. When the models were tested at two Iranian stations, Farisat and Molasani, the SMW-LSSVM-R model, with a testing R value of 0.975 and an RMSE of 0.990, exhibited better performance than the basic and OMVMD-enhanced models. These findings demonstrate the potential of the proposed hybrid model to offer valuable insights into environmental monitoring and management.
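The Sherman–Morrison–Woodbury formula underpinning the SMW-LSSVM variant rewrites the inverse of a low-rank-updated matrix in terms of the original inverse, which is what makes locally weighted re-solves cheap. A minimal NumPy check of the identity on synthetic matrices (not the paper's kernel matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                           # matrix size, update rank

A = np.eye(n) * 3 + rng.standard_normal((n, n)) * 0.1
A = A @ A.T                           # well-conditioned base matrix
U = rng.standard_normal((n, k))
C = np.eye(k)
V = U.T

A_inv = np.linalg.inv(A)
# Woodbury identity: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
inner = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
woodbury = A_inv - A_inv @ U @ inner @ V @ A_inv

direct = np.linalg.inv(A + U @ C @ V)
assert np.allclose(woodbury, direct)
```

The payoff is computational: when `A_inv` is already known, the update only requires inverting a small k-by-k matrix instead of re-inverting the full n-by-n system.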
Rapid regional development and intensified human activities increasingly disturb ecosystems, posing substantial threats to the integrity of large-scale ecological zones. As a World Natural Heritage site and a crucial national ecological zone, the Zhangjiajie Scenic Area plays a pivotal role in China’s environmental conservation efforts. To comprehensively assess tourism ecological security in the Scenic Area and strengthen the scientific basis for resource management and policymaking, this study developed a multi-dimensional ecological security evaluation system covering 2010–2024, incorporating dynamic changes in perturbation, reaction, and governance. Using entropy weight–TOPSIS and coupling coordination models, combined with obstacle degree analysis, we examined the temporal trajectory of ecological security and analyzed its underlying driving mechanisms. The study also examined factors influencing the sustainable development of the ecosystem. The results indicate the following: (1) Tourism ecological security in the Scenic Area followed a V-shaped trajectory of “rapid degradation—steady recovery—impact and rebound.” It declined sharply to an unsafe level between 2010 and 2014, steadily recovered from 2015 to 2019, briefly dropped in 2020, and then rebounded, reaching a peak evaluation value of 0.519 in 2024. (2) The co-evolution of perturbation, reaction, and governance subsystems has matured: their coupling coordination degree has increased annually and has remained at the level of “intermediate coordination” since 2020. The reaction subsystem plays a central role, serving as a bridge between perturbation and governance. (3) The driving factors exhibit a phased evolutionary pattern of “elements—facilities—structure—function.” Cultivated land area, total road mileage, and artificial afforestation area constitute the main long-term constraints. 
This research provides important insights for strengthening ecological security and sustainability in the Scenic Area while advancing regional ecosystem development. It also offers valuable guidance for ecological security management and policymaking in similar nature reserves.
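The entropy weight–TOPSIS evaluation used in the study can be sketched generically. The code below is a textbook formulation under the assumption of strictly positive indicator values, not the authors' exact indicator system:

```python
import numpy as np

def entropy_weight_topsis(X, benefit):
    """X: (periods x indicators) matrix of strictly positive indicator values.
    benefit: boolean mask, True for benefit-type indicators, False for cost-type.
    Returns a closeness score in [0, 1] for each period (higher = more secure)."""
    m, _ = X.shape
    # Entropy weights: indicators with more dispersion get larger weights
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    w = (1 - E) / (1 - E).sum()
    # Vector-normalized, weighted decision matrix
    Z = w * X / np.sqrt((X ** 2).sum(axis=0))
    # Ideal best/worst depend on indicator direction
    best = np.where(benefit, Z.max(axis=0), Z.min(axis=0))
    worst = np.where(benefit, Z.min(axis=0), Z.max(axis=0))
    d_best = np.sqrt(((Z - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((Z - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)
```

A period that dominates on every criterion coincides with the ideal solution and scores 1; one that is worst on every criterion scores 0, which is how a V-shaped trajectory like the one reported can be read off the closeness series.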
Jeremy C. -H. Wang, Ming Hou, David Dunwoody
et al.
This paper examines how trust is formed, maintained, or diminished over time in the context of human-autonomy teaming with an optionally piloted aircraft. Whereas traditional factor-based trust models offer a static representation of human confidence in technology, here we discuss how variations in the underlying factors lead to variations in trust, trust thresholds, and human behaviours. Over 200 hours of flight test data, collected during a multi-year test campaign from 2021 to 2023, were reviewed. The dispositional-situational-learned, process-performance-purpose, and IMPACTS homeostasis trust models are applied to illuminate trust trends during nominal autonomous flight operations. The results offer promising directions for future studies on trust dynamics and design-for-trust in human-autonomy teaming.
Adrian Arnaiz-Rodriguez, Nina Corvelo Benz, Suhas Thejaswi
et al.
Data-driven algorithmic matching systems promise to help human decision makers make better matching decisions in a wide variety of high-stakes application domains, such as healthcare and social service provision. However, existing systems are not designed to achieve human-AI complementarity: decisions made by a human using an algorithmic matching system are not necessarily better than those made by the human or by the algorithm alone. Our work aims to address this gap. To this end, we propose collaborative matching (comatch), a data-driven algorithmic matching system that takes a collaborative approach: rather than making all the matching decisions for a matching task like existing systems, it selects only the decisions that it is the most confident in, deferring the rest to the human decision maker. In the process, comatch optimizes how many decisions it makes and how many it defers to the human decision maker to provably maximize performance. We conduct a large-scale human subject study with 800 participants to validate the proposed approach. The results demonstrate that the matching outcomes produced by comatch outperform those generated by either human participants or by algorithmic matching on their own. The data gathered in our human subject study and an implementation of our system are available as open source at https://github.com/Networks-Learning/human-AI-complementarity-matching.
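The deferral idea behind comatch can be illustrated with a simple confidence heuristic. The sketch below is a hypothetical simplification: it defers a fixed number of the lowest-margin rows and greedily commits the rest, whereas the actual system provably optimizes how many decisions to defer.

```python
def comatch_sketch(scores, num_defer):
    """scores[i][j]: algorithm's score for matching left item i to right item j.
    Commit the highest-confidence rows greedily; defer the rest to the human."""
    n = len(scores)

    def margin(row):
        # Confidence = gap between the best and second-best option in a row
        top = sorted(row, reverse=True)
        return top[0] - top[1] if len(row) > 1 else top[0]

    # Defer the rows the algorithm is least confident about
    order = sorted(range(n), key=lambda i: margin(scores[i]))
    deferred = set(order[:num_defer])

    decided, used = {}, set()
    for i in sorted(range(n), key=lambda i: margin(scores[i]), reverse=True):
        if i in deferred:
            continue
        # Commit i to its best still-available partner
        j = max((j for j in range(len(scores[i])) if j not in used),
                key=lambda j: scores[i][j])
        decided[i] = j
        used.add(j)
    return decided, sorted(deferred)
```

In this toy form, the human only sees the ambiguous rows, which is the intended division of labor: the algorithm handles clear-cut matches and the person spends attention where the model's margin is smallest.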
Healthcare workers (HCWs) encounter challenges in hospitals, such as retrieving medical supplies quickly from crash carts, which could potentially result in medical errors and delays in patient care. Robotic crash carts (RCCs) have shown promise in assisting healthcare teams during medical tasks through guided object searches and task reminders. Limited exploration has been done to determine what communication modalities are most effective and least disruptive to patient care in real-world settings. To address this gap, we conducted a between-subjects experiment comparing the RCC's verbal and non-verbal communication of object search with a standard crash cart in resuscitation scenarios to understand the impact of robot communication on workload and attitudes toward using robots in the workplace. Our findings indicate that verbal communication significantly reduced mental demand and effort compared to visual cues and to a traditional crash cart. Although frustration levels were slightly higher during collaborations with the robot compared to a traditional cart, these research insights provide valuable implications for human-robot teamwork in high-stakes environments.
Abed Kareem Musaffar, Anand Gokhale, Sirui Zeng
et al.
As artificial intelligence (AI) assistants become more widely adopted in safety-critical domains, it becomes important to develop safeguards against potential failures or adversarial attacks. A key prerequisite to developing these safeguards is understanding the ability of these AI assistants to mislead human teammates. We investigate this attack problem within the context of an intellective strategy game where a team of three humans and one AI assistant collaborate to answer a series of trivia questions. Unbeknownst to the humans, the AI assistant is adversarial. Leveraging techniques from Model-Based Reinforcement Learning (MBRL), the AI assistant learns a model of the humans' trust evolution and uses that model to manipulate the group decision-making process to harm the team. We evaluate two models -- one inspired by the literature and the other data-driven -- and find that both can effectively harm the human team. Moreover, we find that in this setting our data-driven model is capable of accurately predicting how human agents appraise their teammates given limited information on prior interactions. Finally, we compare the performance of state-of-the-art LLMs to human agents on our influence allocation task to evaluate whether the LLMs allocate influence similarly to humans or if they are more robust to our attack. These results enhance our understanding of decision-making dynamics in small human-AI teams and lay the foundation for defense strategies.
This late-breaking work presents a large-scale analysis of explainable AI (XAI) literature to evaluate claims of human explainability. We collaborated with a professional librarian to identify 18,254 papers containing keywords related to explainability and interpretability. Of these, we find that only 253 papers included terms suggesting human involvement in evaluating an XAI technique, and just 128 of those conducted some form of a human study. In other words, fewer than 1% of XAI papers (0.7%) provide empirical evidence of human explainability when compared to the broader body of XAI literature. Our findings underscore a critical gap between claims of human explainability and evidence-based validation, raising concerns about the rigor of XAI research. We call for increased emphasis on human evaluations in XAI studies and provide our literature search methodology to enable both reproducibility and further investigation into this widespread issue.
This study elucidates the transformative influence of data integration on talent management in the context of evolving technological paradigms, with a specific focus on sustainable practices in human resources. Historically anchored in societal norms and organizational culture, talent management has transitioned from traditional methodologies to harnessing diverse data sources, a shift that enhances sustainable HR strategies. By employing a narrative literature review, the research traces the trajectory of HR data sources, emphasizing the juxtaposition of structured and unstructured data. The digital transformation of HR is explored, not only highlighting the evolution of Human Resource Information Systems (HRIS) but also underscoring their role in promoting sustainable workforce management. The integration of advanced technologies such as machine learning and natural language processing is examined, reflecting on their impact on the efficiency and ecological aspects of HR practices. This paper not only underscores the imperative of balancing data-driven strategies with the quintessential human element of HR but also provides concrete examples demonstrating this balance in action for practitioners and scholars in sustainable human resources.
This paper examines how variations in the height and health of Mexicans during the second half of the twentieth century reflect the evolution of economic inequality, as its effects have repercussions on the health and nutritional conditions of the population. The average height of Mexican adults had a modest increase with respect to the possibilities of human plasticity. These anthropometric variations were the result of the incorporation of advances in science and technology leading to improved standards of living among the population. Body changes were impacted by dietary habits, urbanization, and government policies supporting food production and distribution.
Saskia Laura Schröer, Giovanni Apruzzese, Soheil Human
et al.
Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, more and more evidence shows that AI is also used for offensive purposes. Prior works have revealed various examples of use cases in which the deployment of AI can lead to violation of security and privacy objectives. No extant work, however, has been able to draw a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the ground for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular we (i) account for AI risks to both humans and systems while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, as well as laypeople -- all of which are valuable sources of information on offensive AI. To enable alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of such criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses of a user study (N=549) entailing individuals with diverse backgrounds and expertise; and the opinion of 12 experts. Our contributions not only reveal concerning ways (some of which overlooked by prior work) in which AI can be offensively used today, but also represent a foothold to address this threat in the years to come.
Our ability to build autonomous agents that leverage Generative AI continues to increase by the day. As builders and users of such agents it is unclear what parameters we need to align on before the agents start performing tasks on our behalf. To discover these parameters, we ran a qualitative empirical research study about designing agents that can negotiate during a fictional yet relatable task of selling a camera online. We found that for an agent to perform the task successfully, humans/users and agents need to align over 6 dimensions: 1) Knowledge Schema Alignment 2) Autonomy and Agency Alignment 3) Operational Alignment and Training 4) Reputational Heuristics Alignment 5) Ethics Alignment and 6) Human Engagement Alignment. These empirical findings expand previous work related to process and specification alignment and the need for values and safety in Human-AI interactions. Subsequently we discuss three design directions for designers who are imagining a world filled with Human-Agent collaborations.
As AI technology continues to advance, the importance of human-AI collaboration becomes increasingly evident, with numerous studies exploring its potential in various fields. One vital field is data science, including feature engineering (FE), where both human ingenuity and AI capabilities play pivotal roles. Despite the existence of AI-generated recommendations for FE, there remains a limited understanding of how to effectively integrate and utilize humans' and AI's knowledge. To address this gap, we design a readily-usable prototype, human&AI-assisted FE in Jupyter notebooks. It harnesses the strengths of humans and AI to provide feature suggestions to users, seamlessly integrating these recommendations into practical workflows. Using the prototype as a research probe, we conducted an exploratory study to gain valuable insights into data science practitioners' perceptions, usage patterns, and their potential needs when presented with feature suggestions from both humans and AI. Through qualitative analysis, we discovered that the Creator of the feature (i.e., AI or human) significantly influences users' feature selection, and the semantic clarity of the suggested feature greatly impacts its adoption rate. Furthermore, our findings indicate that users perceive both differences and complementarity between features generated by humans and those generated by AI. Lastly, based on our study results, we derived a set of design recommendations for future human&AI FE design. Our findings show the collaborative potential between humans and AI in the field of FE.
Jindan Huang, Isaac Sheidlower, Reuben M. Aronson
et al.
Human-in-the-loop learning is gaining popularity, particularly in the field of robotics, because it leverages human knowledge about real-world tasks to facilitate agent learning. When people instruct robots, they naturally adapt their teaching behavior in response to changes in robot performance. While current research predominantly focuses on integrating human teaching dynamics from an algorithmic perspective, understanding these dynamics from a human-centered standpoint is an under-explored, yet fundamental problem. Addressing this issue will enhance both robot learning and user experience. Therefore, this paper explores one potential factor contributing to the dynamic nature of human teaching: robot errors. We conducted a user study to investigate how the presence and severity of robot errors affect three dimensions of human teaching dynamics: feedback granularity, feedback richness, and teaching time, in both forced-choice and open-ended teaching contexts. The results show that people tend to spend more time teaching robots that make errors and provide more detailed feedback over specific segments of a robot's trajectory, and that robot errors can influence a teacher's choice of feedback modality. Our findings offer valuable insights for designing effective interfaces for interactive learning and optimizing algorithms to better understand human intentions.
The rise of automation has provided an opportunity to achieve higher efficiency in manufacturing processes, yet it often compromises the flexibility required to promptly respond to evolving market needs and meet the demand for customization. Human-robot collaboration attempts to tackle these challenges by combining the strength and precision of machines with human ingenuity and perceptual understanding. In this paper, we conceptualize and propose an implementation framework for an autonomous, machine learning-based manipulator that incorporates human-in-the-loop principles and leverages Extended Reality (XR) to facilitate intuitive communication and programming between humans and robots. Furthermore, the conceptual framework foresees human involvement directly in the robot learning process, resulting in higher adaptability and task generalization. The paper highlights key technologies enabling the proposed framework, emphasizing the importance of developing the digital ecosystem as a whole. Additionally, we review existing implementation approaches of XR in human-robot collaboration, showcasing diverse perspectives and methodologies. The challenges and future outlooks are discussed, delving into the major obstacles and potential research avenues of XR for more natural human-robot interaction and integration in the industrial landscape.