Almost all organisms live in environments that have been altered, to some degree, by human activities. Because behaviour mediates interactions between an individual and its environment, the ability of organisms to behave appropriately under these new conditions is crucial for determining their immediate success or failure in these modified environments. While hundreds of species are suffering dramatically from these environmental changes, others, such as urbanized and pest species, are doing better than ever. Our goal is to provide insights that help explain such variation. We first summarize the responses of some species to novel situations, including novel risks and resources, habitat loss/fragmentation, pollutants and climate change. Using a sensory ecology approach, we present a mechanistic framework for predicting variation in behavioural responses to environmental change, drawing from models of decision‐making processes and an understanding of the selective background against which they evolved. Where immediate behavioural responses are inadequate, learning or evolutionary adaptation may prove useful, although these mechanisms are also constrained by evolutionary history. Although predicting the responses of species to environmental change is difficult, we highlight the need for a better understanding of the role of evolutionary history in shaping individuals’ responses to their environment and provide suggestions for future work.
Framework for Change Organic aerosols make up 20 to 90% of the particulate mass of the troposphere and are important factors in both climate and human health. However, their sources and removal pathways are very uncertain, and their atmospheric evolution is poorly characterized. Jimenez et al. (p. 1525; see the Perspective by Andreae) present an integrated framework of organic aerosol compositional evolution in the atmosphere, based on model results and field and laboratory data that simulate the dynamic aging behavior of organic aerosols. Particles become more oxidized, more hygroscopic, and less volatile with age, as they become oxygenated organic aerosols. These results should lead to better predictions of climate and air quality.

Organic aerosols are not compositionally static, but they evolve dramatically within hours to days of their formation. Organic aerosol (OA) particles affect climate forcing and human health, but their sources and evolution remain poorly characterized. We present a unifying model framework describing the atmospheric evolution of OA that is constrained by high–time-resolution measurements of its composition, volatility, and oxidation state. OA and OA precursor gases evolve by becoming increasingly oxidized, less volatile, and more hygroscopic, leading to the formation of oxygenated organic aerosol (OOA), with concentrations comparable to those of sulfate aerosol throughout the Northern Hemisphere. Our model framework captures the dynamic aging behavior observed in both the atmosphere and laboratory: It can serve as a basis for improving parameterizations in regional and global models.
Laura Lewis, Fumihiro Kano, Jeroen M. G. Stevens
et al.
Humans use proper names as vocal labels to identify and communicate with and about social agents. The comprehension of spoken proper names requires the ability to interpret socially specific verbal signals, or social vocal labels, and use cross-modal perception to identify and discriminate between group members. Individuals that recognize and comprehend familiar proper names can use these labels to identify and discriminate between groupmates, gain third-party knowledge, and guide decision-making. Use of vocal labels for conspecifics is noticeably rare in the animal kingdom, and has only been found in species (dolphins, elephants, and marmosets) that are phylogenetically distant from humans. We therefore investigated the phylogenetic trajectory of this capacity by studying our closest living primate relatives, chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). We implemented a cross-modal non-invasive eye-tracking and playback study with multiple populations of apes (N = 24) living in zoos and sanctuaries, none of whom were specifically language-trained. We tested whether chimpanzees and bonobos spontaneously attend toward an image of a groupmate whose name has been called by a human caretaker. We found limited evidence that apes link the caretaker-given names of their groupmates to images on a screen, and therefore cannot make strong conclusions about apes’ comprehension of these social vocal labels. Our playback and eye-tracking paradigm offers a novel tool for studying cross-modal perception and knowledge of vocal labels. Future work will be critical to identify the sociocognitive foundations underlying socially specific referential communication and the evolution of language.
In real-world collaboration, alignment, process structure, and outcome quality do not exhibit a simple linear or one-to-one correspondence: similar alignment may accompany either rapid convergence or extensive multi-branch exploration, and lead to different results. Existing accounts often isolate these dimensions or focus on specific participant types, limiting structural accounts of collaboration. We reconceptualize collaboration through two complementary lenses. The task lens models collaboration as trajectory evolution in a structured task space, revealing patterns such as advancement, branching, and backtracking. The intent lens examines how individual intents are expressed within shared contexts and enter situated decisions. Together, these lenses clarify the structural relationships among alignment, decision-making, and trajectory structure. Rather than reducing collaboration to outcome quality or treating alignment as the sole objective, we propose a unified dynamic view of the relationships among alignment, process, and outcome, and use it to re-examine collaboration structure across Human-Human, AI-AI, and Human-AI settings.
Tailia Malloy, Maria Jose Ferreira, Fei Fang
et al.
In real-world decision making, outcomes are often delayed, meaning individuals must make multiple decisions before receiving any feedback. Moreover, feedback can be presented in different ways: it may summarize the overall results of multiple decisions (aggregated feedback) or report the outcome of individual decisions after some delay (clustered feedback). Despite their importance, the timing and presentation of delayed feedback have received little attention in cognitive modeling of decision-making, which typically focuses on immediate feedback. To address this, we conducted an experiment to compare the effects of delayed vs. immediate feedback and aggregated vs. clustered feedback. We also propose a Hierarchical Instance-Based Learning (HIBL) model that captures how people make decisions in delayed feedback settings. HIBL uses a super-model that chooses between sub-models to perform the decision-making task until an outcome is observed. Simulations show that HIBL best predicts human behavior and specific patterns, demonstrating the flexibility of IBL models.
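The super-model/sub-model architecture described in the abstract can be illustrated with a minimal sketch. This is not the authors' HIBL implementation: the mean-value memory, the equal splitting of delayed credit, and all class and option names are simplifying assumptions made here for illustration.

```python
class IBLAgent:
    """Minimal instance-based learner: remembers (option, outcome) instances
    and chooses the option with the highest mean remembered outcome."""
    def __init__(self, options, default=1.0):
        self.memory = {o: [default] for o in options}  # optimistic prior instance

    def choose(self):
        return max(self.memory, key=lambda o: sum(self.memory[o]) / len(self.memory[o]))

    def feedback(self, option, outcome):
        self.memory[option].append(outcome)


class HIBLSketch:
    """Hierarchical wrapper: a 'super-model' picks which sub-model acts,
    and delayed (aggregated) feedback is credited back to every decision
    made during the delay window."""
    def __init__(self, sub_agents):
        self.super_agent = IBLAgent(list(sub_agents))
        self.sub_agents = sub_agents
        self.pending = []  # decisions awaiting delayed feedback

    def decide(self):
        name = self.super_agent.choose()          # super-model selects a sub-model
        action = self.sub_agents[name].choose()   # sub-model selects an action
        self.pending.append((name, action))
        return action

    def delayed_feedback(self, total_outcome):
        # Aggregated feedback: split the total outcome evenly across
        # all decisions made since the last feedback event.
        share = total_outcome / max(len(self.pending), 1)
        for name, action in self.pending:
            self.super_agent.feedback(name, share)
            self.sub_agents[name].feedback(action, share)
        self.pending.clear()
```

The key structural point the sketch captures is that decisions accumulate in `pending` until an outcome is observed, at which point credit is propagated to both levels of the hierarchy at once.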
Studies of human-robot interaction in dynamic and unstructured environments show that as more advanced robotic capabilities are deployed, the need for cooperative competencies to support collaboration with human problem-holders increases. Designing human-robot systems to meet these demands requires an explicit understanding of the work functions and constraints that shape the feasibility of alternative joint work strategies. Yet existing human-robot interaction frameworks either emphasize computational support for real-time execution or rely on static representations for design, offering limited support for reasoning about coordination dynamics during early-stage conceptual design. To address this gap, this article presents a novel computational framework for analyzing joint work strategies in human-robot systems by integrating techniques from functional modeling with graph-theoretic representations. The framework characterizes collective work in terms of the relationships among system functions and the physical and informational structure of the work environment, while explicitly capturing how coordination demands evolve over time. Its use during conceptual design is demonstrated through a case study in disaster robotics, which shows how the framework can be used to support early trade-space exploration of human-robot coordination strategies and to identify cooperative competencies that support flexible management of coordination overhead. These results show how the framework makes coordination demands and their temporal evolution explicit, supporting design-time reasoning about cooperative competency requirements and work demands prior to implementation.
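The idea of representing collective work as a graph of functions and dependencies, with coordination demands that evolve over time, can be sketched as follows. The function names, agent assignments, schedule, and demand metric below are illustrative assumptions, not taken from the article's disaster-robotics case study.

```python
# Toy function network: nodes are work functions annotated with the agent
# responsible for them; edges are informational dependencies between functions.
functions = {
    "search_area":   {"agent": "robot"},
    "triage_victim": {"agent": "human"},
    "relay_map":     {"agent": "robot"},
}
depends_on = [
    ("triage_victim", "search_area"),  # triage needs search results
    ("triage_victim", "relay_map"),    # triage needs an up-to-date map
]

# Which functions are active at each time step of the joint work strategy.
schedule = {
    0: {"search_area"},
    1: {"search_area", "relay_map"},
    2: {"triage_victim", "relay_map"},
}

def coordination_demand(t):
    """Count cross-agent dependencies among functions active at time t:
    each such edge requires human-robot coordination at that moment."""
    active = schedule[t]
    return sum(
        1
        for a, b in depends_on
        if a in active and b in active
        and functions[a]["agent"] != functions[b]["agent"]
    )
```

Evaluating `coordination_demand` across the schedule makes the temporal evolution of coordination overhead explicit, which is the kind of design-time trade-space question the framework is meant to support.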
High-stakes decision domains are increasingly exploring the potential of Large Language Models (LLMs) for complex decision-making tasks. However, LLM deployment in real-world settings presents challenges in data security, evaluation of model capabilities outside controlled environments, and accountability attribution in the event of adversarial decisions. This paper proposes a framework for responsible deployment of LLM-based decision-support systems through active human involvement. It integrates interactive collaboration between human experts and developers through multiple iterations at the pre-deployment stage to assess uncertain samples and judge the stability of the explanations provided by post-hoc XAI techniques. Local LLM deployment within organizations and decentralized technologies, such as Blockchain and IPFS, are proposed to create immutable records of LLM activities for automated auditing, enhancing security and enabling accountability to be traced back. The framework was tested on Bert-large-uncased, Mistral, and LLaMA 2 and 3 models to assess its capability to support responsible financial decisions in business lending.
Eleni Straitouri, Stratis Tsirtsis, Ander Artola Velasco
et al.
Recent work has shown that, in classification tasks, it is possible to design decision support systems that do not require human experts to understand when to cede agency to a classifier or when to exercise their own agency to achieve complementarity: experts using these systems make more accurate predictions than those made by the experts or the classifier alone. The key principle underpinning these systems reduces to adaptively controlling the level of human agency, by design. Can we use the same principle to achieve complementarity in sequential decision making tasks? In this paper, we answer this question affirmatively. We develop a decision support system that uses a pre-trained AI agent to narrow down the set of actions a human can take to a subset, and then asks the human to take an action from this action set. Along the way, we also introduce a bandit algorithm that leverages the smoothness properties of the action sets provided by our system to efficiently optimize the level of human agency. To evaluate our decision support system, we conduct a large-scale human subject study (n = 1,600) where participants play a wildfire mitigation game. We find that participants who play the game supported by our system outperform those who play on their own by ~30% and the AI agent used by our system by >2%, even though the AI agent largely outperforms participants playing without support. We have made available the data gathered in our human subject study as well as an open source implementation of our system at https://github.com/Networks-Learning/narrowing-action-choices.
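The two moving parts of such a system, narrowing the action set with a pre-trained agent's scores and tuning the subset size (the level of human agency) online, can be sketched minimally. This is an illustration, not the paper's algorithm: the paper's bandit exploits smoothness across nested action sets, whereas the sketch below substitutes plain UCB over candidate sizes, and all names and scores are invented for the example.

```python
import math

def narrowed_action_set(agent_scores, k):
    """Keep only the k actions the pre-trained agent rates highest;
    the human then chooses freely within this narrowed set."""
    ranked = sorted(agent_scores, key=agent_scores.get, reverse=True)
    return ranked[:k]

class AgencyBandit:
    """UCB bandit over the subset size k, i.e. over the level of human agency.
    Small k = more automation; large k = more human discretion."""
    def __init__(self, ks):
        self.ks = list(ks)
        self.n = {k: 0 for k in self.ks}       # pulls per arm
        self.total = {k: 0.0 for k in self.ks} # cumulative reward per arm
        self.t = 0

    def pick_k(self):
        self.t += 1
        for k in self.ks:  # play each arm once before using UCB
            if self.n[k] == 0:
                return k
        def ucb(k):
            return self.total[k] / self.n[k] + math.sqrt(2 * math.log(self.t) / self.n[k])
        return max(self.ks, key=ucb)

    def update(self, k, reward):
        self.n[k] += 1
        self.total[k] += reward
```

In use, each round the bandit proposes a size `k`, the system shows the human `narrowed_action_set(scores, k)`, and the observed task reward is fed back via `update(k, reward)`.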
Traditional image annotation tasks rely heavily on human effort for object selection and label assignment, making the process time-consuming and prone to declining efficiency as annotators tire over extended work. This paper introduces a novel framework that leverages the visual understanding capabilities of large multimodal models (LMMs), particularly GPT, to assist annotation workflows. In our proposed approach, human annotators focus on selecting objects via bounding boxes, while the LMM autonomously generates relevant labels. This human-AI collaborative framework enhances annotation efficiency by reducing the cognitive and time burden on human annotators. By analyzing the system's performance across various types of annotation tasks, we demonstrate its ability to generalize to tasks such as object recognition, scene description, and fine-grained categorization. Our proposed framework highlights the potential of this approach to redefine annotation workflows, offering a scalable and efficient solution for large-scale data labeling in computer vision. Finally, we discuss how integrating LMMs into the annotation pipeline can advance bidirectional human-AI alignment, as well as the challenges of alleviating the "endless annotation" burden in the face of information overload by shifting some of the work to AI.
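The division of labor described above, with the human drawing bounding boxes and the model supplying labels, can be sketched generically. The `ask_model` callable, the confidence threshold, and the review-flag convention are all assumptions for illustration; no specific LMM API is implied, and any real system would plug its own model client in here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Annotation:
    box: tuple        # (x, y, w, h) drawn by the human annotator
    label: str        # filled in by the model
    confidence: float

def annotate(boxes, crop, ask_model: Callable):
    """Human supplies boxes; a multimodal model names each cropped region.
    Low-confidence labels are routed back to the human for review, keeping
    the human in the loop only where the model is unsure."""
    out = []
    for box in boxes:
        label, conf = ask_model(crop(box))  # ask_model: stand-in for any LMM call
        if conf < 0.6:
            label = f"REVIEW:{label}"       # flag for human verification
        out.append(Annotation(box, label, conf))
    return out
```

The design point is that the human's costly attention is spent on spatial selection and on the minority of low-confidence cases, while routine label assignment is delegated to the model.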
Paula Akemi Aoyagui, Kelsey Stemmler, Sharon Ferguson
et al.
In subjective decision-making, where decisions are based on contextual interpretation, Large Language Models (LLMs) can be integrated to present users with additional rationales to consider. The diversity of these rationales is mediated by the ability to consider the perspectives of different social actors. However, it remains unclear whether and how models differ in the distribution of perspectives they provide. We compare the perspectives taken by humans and different LLMs when assessing subtle sexism scenarios. We show that these perspectives can be classified within a finite set (perpetrator, victim, decision-maker), consistently present in argumentations produced by humans and LLMs, but in different distributions and combinations, demonstrating differences and similarities with human responses, and between models. We argue for the need to systematically evaluate LLMs' perspective-taking to identify the most suitable models for a given decision-making task. We discuss the implications for model evaluation.
Sean Driscoll, Fjodor Merkuri, Frédéric J. J. Chain
et al.
Abstract Modifications to highly conserved developmental gene regulatory networks are thought to underlie morphological diversification in evolution and contribute to human congenital malformations. Relationships between gene expression and morphology have been extensively investigated in the limb, where most of the evidence for alterations to gene regulation in development consists of pre-transcriptional mechanisms that affect expression levels, such as epigenetic alterations to regulatory sequences and changes to cis-regulatory elements. Here we report evidence that alternative splicing (AS), a post-transcriptional process that modifies and diversifies mRNA transcripts, is dynamic during limb development in two mammalian species. We evaluated AS patterns in mouse (Mus musculus) and opossum (Monodelphis domestica) across the three key limb developmental stages: the ridge, bud, and paddle. Our data show that splicing patterns are dynamic over developmental time and suggest differences between the two mammalian taxa. Additionally, multiple key limb development genes, including Fgf8, are differentially spliced across the three stages in both species, with expression levels of the conserved splice variants, Fgf8a and Fgf8b, changing across developmental time. Our data demonstrate that AS is a critical mediator of mRNA diversity in limb development and provide an additional mechanism for evolutionary tweaking of gene dosage.
Abstract This study explores the development and global landscape of multimodal teaching research from 1995 to 2023, focusing on influential contributors, thematic trends, and emerging research directions. Aiming to provide a comprehensive understanding of the field’s evolution and future potential, this study addresses three core research questions: (1) How has research in multimodal teaching progressed over time? (2) What are the primary topics and current concerns in multimodal teaching research? (3) What theoretical and practical implications arise from these findings, and what opportunities exist for future exploration? Employing a mixed-methods approach with bibliometric and content analysis, 689 articles were analyzed, revealing significant growth in research output, particularly since 2016. Analysis using CiteSpace identified major contributors, including Nanyang Technological University and the State University System of Florida, with the United States, China, and Australia leading in publication volume. Prominent research themes include augmented reality, cognitive load, early childhood education, and multimedia human-computer interaction, reflecting an increasing focus on technology-enhanced learning environments. This study not only highlights the current trends in multimodal teaching but also proposes a conceptual framework and future research directions, offering valuable insights for the adaptation of educational practices in increasingly digital and multimodal contexts.
Abstract In response to concerns regarding numerous complex issues facing the veterinary specialty profession, several organizations, including the American College of Veterinary Internal Medicine, have made a clarion call to the American Veterinary Medical Association to begin discussions surrounding the formation of an accrediting body for internships, residencies, and fellowships. A proposed name for such a body is the Accreditation Council on Graduate Veterinary Medical Education, in alignment with the Accreditation Council on Graduate Medical Education (ACGME); the term “graduate” refers to specialty education that occurs after the first 4 years of the MD or DVM degree. Although the structure and financing of graduate education differ between the human medical and veterinary professions, we can nevertheless learn much from the evolution of human medical specialization as we navigate the path ahead.