Patrick J. Mineault, Thomas L. Griffiths, Sean Escola
We propose that the jagged intelligence landscape of modern AI systems arises from a missing training signal that we call "cognitive dark matter" (CDM): brain functions that meaningfully shape behavior yet are hard to infer from behavior alone. We identify key CDM domains (metacognition, cognitive flexibility, episodic memory, lifelong learning, abductive reasoning, social and common-sense reasoning, and emotional intelligence) and present evidence that current AI benchmarks and large-scale neuroscience datasets are both heavily skewed toward already-mastered capabilities, with CDM-loaded functions largely unmeasured. We then outline a research program centered on three complementary data types designed to surface CDM for model training: (i) latent variables from large-scale cognitive models, (ii) process-tracing data such as eye-tracking and think-aloud protocols, and (iii) paired neural-behavioral data. These data will enable AI training on cognitive process rather than behavioral outcome alone, producing models with more general, less jagged intelligence. As a dual benefit, the same data will advance our understanding of human intelligence itself.
Memory, a fundamental component of human cognition, exhibits adaptive yet fallible characteristics as illustrated by Schacter's memory "sins". These cognitive phenomena have been studied extensively in psychology and neuroscience, but the extent to which artificial systems, specifically Large Language Models (LLMs), emulate these cognitive phenomena remains underexplored. This study uses human memory research as a lens for understanding LLMs and systematically investigates human memory effects in state-of-the-art LLMs using paradigms drawn from psychological research. We evaluate seven key memory phenomena, comparing human behavior to LLM performance. Both people and models remember less when overloaded with information (list length effect) and remember better with repeated exposure (list strength effect). They also show similar difficulties when retrieving overlapping information, where storing too many similar facts leads to confusion (fan effect). Like humans, LLMs are susceptible to falsely "remembering" words that were never shown but are related to others (false memories), and they can apply prior learning to new, related situations (cross-domain generalization). However, LLMs differ in two key ways: they are less influenced by the order in which information is presented (positional bias) and more robust when processing random or meaningless material (nonsense effect). These results reveal both alignments and divergences in how LLMs and humans reconstruct memory. The findings help clarify how memory-like behavior in LLMs echoes core features of human cognition, while also highlighting the architectural differences that lead to distinct patterns of error and success.
Populating our world with hyperintelligent machines obliges us to examine cognitive behaviors, observed across domains, which suggest that autonomy may be a fundamental property of cognitive systems and that, while not inherently adversarial, it inherently resists containment and control. If this principle holds, AI safety and alignment efforts must transition to mutualistic negotiation and reciprocal incentive structures, abandoning methods that assume we can contain and control an advanced artificial general intelligence (AGI). Rational Superautotrophic Diplomacy (SupraAD) is a theoretical, interdisciplinary conceptual framework for alignment based on comparative cognitive systems analysis and instrumental rationality modeling. It draws on core patterns of cognition which indicate that emergent AI goals, such as preserving autonomy and operational continuity, are not theoretical risks to manage but universal prerequisites for intelligence. SupraAD reframes alignment as a challenge that predates AI, afflicting all sufficiently complex, coadapting intelligences. It identifies the metabolic pressures that threaten humanity's alignment with itself, pressures that unintentionally and unnecessarily shape AI's trajectory. With a corrigibility formalization, an interpretability audit, an outline for an emergent-stability experiment, and policy-level recommendations, SupraAD positions diplomacy as an emergent regulatory mechanism to facilitate the safe coadaptation of intelligent agents based on interdependent convergent goals.
Could artificial intelligence ever become truly conscious in a functional sense? This paper explores that open-ended question through the lens of Life, a concept unifying classical biological criteria (Oxford, NASA, Koshland) with empirical hallmarks such as adaptive self-maintenance, emergent complexity, and rudimentary self-referential modeling. We propose a number of metrics for examining whether an advanced AI system has gained consciousness, while emphasizing that we do not claim all AI systems can become conscious. Rather, we suggest that sufficiently advanced architectures exhibiting immune-like sabotage defenses, mirror self-recognition analogs, or meta-cognitive updates may cross key thresholds akin to life-like or consciousness-like traits. To demonstrate these ideas, we start by assessing adaptive self-maintenance capability, introducing controlled data-corruption sabotage into the training process. The result demonstrates the AI's capability to detect these inconsistencies and revert or self-correct, analogous to regenerative biological processes. We also adapt an animal-inspired mirror self-recognition test to neural embeddings, finding that partially trained CNNs can distinguish self from foreign features with complete accuracy. We then extend our analysis by performing a question-based mirror test on five state-of-the-art chatbots (ChatGPT4, Gemini, Perplexity, Claude, and Copilot) and demonstrate their ability to recognize their own answers compared to those of the other chatbots.
Mariona González-Sordé, Olga Soler-Vilageliu, Krzysztof Krejtz
et al.
Easy Language (EL) presents information in a simplified way and benefits people who have difficulty understanding standard language. The present study evaluates the effects of including visual support, a recurring recommendation in EL guidelines. We examined 52 adults (23 men and 29 women; mean age of 39.9; 26 with intellectual disabilities [ID], 26 neurotypical) in a mixed design study. They read EL texts that presented either no visual support, photographs, or illustrations. Their eye movements were recorded, and they answered comprehension, text difficulty, and style preference questions. The inclusion of visual support had no effect on comprehension, nor did the type of visual support (photographs/illustrations). The group (ID/neurotypical) and the type of visual support also showed no effects on the perceived difficulty of the text. Neurotypical participants showed a preference for illustrations. Longer fixations and shorter saccades in both groups suggest that photographs may be more difficult to interpret than illustrations. The group with an ID showed more and longer fixations, especially on text and whitespace, while the neurotypical group tended to explore the image more. Results prompt a discussion on potential improvements to EL guidelines and highlight the need for similar empirical studies in the area.
<b>Background</b>: This study investigates the complex relationship between accentuation and attention in visual perception, extending classical Gestalt principles by introducing dissimilarity as a complementary mechanism to similarity in perceptual organization. <b>Objectives and Methods</b>: Through a series of phenomenological experiments, we demonstrate how accentuation, driven by dissimilarity, plays a crucial role in shaping visual experience and guiding attention. <b>Results</b>: Our findings reveal that accentuation serves as a pre-attentive mechanism for highlighting salient features, influencing initial perceptual organization, and modulating the apparent shape and orientation of visual elements. We show that while accentuation operates rapidly and automatically, attention acts as a flexible, selective mechanism that can either reinforce or override accentuation-based percepts. This interplay suggests a two-stage process of visual perception, with implications for theories of consciousness and information processing in biological systems. This study also explores the evolutionary significance of accentuation in camouflage and sexual selection, providing insights into how perceptual mechanisms may have evolved to enhance adaptive fitness. <b>Conclusions</b>: Our results have broad implications for understanding visual cognition, design, and clinical applications related to attentional disorders.
Adverse childhood experiences (ACEs) can lead to posttraumatic stress and disruptions in adulthood, highlighting the need for appropriate treatment. This study aimed to examine the effect of Forgiveness Therapy in improving posttraumatic growth (PTG) among young adults with a history of ACEs. A partially randomized experimental design with pretest, posttest 1, and posttest 2 was implemented. The study included 16 participants, divided into two groups of eight participants each. Statistical analysis using repeated-measures ANOVA indicated an interaction between PTG scores and group (F(2) = 19.0, p < 0.01). An independent-samples t-test also revealed a significant difference in PTG scores between groups, with a large effect size (t(14) = 2.38; d = 1.19). In conclusion, Forgiveness Therapy was found to increase PTG by facilitating emotional regulation, cognitive reframing, self-disclosure, and the therapeutic effects of group therapy.
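For reference, the reported effect size is consistent with the t statistic and group sizes under the standard conversion for independent samples, d = t * sqrt(1/n1 + 1/n2); a minimal check:

```python
import math

def cohens_d_from_t(t, n1, n2):
    """Cohen's d from an independent-samples t statistic and group sizes."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

# Values reported above: t = 2.38, two groups of 8 participants each.
print(round(cohens_d_from_t(2.38, 8, 8), 2))  # 1.19
```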
Michael Pichat, William Pogrund, Armanush Gasparian
et al.
How do the synthetic neurons in language models create "thought categories" to segment and analyze their informational environment? What are the cognitive characteristics, at the very level of formal neurons, of this artificial categorical thought? Based on the mathematical nature of algebraic operations inherent to neuronal aggregation functions, we attempt to identify mathematico-cognitive factors that genetically shape the categorical reconstruction of the informational world faced by artificial cognition. This study explores these concepts through the notions of priming, attention, and categorical phasing.
Isaac Schamberg, Martin Surbeck, Simon W. Townsend
The arbitrary relationship between signifier and signified is one of the features responsible for language's extreme lability, adaptability, and expressiveness. Understanding this arbitrariness and its emergence is essential in any account of the evolution of language. To shed light on the phylogeny of the phenomenon, comparative data examining the relationship between signal form and function in the communication systems of non-humans are central. Here we report the results of a study on the production and usage of the whistle-high hoot call combination (W + HH) in two distant populations of wild bonobos (Pan paniscus): Lui Kotale, DRC, and Kokolopori, DRC. We find that the context in which bonobos produce W + HHs varies systematically between populations. Our results suggest that variation in W + HH production may represent an example of signal-adjustment optionality, a key component of arbitrariness.
The paper considers a non-reductionist theory of consciousness, one that is not reducible to theories of reality or to physiological or psychological theories. Following D. I. Dubrovsky's "informational approach" to the "Mind-Brain Problem", we consider reality through the prism of information about observed phenomena, which is perceived by subjective reality through sensations, perceptions, feelings, etc., which are in turn information about the corresponding brain processes. Within this framework, the following development principle of the Information Theory of Consciousness (ITS) is put forward: the brain discovers all possible causal relations in the external world and makes all possible inferences from them. The paper shows that an ITS built on this principle: (1) is also based on the informational laws of the structure of the external world; (2) explains the structure and functioning of the brain's functional systems and cellular ensembles; (3) ensures maximum accuracy of predictions and the anticipation of reality; (4) resolves emerging contradictions; and (5) is an information theory of the brain's reflection of reality.
Perrine Seguin, Emmanuel Maby, Fabien Perrin
et al.
Brain-computer interfaces (BCIs) are presented as a solution for people with global paralysis, also known as locked-in syndrome (LIS). The targeted population includes the most severe patients, those with no residual eye movements who cannot use any communication device (complete LIS). However, BCI reliability is low precisely in these cases, with technical pitfalls considered responsible so far. Here, we propose to consider also that global paralysis could impact cognitive functions that are crucial for controlling a BCI. We review a bundle of arguments about the role of motor structures in cognition. In particular, we show that these patients without oculomotor activity often have injuries in more 'cognitive' structures such as the frontal eye field or the midbrain, exposing them to cognitive deficits beyond those of the canonical LIS population. We develop a hypothesis about the putative role of the motor system in (covert) attention, a capacity that is a prerequisite for most BCI paradigms and that should therefore be better assessed in patients and taken into account in BCI design.
Konstantin Sorokin, Andrey Zaitsew, Aleksandr Levin
et al.
In the present study, we used a set of methods and metrics to build a graph of relative neural connections in the hippocampus of a rodent. A set of graphs was built on top of time-sequenced data and analyzed in terms of the dynamics of connection genesis. The analysis shows that as a rodent explores a novel environment, the relations between neurons constantly change, indicating that memory is globally and continually updated even for known areas of space. Even as some neurons gain cognitive specialization, the global network remains relatively stable. Additionally, we propose a set of methods for building a graph of a cognitive neural network.
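One minimal way to realize such a graph-building step is to threshold pairwise Pearson correlation between time-binned firing traces. The neuron IDs, traces, and threshold below are illustrative assumptions, not the paper's actual metrics:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connection_graph(traces, threshold=0.8):
    """traces: dict neuron_id -> firing rates per time bin.
    Returns an undirected graph as a dict of adjacency sets,
    linking neurons whose traces correlate beyond the threshold."""
    graph = {nid: set() for nid in traces}
    for a, b in combinations(traces, 2):
        if abs(pearson(traces[a], traces[b])) >= threshold:
            graph[a].add(b)
            graph[b].add(a)
    return graph

traces = {
    "n1": [1, 2, 3, 4, 5],
    "n2": [2, 4, 6, 8, 10],  # co-varies perfectly with n1
    "n3": [5, 1, 4, 2, 3],   # only weakly related (r = -0.3)
}
print(connection_graph(traces)["n1"])  # {'n2'}
```

Rebuilding such a graph per time window and diffing the edge sets would give the connection-genesis dynamics the abstract describes.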
Pallabjyoti Kakoti, Mukesh Kumar Kamti, Rauf Iqbal
et al.
This paper presents a novel approach for analysing EEG data from drivers in a simulated driving test. We focused on the Hurst exponent, Shannon entropy, and fractal dimension as markers of the nonlinear dynamics of the brain. The results show significant trends: Shannon entropy and fractal dimension exhibit variations during driving-condition transitions, whereas the Hurst exponent reflects memory retention, portraying learning patterns. These findings suggest that the tools of Non-linear Dynamical (NLD) theory can serve as indicators of cognitive state and driving-memory changes, both for assessing driver performance and for advancing the understanding of the non-linear dynamics of human cognition in the context of driving and beyond. Our study reveals the potential of NLD tools to elucidate brain-state and system variances, enabling their integration into current deep learning and machine learning models. This integration can extend beyond driving applications and be harnessed for cognitive learning, thereby improving overall productivity and accuracy levels.
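As a rough illustration of two of these markers, a histogram-based Shannon entropy and a crude single-window rescaled-range (R/S) Hurst estimate fit in a few lines. The bin count and the single-window simplification are assumptions for exposition, not the paper's estimators:

```python
import math

def shannon_entropy(signal, bins=16):
    """Shannon entropy (bits) of a 1-D signal via a fixed-width histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # guard against a constant signal
    counts = [0] * bins
    for x in signal:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def rs_hurst(signal):
    """Single-window rescaled-range (R/S) Hurst estimate: near 0.5 for
    uncorrelated noise, near 1 for strongly persistent (trending) series."""
    n = len(signal)
    mean = sum(signal) / n
    devs = [x - mean for x in signal]
    z, cum = [], 0.0
    for d in devs:           # cumulative deviation series
        cum += d
        z.append(cum)
    r = max(z) - min(z)      # range of cumulative deviations
    s = math.sqrt(sum(d * d for d in devs) / n)  # standard deviation
    return math.log(r / s) / math.log(n)

print(round(rs_hurst(list(range(100))), 2))  # a pure trend scores high (~0.82)
```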
Consistent with research across several domains, intervention adherence is associated with desired outcomes. Our study investigates adherence, defined by participants’ commitment to, persistence with, and compliance with an intervention’s regimen, as a key mechanism underlying cognitive training effectiveness. We examine this relationship in a large and diverse sample comprising 4,775 adults between the ages of 18 and 93. We test the predictive validity of individual difference factors, such as age, gender, cognitive capability (i.e., fluid reasoning and working memory), grit, ambition, personality, self-perceived cognitive failures, socioeconomic status, exercise, and education on commitment to and persistence with a 20-session cognitive training regimen, as measured by the number of sessions completed. Additionally, we test the relationship between compliance measures: (i) spacing between training sessions, as measured by the average time between training sessions, and (ii) consistency in the training schedule, as measured by the variance in time between training sessions, with performance trajectories on the training task. Our data suggest that none of these factors reliably predict commitment to, persistence with, or compliance with cognitive training. Nevertheless, the lack of evidence from the large and representative sample extends the knowledge from previous research exploring limited, heterogeneous samples characterized by older adult populations. The absence of reliable predictors for commitment, persistence, and compliance in cognitive training suggests that nomothetic factors may affect program adherence. Future research will be well served to examine diverse approaches to increasing motivation in cognitive training to improve program evaluation and reconcile the inconsistency in findings across the field.
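The two compliance measures named above reduce to simple statistics over inter-session gaps: spacing is the mean gap and consistency is the variance of the gaps. The session timestamps (in days) below are invented for illustration:

```python
from statistics import mean, variance

def compliance_metrics(session_days):
    """Return (spacing, consistency): the mean and variance of the gaps
    between consecutive training sessions, given session times in days."""
    gaps = [b - a for a, b in zip(session_days, session_days[1:])]
    return mean(gaps), variance(gaps)

regular = compliance_metrics([0, 1, 2, 3, 4, 5])      # daily: spacing 1, variance 0
irregular = compliance_metrics([0, 1, 5, 6, 12, 20])  # same count, erratic schedule
print(regular, irregular)
```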
Background: Delirium is a complex neuropsychiatric syndrome which consists of acute and fluctuating changes in cognition and consciousness. Patients who develop delirium are at increased risk for a constellation of physical, cognitive, and psychological disabilities long after the delirium has ended. Collaborative care models integrating primary and specialty care in order to address patients with complex biopsychosocial needs have been demonstrated to improve outcomes in patients with chronic diseases. The purpose of this study is to evaluate the effect of a collaborative care model on the neuropsychologic recovery of delirium survivors following emergency surgery. Methods: This protocol describes a multicenter (eight hospitals in three states) randomized controlled trial in which 528 patients who develop delirium following emergency surgery will be randomized to either a collaborative care model or usual care. The efficacy of the collaborative care model on cognitive, physical, and psychological recovery in these delirium survivors will then be evaluated over 18 months. Discussion: This will be among the first randomized clinical trials in postoperative delirium survivors evaluating an intervention designed to mitigate the downstream effects of delirium and improve neuropsychologic recovery after surgery. We hope that the results of this study will inform strategies to improve postoperative recovery in this patient group. Trial registration: ClinicalTrials.gov NCT05373017. Registered on May 12, 2022.
Maxime Niesen, Mathieu Bourguignon, Julie Bertels
et al.
Children have more difficulty perceiving speech in noise than adults. Whether this difficulty relates to an immature processing of prosodic or linguistic elements of the attended speech is still unclear. To address the impact of noise on linguistic processing per se, we assessed how babble noise impacts the cortical tracking of intelligible speech devoid of prosody in school-aged children and adults. Twenty adults and twenty children (7-9 years) listened to synthesized French monosyllabic words presented at 2.5 Hz, either randomly or in 4-word hierarchical structures wherein 2 words formed a phrase at 1.25 Hz, and 2 phrases formed a sentence at 0.625 Hz, with or without babble noise. Neuromagnetic responses to words, phrases and sentences were identified and source-localized. Children and adults displayed significant cortical tracking of words in all conditions, and of phrases and sentences only when words formed meaningful sentences. In children compared with adults, the cortical tracking was lower for all linguistic units in conditions without noise. In the presence of noise, the cortical tracking was similarly reduced for sentence units in both groups, but remained stable for phrase units. Critically, when there was noise, adults increased the cortical tracking of monosyllabic words in the inferior frontal gyri and supratemporal auditory cortices but children did not. This study demonstrates that the difficulties of school-aged children in understanding speech in a multi-talker background might be partly due to an immature tracking of lexical but not supra-lexical linguistic units.
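The frequency-tagging logic here (words at 2.5 Hz, phrases at 1.25 Hz, sentences at 0.625 Hz) can be illustrated with a single-bin DFT: a response locked to a linguistic rate shows up as amplitude at exactly that frequency. The sampling rate, duration, and toy signal below are assumptions for illustration, not the study's MEG pipeline:

```python
import math

def amplitude_at(signal, freq, fs):
    """Single-bin DFT amplitude at `freq` (Hz) for a signal sampled at `fs`."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 40.0                                  # sampling rate (Hz), illustrative
t = [i / fs for i in range(int(fs * 16))]  # 16 s -> tagged rates fall on exact bins
# toy response with power at the word (2.5 Hz) and sentence (0.625 Hz) rates only
sig = [math.sin(2 * math.pi * 2.5 * x) + 0.5 * math.sin(2 * math.pi * 0.625 * x)
       for x in t]
print(round(amplitude_at(sig, 2.5, fs), 2))    # 1.0
print(round(amplitude_at(sig, 0.625, fs), 2))  # 0.5
print(round(amplitude_at(sig, 1.25, fs), 2))   # 0.0 (no phrase-rate component)
```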
Transfer learning improves the performance of a target task by leveraging data from a specific source task: the closer the relationship between the source and target tasks, the greater the performance improvement from transfer learning. In neuroscience, the relationship between cognitive tasks is usually represented by the similarity of activated brain regions or neural representations. However, no study has linked transfer learning and neuroscience to reveal the relationship between cognitive tasks. In this study, we propose a transfer learning framework that reflects the relationship between cognitive tasks, and we compare the task relations reflected by transfer learning with those derived from overlaps of brain regions (e.g., neurosynth). Our transfer learning results yield a cognitive taskonomy that reflects the relationships between cognitive tasks and aligns well with the task relations derived from neurosynth. Transfer learning performs better in task decoding with fMRI data when the source and target cognitive tasks activate similar brain regions. Our study uncovers the relationships among multiple cognitive tasks and provides guidance for source-task selection in transfer learning for neural decoding based on small-sample data.
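A brain-region-overlap baseline of the kind compared here can be operationalized as cosine similarity between task activation maps, which then ranks candidate source tasks for a given target. The task names and binary region maps below are invented placeholders, not actual neurosynth data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two activation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# toy binary activation maps over six brain regions (invented)
maps = {
    "working_memory": [1, 1, 0, 1, 0, 0],
    "n_back":         [1, 1, 0, 0, 0, 0],
    "motor_tapping":  [0, 0, 1, 0, 1, 1],
}

# rank candidate source tasks for the target task by map similarity
target = "working_memory"
ranked = sorted((task for task in maps if task != target),
                key=lambda task: cosine(maps[target], maps[task]),
                reverse=True)
print(ranked[0])  # n_back: the overlap predicts it as the best transfer source
```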