This paper proposes a conceptual framework in which intelligence and consciousness emerge from relational structure rather than from prediction or domain-specific mechanisms. Intelligence is defined as the capacity to form and integrate causal connections between signals, actions, and internal states. Through context enrichment, systems interpret incoming information using learned relational structure that supplies context the raw input itself does not contain, enabling efficient processing under metabolic constraints. Building on this foundation, we introduce the systems-explaining-systems principle, whereby consciousness emerges when recursive architectures allow higher-order systems to learn and interpret the relational patterns of lower-order systems across time. These interpretations are integrated into a dynamically stabilized meta-state and fed back through context enrichment, transforming internal models from representations of the external world into models of the system's own cognitive processes. The framework reframes predictive processing as an emergent consequence of contextual interpretation rather than explicit forecasting and suggests that recursive multi-system architectures may be necessary for more human-like artificial intelligence.
Derek Shiller, Laura Duffy, Arvo Muñoz Morán
et al.
Artificially intelligent systems have become remarkably sophisticated. They hold conversations, write essays, and seem to understand context in ways that surprise even their creators. This raises a crucial question: Are we creating systems that are conscious? The Digital Consciousness Model (DCM) is a first attempt to assess the evidence for consciousness in AI systems in a systematic, probabilistic way. It provides a shared framework for comparing different AIs and biological organisms, and for tracking how the evidence changes over time as AI develops. Instead of adopting a single theory of consciousness, it incorporates a range of leading theories and perspectives, acknowledging that experts disagree fundamentally about what consciousness is and what conditions are necessary for it. This report describes the structure and initial results of the Digital Consciousness Model. Overall, we find that the evidence weighs against 2024 LLMs being conscious, but not decisively so. The evidence against LLM consciousness is much weaker than the evidence against consciousness in simpler AI systems.
Theories of consciousness depend on data, and that data must be appropriate, without overwhelming confounding factors. The reports of Minimal Phenomenal Experience (MPE) in [Metzinger 2024] relate to consciousness in a state purer than everyday consciousness, which may have fewer confounding factors. This essay suggests that the confounding factors, which are absent or diminished in MPE states, are related to language. The self which is absent in mindful states is a product of language. The link between language and MPE states is demonstrated by reference to the phenomenal reports in [Metzinger 2024]. Language, emotion, and mindfulness are analysed in terms of Bayesian pattern matching, or equivalently minimisation of Free Energy, using three types of pattern which are specific to humans. These types are the word patterns of language, self-patterns which drive our emotions and which are also a part of language, and mindful patterns. The practice of mindfulness involves learning mindful patterns, which compete with self-patterns and displace them, allowing mindful states to occur. Consequences of this picture for theories of consciousness, and their relation to MPE states, are explored.
While Multimodal Large Language Models (MLLMs) are adept at answering what is in an image (identifying objects and describing scenes), they often lack the ability to understand how an image feels to a human observer. This gap is most evident when considering subjective cognitive properties, such as what makes an image memorable, funny, aesthetically pleasing, or emotionally evocative. To systematically address this challenge, we introduce CogIP-Bench, a comprehensive benchmark for evaluating MLLMs on such image cognitive properties. Our evaluation reveals a significant gap: current models are poorly aligned with human perception of these nuanced properties. We then demonstrate that a post-training phase can effectively bridge this gap, significantly enhancing the model's alignment with human judgments. Furthermore, we show that this learned cognitive alignment is not merely predictive but also transferable to downstream creative tasks. By integrating our cognitively-aligned MLLM into an image generation pipeline, we can guide the synthesis process to produce images that better embody desired traits, such as being more memorable or visually appealing. Our work provides a benchmark to measure this human-like perception, a post-training pipeline to enhance it, and a demonstration that this alignment unlocks more human-centric AI.
In AI, the existential risk denotes the hypothetical threat posed by an artificial system that would possess both the capability and the objective, either directly or indirectly, to eradicate humanity. This issue is gaining prominence in scientific debate due to recent technical advancements and increased media coverage. In parallel, AI progress has sparked speculation and studies about the potential emergence of artificial consciousness. The two questions, AI consciousness and existential risk, are sometimes conflated, as if the former entailed the latter. Here, I explain that this view stems from a common confusion between consciousness and intelligence. Yet these two properties are empirically and theoretically distinct. Arguably, while intelligence is a direct predictor of an AI system's existential threat, consciousness is not. There are, however, certain incidental scenarios in which consciousness could influence existential risk, in either direction. Consciousness could be viewed as a means towards AI alignment, thereby lowering existential risk; or, it could be a precondition for reaching certain capabilities or levels of intelligence, and thus positively related to existential risk. Recognizing these distinctions can help AI safety researchers and public policymakers focus on the most pressing issues.
Acrophobia has traditionally been treated using exposure therapy; however, virtual reality technology has emerged as an alternative that minimizes risk by presenting three-dimensional stimuli. This study aimed to investigate the effectiveness of virtual reality exposure therapy-cognitive restructuring (VRET-CR) in reducing acrophobia symptoms. In a pretest-posttest control group design, 27 participants were randomly assigned to the experimental group (n=13) and the control group (n=14). An independent sample t-test revealed significant differences in the gain scores of the acrophobia questionnaire (AQ) 1 [t (17.08) = -6.173; p <0.05] and AQ 2 [t (25) = -4.250; p <0.05] between these groups. Scores on the State-Trait Anxiety Inventory (STAI) and Autonomic Perception Questionnaire (APQ) decreased after six exposure sessions, supporting these findings. Skin conductance and respiratory rate changes during therapy were less pronounced than heart rate changes. Overall, the results demonstrated the effectiveness of VRET-CR in reducing acrophobia symptoms.
Dr. Isidro E. Méndez Santos, Dra. Bárbara M. Carvajal Hernández
Methods: Theoretical methods such as analytical-synthetic, inductive-deductive, historical-logical, and ascent from the abstract to the concrete were used to assess information from the bibliographic sources consulted. Empirical techniques connected to document analysis made it possible to collect the best university teaching experiences in leading the study of these topics over more than 30 years.
Results: The category system of the theory of Santiago (autopoiesis, structural coupling, cognition, behavior, nervous system, behavioral coordination, communication, cultural behavior, language, self-consciousness and reflection, among others) is highlighted as key to unveiling the essence of the subjects involved in the educational process and the qualities that make their education possible.
Conclusions: The theory of Santiago provides theoretical assumptions that help to explain a large part of the biological foundation that underlies education. Not taking them into account would mean ignoring the living essence of the participants in the process and would constitute, among other things, an ontological, gnoseological and methodological error.
Christopher J. Whyte, Andrew W. Corcoran, Jonathan Robinson
et al.
The multifaceted nature of subjective experience poses a challenge to the study of consciousness. Traditional neuroscientific approaches often concentrate on isolated facets, such as perceptual awareness or the global state of consciousness, and construct a theory around the relevant empirical paradigms and findings. Theories of consciousness are, therefore, often difficult to compare; indeed, there might be little overlap in the phenomena such theories aim to explain. Here, we take a different approach: starting with active inference, a first-principles framework for modelling behaviour as (approximate) Bayesian inference, and building up to a minimal theory of consciousness, which emerges from the shared features of computational models derived under active inference. We review a body of work applying active inference models to the study of consciousness and argue that implicit in all these models is a small set of theoretical commitments that point to a minimal (and testable) theory of consciousness.
Computational functionalism posits that consciousness is a computation. Here we show, perhaps surprisingly, that it cannot be a Turing computation. Rather, computational functionalism implies that consciousness is a novel type of computation that has recently been proposed by Geoffrey Hinton, called mortal computation.
Angus R. Teece, Martyn Beaven, Christos K. Argus
et al.
Objective: To evaluate differences in subjective sleep quality, quantity, and behaviors among male and female elite rugby union athletes using two common sleep questionnaires.
Patrick Butlin, Robert Long, Eric Elmoznino
et al.
Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
Consciousness is a sequential process of awareness that can focus on one piece of information at a time. This process of awareness experiences causation, which underpins the notion of time while it interplays with matter and energy, forming reality. The study of Consciousness, time and reality is complex and evolving rapidly across many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities may be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated with the environment, life and social behaviours, followed by constructed frameworks, systems and structures. These complex constructs evolved into cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned, through Consciousness, to these evolved cultural, ethical and moral values. This requires the advocated design of self-learning AI that is aware of time perception and human ethics.
The way we view the reality of nature, including ourselves, depends on consciousness. Consciousness also defines the identity of a person, since we know people in terms of their experiences. In general, consciousness defines human existence in this universe. Furthermore, consciousness is associated with some of the most debated problems in physics, such as the notions of observation and the observer in the measurement problem. However, its nature, its mechanism of occurrence in the brain, and its definite universal locality are not clearly known. For this reason, consciousness is considered an essential unresolved scientific problem of the current era. Here, we review the physical processes associated with tackling these challenges. Firstly, we discuss the association of consciousness with the transmission of signals in the brain, chains of events, quantum phenomena, and integrated information. We also highlight the roles of the structure of matter, fields, and the concept of universality in understanding consciousness. Finally, we propose further studies for achieving a better understanding of consciousness.