Quantum foundations for quantum technologies in the International Year of Quantum (2025)
Angelo Bassi
From the very beginning, Quantum Mechanics has been accompanied by crucial foundational questions: the possibility of visualizing physical processes, the limits of measurement epitomized by the Heisenberg uncertainty principle, the existence of a deeper underlying reality with additional degrees of freedom, the role of measurements, and the status of locality. Long regarded as philosophical speculations, these issues were progressively reformulated into precise mathematical statements and ultimately subjected to experimental verification. The trajectory proved unpredictable: questions once dismissed as metaphysical gave rise to experimental platforms, which in turn matured into devices and technologies powering quantum computation, communication, and sensing. Yet this development is not unidirectional: advances in technology also feed back into foundations, enabling tests of principles that were previously out of reach, for example, whether quantum superposition persists at larger and larger scales and whether reality, gravity included, is fundamentally quantum. In this way, the dialogue between foundational inquiry and technological progress continues to shape both our theoretical understanding and the practical realization of quantum phenomena.
The Causal Second Law
Balazs Gyenis
I argue that if a special science satisfies certain key assumptions that are familiar from physicalist accounts of the special sciences and from physics, then its causal regularities have an associated notion of entropy, and that this causal entropy cannot decrease from a robust cause to its effect. Due to its analogy with the second laws of thermodynamics and statistical physics, I call the latter conclusion the causal second law. In this paper, I clarify the key assumptions, prove the causal second law, give sufficient conditions for causal entropy increase, relate the causal second law to statistical mechanics and thermodynamics, and argue that the reversibility objection does not threaten it. In addition, I claim that the causal second law is compatible with a non-metaphysical understanding of supervenience and the open systems view, argue that it does not imply a causal time arrow, reflect on relaxing the robustness condition, question whether it is necessary to invoke thermodynamics to show that special sciences' time arrows exist, and discuss a transition-relative-frequency-based, special-science-internal characterization of causal regularities.
Wanting to Be Understood Explains the Meta-Problem of Consciousness
Chrisantha Fernando, Dylan Banarse, Simon Osindero
Because we are highly motivated to be understood, we created public external representations -- mime, language, art -- to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of `raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience `grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and what our low-level sensorimotor capacities `grasp' so rich, that the demand for an explanation of the feel of experience cannot be ``satisfactory''. That inflated epistemic demand -- the preeminence of our expectation that we could be perfectly understood by another or by ourselves -- rather than an irreducible metaphysical gulf, keeps the hard problem of consciousness alive. On the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.
Is quantum mechanics merely a theory for us?
Peter W. Evans
This paper develops an agent-centric account of measurement that treats the preferred-basis problem as fundamentally perspectival. On this view, the system--apparatus--environment decomposition and the observables that are apt to become classically robust are determined by the physical constitution and epistemic constraints of an embodied class of agents. Decoherence then stabilises those agent-specified observables, yielding facts that are stable for us without positing an absolute, observer-independent basis. On this picture, `measurements' are public not because they are metaphysically privileged, but because agents like us share the relevant sensorimotor and operational structure. I motivate this account through a discussion of two recent no-go results for relational quantum mechanics (RQM) (Brukner, 2021; Pienaar, 2021), and a subsequent response (Di Biagio and Rovelli, 2022): my aim is not to defend RQM per se, but to refine the relational insight with a principled account of basis selection rooted in embodiment. I provide a phenomenological gloss, drawing on body-schema considerations, to argue that quantum mechanics is best understood as an idiosyncratically human description of interactions with the physical world -- a structurally constrained, agent-indexed framework within which classicality emerges.
From Hamilton-Jacobi to Bohm: Why the Wave Function Isn't Just Another Action
Arnaud Amblard, Aurélien Drezet
This paper examines the physical meaning of the wave function in Bohmian mechanics (BM), addressing the debate between causal and nomological interpretations. While BM postulates particles with definite trajectories guided by the wave function, the ontological status of the wave function itself remains contested. Critics of the causal interpretation argue that the wave function's high-dimensionality and lack of back-reaction disqualify it as a physical entity. Proponents of the nomological interpretation, drawing parallels to the classical Hamiltonian, propose that the wave function is a "law-like" entity. However, this view faces challenges, including reliance on speculative quantum gravity frameworks (e.g., the Wheeler-DeWitt equation) and conceptual ambiguities about the nature of "nomological entities". By systematically comparing BM to Hamilton-Jacobi theory, this paper highlights disanalogies between the wave function and the classical action function. These differences, particularly the wave function's dynamical necessity and irreducibility, support a sui generis interpretation, where the wave function represents a novel ontological category unique to quantum theory. The paper concludes that the wave function's role in BM resists classical analogies, demanding a metaphysical framework that accommodates its non-local, high-dimensional, and dynamically irreducible nature.
en
quant-ph, physics.hist-ph
A Pragmatic View of AI Personhood
Joel Z. Leibo, Alexander Sasha Vezhnevets, William A. Cunningham
et al.
The emergence of agentic Artificial Intelligence (AI) is set to trigger a "Cambrian explosion" of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification by treating personhood not as a metaphysical property to be discovered, but as a flexible bundle of obligations (rights and responsibilities) that societies confer upon entities for a variety of reasons, especially to solve concrete governance problems. We argue that this traditional bundle can be unbundled, creating bespoke solutions for different contexts. This will allow for the creation of practical tools -- such as facilitating AI contracting by creating a target "individual" that can be sanctioned -- without needing to resolve intractable debates about an AI's consciousness or rationality. We explore how individuals fit into social roles and discuss the use of decentralized digital identity technology, examining both "personhood as a problem", where design choices can create "dark patterns" that exploit human social heuristics, and "personhood as a solution", where conferring a bundle of obligations is necessary to ensure accountability or prevent conflict. By rejecting foundationalist quests for a single, essential definition of personhood, this paper offers a more pragmatic and flexible way to think about integrating AI agents into our society.
Perfect AI Mimicry and the Epistemology of Consciousness: A Solipsistic Dilemma
Shurui Li
Rapid advances in artificial intelligence necessitate a re-examination of the epistemological foundations upon which we attribute consciousness. As AI systems increasingly mimic human behavior and interaction with high fidelity, the concept of a "perfect mimic" -- an entity empirically indistinguishable from a human through observation and interaction -- shifts from hypothetical to technologically plausible. This paper argues that such developments pose a fundamental challenge to the consistency of our mind-recognition practices. Consciousness attributions rely heavily, if not exclusively, on empirical evidence derived from behavior and interaction. If a perfect mimic provides evidence identical to that of humans, any refusal to grant it equivalent epistemic status must invoke inaccessible factors, such as qualia, substrate requirements, or origin. Selectively invoking such factors risks a debilitating dilemma: either we undermine the rational basis for attributing consciousness to others (epistemological solipsism), or we accept inconsistent reasoning. I contend that epistemic consistency demands we ascribe the same status to empirically indistinguishable entities, regardless of metaphysical assumptions. The perfect mimic thus acts as an epistemic mirror, forcing critical reflection on the assumptions underlying intersubjective recognition in light of advancing AI. This analysis carries significant implications for theories of consciousness and ethical frameworks concerning artificial agents.
What is Stochastic Supervenience?
Youheng Zhang
Standard formulations of supervenience typically treat higher-level properties as point-valued facts strictly fixed by underlying base states. However, in many scientific domains, from statistical mechanics to machine learning, basal structures more naturally determine families of probability measures than single outcomes. This paper develops a general framework for stochastic supervenience, in which the dependence of higher-level structures on a physical base is represented by Markov kernels that map base states to distributions over macro-level configurations. I formulate axioms that secure law-like fixation, nondegeneracy, and directional asymmetry, and show that classical deterministic supervenience appears as a limiting Dirac case within the resulting topological space of dependence relations. To connect these metaphysical claims with empirical practice, the framework incorporates information-theoretic diagnostics, including normalized mutual information, divergence-based spectra, and measures of tail sensitivity. These indices are used to distinguish genuine structural stochasticity from merely epistemic uncertainty, to articulate degrees of distributional multiple realization, and to identify macro-level organizations that are salient for intervention. The overall project offers a conservative extension of physicalist dependence that accommodates pervasive structured uncertainty in the special sciences without abandoning the priority of the base level.
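The kernel picture above can be made concrete. In the following minimal Python sketch (my own illustration, not the paper's formalism: the state names, two-state kernels, and uniform base prior are all assumptions), a Markov kernel maps each base state to a distribution over macro states, deterministic supervenience appears as the Dirac case, and normalized mutual information serves as the stochasticity diagnostic:

```python
import math
from collections import defaultdict

# A Markov kernel: each base (micro) state maps to a probability
# distribution over macro states. Deterministic supervenience is the
# limiting Dirac case where each distribution puts weight 1 on one outcome.
stochastic_kernel = {
    "b1": {"M1": 0.9, "M2": 0.1},
    "b2": {"M1": 0.2, "M2": 0.8},
}
dirac_kernel = {
    "b1": {"M1": 1.0},
    "b2": {"M2": 1.0},
}

def normalized_mutual_information(kernel, base_prior):
    """I(B; M) / H(M): equals 1.0 for a Dirac kernel over distinct
    outcomes, and drops below 1.0 when the kernel is genuinely stochastic."""
    joint = defaultdict(float)
    for b, dist in kernel.items():
        for m, p in dist.items():
            joint[(b, m)] += base_prior[b] * p
    pb, pm = defaultdict(float), defaultdict(float)
    for (b, m), p in joint.items():
        pb[b] += p
        pm[m] += p
    mi = sum(p * math.log2(p / (pb[b] * pm[m]))
             for (b, m), p in joint.items() if p > 0)
    hm = -sum(p * math.log2(p) for p in pm.values() if p > 0)
    return mi / hm if hm > 0 else 0.0

prior = {"b1": 0.5, "b2": 0.5}
print(normalized_mutual_information(dirac_kernel, prior))       # -> 1.0
print(normalized_mutual_information(stochastic_kernel, prior))  # strictly between 0 and 1
```

The diagnostic behaves as the abstract suggests: the Dirac kernel saturates the index, while the stochastic kernel's value below 1.0 quantifies how far the base falls short of point-valued fixation.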
en
physics.hist-ph, quant-ph
Disruptive Paradox: Deconstructive Architecture and its Subversive Power
Jennifer Konrad
Throughout art history, disruption has been a deliberate tool for conveying meaning. In architecture, deviations from norms provoke reflection and challenge principles like Vitruvius’ firmitas, utilitas, and venustas. From the 1980s on, deconstructivist architects systematically used disruption to express Jacques Derrida’s concept of deconstruction through form, space, and perspective. Though buildings are not texts, this movement questioned architectural and societal norms. This article explores how deconstructivist architecture functions as a reflective medium, radically challenging political, social, and aesthetic structures. Disruption, as theorized among others by Lars Koch and Tobias Nanz, acts as both a destructive and productive force. Architects like Peter Eisenman, Bernard Tschumi, and Daniel Libeskind integrated Derrida’s philosophy into their work, exposing architecture as the “last fortress of metaphysics”—an illusion of stability masking its own constructed nature. Their buildings reveal hidden structures and produce an ambiguity of many possible orders and norms without referring to one of them. By employing disruption as a subversive tool, deconstructivism bridged architecture and philosophy, provoking critical reflection on the built environment.
Language and Literature, Social sciences (General)
The Copernican Argument for Alien Consciousness; The Mimicry Argument Against Robot Consciousness
Eric Schwitzgebel, Jeremy Pober
On broadly Copernican grounds, we are entitled to assume that apparently behaviorally sophisticated extraterrestrial entities ("aliens") would be conscious. Otherwise, we humans would be inexplicably, implausibly lucky to have consciousness, while similarly behaviorally sophisticated entities elsewhere would be mere shells, devoid of consciousness. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness ("consciousness mimics"), and in particular a broad class of current, near-future, and hypothetical robots. These considerations, which we formulate, respectively, as the Copernican and Mimicry Arguments, jointly defeat an otherwise potentially attractive parity principle, according to which we should apply the same types of behavioral or cognitive tests to aliens and robots, attributing or denying consciousness similarly to the extent they perform similarly. Our approach is unusual in the following respect: Instead of grounding speculations about alien and robot consciousness in a particular metaphysical or scientific theory about the physical or functional bases of consciousness, we appeal directly to the epistemic principles of Copernican mediocrity and inference to the best explanation.
Masked Conditional Diffusion Models for Image Analysis with Application to Radiographic Diagnosis of Infant Abuse
Shaoju Wu, Sila Kurugol, Andy Tsai
The classic metaphyseal lesion (CML) is a distinct injury that is highly specific for infant abuse. It commonly occurs in the distal tibia. To help radiologists detect these subtle fractures, we need to develop a model that can flag abnormal distal tibial radiographs (i.e. those with CMLs). Unfortunately, the development of such a model requires a large and diverse training database, which is often not available. To address this limitation, we propose a novel generative model for data augmentation. Unlike previous models that fail to generate data spanning the diverse radiographic appearance of the distal tibial CML, our proposed masked conditional diffusion model (MaC-DM) not only generates realistic-appearing and wide-ranging synthetic images of distal tibial radiographs with and without CMLs, but also generates their associated segmentation labels. To achieve these tasks, MaC-DM combines the weighted segmentation masks of the tibias and the CML fracture sites as additional conditions for classifier guidance. The augmented images from our model improved the performance of ResNet-34 in classifying normal radiographs and those with CMLs. Further, the augmented images and their associated segmentation masks enhanced the performance of the U-Net in labeling areas of the CMLs on distal tibial radiographs.
A Higher Dimension of Consciousness: Constructing an empirically falsifiable panpsychist model of consciousness
Jacob Jolij
Panpsychism is a solution to the mind-body problem that presumes that consciousness is a fundamental aspect of reality instead of a product or consequence of physical processes (i.e., brain activity). Panpsychism is an elegant solution to the mind-body problem: it effectively rids itself of the explanatory gap materialist theories of consciousness suffer from. However, many theorists and experimentalists doubt panpsychism can ever be successful as a scientific theory, as it cannot be empirically verified or falsified. In this paper, I present a panpsychist model based on the controversial idea that consciousness may be a so-called higher physical dimension. Although this notion seems outrageous, I show that the idea has surprising explanatory power, even though the model (as most models) is most likely wrong. Most importantly, though, it results in a panpsychist model that yields predictions that can be empirically verified or falsified. As such, the model's main purpose is to serve as an example of how a metaphysical model of consciousness can be specified in such a way that it can be tested in a scientifically rigorous way.
en
q-bio.NC, physics.hist-ph
Curb Your Self-Modifying Code
Patrik Christen
Self-modifying code has many intriguing applications in a broad range of fields including software security, artificial general intelligence, and open-ended evolution. Having control over self-modifying code, however, is still an open challenge, since it requires a balancing act: providing as much freedom as possible so as not to limit possible solutions, while imposing restrictions to avoid security issues and invalid code or solutions. In the present study, I provide a prototype implementation of how one might curb self-modifying code by introducing control mechanisms for code modifications within specific regions and for specific transitions between code and data. I show that this is possible to achieve with the so-called allagmatic method - a framework to formalise, model, implement, and interpret complex systems inspired by Gilbert Simondon's philosophy of individuation and Alfred North Whitehead's philosophy of organism. Thereby, the allagmatic method serves as guidance for self-modification based on concepts defined in a metaphysical framework. I conclude that the allagmatic method seems to be a suitable framework for control mechanisms in self-modifying code and that there are intriguing analogies between the presented control mechanisms and gene regulation.
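The idea of restricting modifications to specific regions can be illustrated with a toy sketch (my own minimal Python example under stated assumptions, not the paper's allagmatic implementation; the class and method names are invented for illustration): a program is held as a list of instructions, i.e. code treated as data, and a guard permits rewriting only at indices declared writable.

```python
# Toy sketch of region-based control over self-modification: instructions
# live in a list (code as data); only indices in `writable` may be rewritten.
class CurbedProgram:
    def __init__(self, code, writable):
        self.code = list(code)          # the program's instructions
        self.writable = set(writable)   # region open to self-modification

    def modify(self, index, new_instr):
        """Rewrite one instruction, but only inside the writable region."""
        if index not in self.writable:
            raise PermissionError(f"instruction {index} is read-only")
        self.code[index] = new_instr

    def run(self, x):
        """Apply the instructions in sequence to the input."""
        for instr in self.code:
            x = instr(x)
        return x

prog = CurbedProgram([lambda x: x + 1, lambda x: x * 2], writable={1})
prog.modify(1, lambda x: x * 3)   # allowed: index 1 is in the region
print(prog.run(1))                # (1 + 1) * 3 -> prints 6
try:
    prog.modify(0, lambda x: x)   # blocked: index 0 is outside the region
except PermissionError as e:
    print("blocked:", e)
```

The guard in `modify` plays the role of the control mechanism: freedom within the declared region, a hard restriction outside it, with transitions between code and data mediated by the explicit `code` list.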
Germán Cano Cuenca: Transición Nietzsche, Valencia, Pre-Textos, 2020
Jorge Polo Blanco
Metaphysics, Philosophy (General)
Dynamic Cognition Applied to Value Learning in Artificial Intelligence
Nythamar de Oliveira, Nicholas Kluge Corrêa
Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such advances are not made with prudence, they can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted behaviors of intelligent agents, and at the same time specifying what we want such systems to do. It is of utmost importance that artificially intelligent agents have their values aligned with human values, given the fact that we cannot expect an AI to develop our moral preferences simply because of its intelligence, as discussed in the Orthogonality Thesis. Perhaps this difficulty comes from the way we are addressing the problem of expressing objectives, values, and ends, using representational cognitive methods. A solution to this problem would be the dynamic cognitive approach proposed by Dreyfus, whose phenomenological philosophy defends that the human experience of being-in-the-world cannot be represented by the symbolic or connectionist cognitive methods. A possible approach to this problem would be to use theoretical models such as SED (situated embodied dynamics) to address the value learning problem in AI.
Why Are We Obsessed with "Understanding" Quantum Mechanics?
Stephen Boughn
Richard Feynman famously declared, "I think that I can safely say that nobody really understands quantum mechanics." Sean Carroll lamented the persistence of this sentiment in a recent opinion piece entitled, "Even Physicists Don't Understand Quantum Mechanics. Worse, they don't seem to want to understand it." Quantum mechanics is arguably the greatest achievement of modern science, and by and large we absolutely understand quantum theory. Rather, the "understanding" to which these two statements evidently refer concerns the ontological status of theoretical constructs. For example, "Do quantum wave functions accurately depict physical reality?" The quantum measurement problem represents a collection of such queries and the conundrums to which they lead. Most physicists are content with forgoing such metaphysical issues, falling back on Bohr's Copenhagen interpretation, and then getting on with the business of doing physics. I suspect that Carroll would criticize these physicists as being too dismissive of the wonderful mysteries of quantum mechanics by relegating its role to that of an algorithm for making predictions. Quite to the contrary, I maintain that those who still pursue a resolution to the measurement problem are burdened with a classical view of reality and fail to truly embrace the fundamental quantum aspects of nature.
en
physics.hist-ph, quant-ph
Redimir a Nietzsche (por enésima vez)
Mariano Rodríguez
Romero Cuevas, José Manuel, ¿Nietzsche contra Nietzsche? Ensayos de crítica filosófica inmanente. Madrid, Locus Solus, 2016, 244 pp.
Metaphysics, Philosophy (General)
A Response to Brian Welter’s Review of Peter Redpath’s The Moral Psychology of St. Thomas: An Introduction to Ragamuffin Ethics
Marvin Peláez
The main purpose of this response is twofold, to: (1) acknowledge and elaborate on aspects of Welter's review that highlight key points in Redpath's book, and (2) offer some clarifications and elaborations so that both authors can be better appreciated for what their works offer to contemporary readers.
Philosophy. Psychology. Religion, Metaphysics
La actualidad de la hermenéutica. Entrevista a Jean Grondin y Ramón Rodríguez
Iñigo Pérez Irigoyen, Antón Sánchez Testas, María Jou García
Paradoxically, the present relevance of hermeneutics must be sought in its dialogue with the philosophical tradition. This dialogue begins, first, with its reception of Husserl, which should be understood not so much as a betrayal of the phenomenological project as a carrying of that project to its ultimate consequences. Second, the return to the modern project (conceived not as destruction but as an attempt at comprehension) leads to a dialogue with Kant and to a discussion of the positing of a transcendental subject as the condition of objectivity. It is precisely this idea of dialogue that defines hermeneutics as an openness to the other as other, and not as an exercise in assimilation. Finally, if this openness to the other does not crystallize into a definite ethical project, it is precisely because hermeneutics seeks to think the foundation of the ethical.
Metaphysics, Philosophy (General)