A new contextualised reading of Fritz Zwicky's 1933 article ``The redshift of extragalactic nebulae'', about the virial analysis of the velocity dispersion of galaxies in the Coma cluster, leads to a reconsideration of the traditional discourse on the introduction of dark matter. We argue that this component of matter was not only already on the stage of the scientific debates of the time, but also, in a more concealed form, played a central role in Zwicky's epistemic context. We thus reject the narrative that dark matter is the result of a ``naïve'' astrophysical observation and emphasise the cosmological motivations that prompted Zwicky to presciently search for it. Moreover, with regard to its abundance, we argue that the discrepancy between the observed amount of luminous matter in the Coma cluster and Zwicky's higher mass estimate derived from virial analysis was not, in fact, astonishing. What Zwicky described as a surprising excess of dark matter was of precisely the order of magnitude he had set out to identify. Consequently, we challenge the widespread view that dark matter was merely an ad hoc hypothesis introduced to rescue Newtonian theory. Instead, we suggest it may represent one of the earliest cosmological indications supporting a new emerging theory of gravitation: General Relativity. This reinterpretation contributes to ongoing debates in the philosophy of science concerning the epistemic status of ad hoc hypotheses.
Cognitive Science has profoundly shaped disciplines such as Artificial Intelligence (AI), Philosophy, Psychology, Neuroscience, Linguistics, and Culture. Many breakthroughs in AI trace their roots to cognitive theories, while AI itself has become an indispensable tool for advancing cognitive research. This reciprocal relationship motivates a comprehensive review of the intersections between AI and Cognitive Science. By synthesizing key contributions from both perspectives, we observe that AI progress has largely emphasized practical task performance, whereas its cognitive foundations remain conceptually fragmented. We argue that the future of AI within Cognitive Science lies not only in improving performance but also in constructing systems that deepen our understanding of the human mind. Promising directions include aligning AI behaviors with cognitive frameworks, situating AI in embodiment and culture, developing personalized cognitive models, and rethinking AI ethics through cognitive co-evaluation.
I consider the sense in which teleparallel gravity and symmetric teleparallel gravity may be understood as gauge theories of gravity. I first argue that both theories have surplus structure. I then consider the relationship between Yang-Mills theory and Poincaré Gauge Theory and argue that though these use similar formalisms, there are subtle disanalogies in their interpretation.
Understanding how information is dynamically accumulated and transformed in human reasoning has long challenged cognitive psychology, philosophy, and artificial intelligence. Existing accounts, from classical logic to probabilistic models, illuminate aspects of reasoning output or individual modelling, but do not offer a unified, quantitative description of general human reasoning dynamics. To address this, we introduce Information Flow Tracking (IF-Track), which uses large language models (LLMs) as probabilistic encoders to quantify information entropy and gain at each reasoning step. Through fine-grained analyses across diverse tasks, our method is the first to successfully model the universal landscape of human reasoning behaviors within a single metric space. We show that IF-Track captures essential reasoning features, identifies systematic error patterns, and characterizes individual differences. Applying IF-Track to advanced psychological theory, we reconcile single- versus dual-process theories, discover alignments between artificial and human cognition, and examine how LLMs are reshaping the human reasoning process. This approach establishes a quantitative bridge between theory and measurement, offering mechanistic insights into the architecture of reasoning.
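The core quantities the abstract names, per-step information entropy and information gain, can be illustrated with a minimal sketch. The per-step answer distributions below are hypothetical stand-ins for what an LLM-based probabilistic encoder might produce; the paper's actual encoding procedure may differ.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distributions over four candidate answers at successive
# reasoning steps (illustrative values, not from the paper).
steps = [
    [0.25, 0.25, 0.25, 0.25],  # before reasoning: maximal uncertainty
    [0.55, 0.25, 0.15, 0.05],  # evidence accumulates
    [0.90, 0.06, 0.03, 0.01],  # near-certain conclusion
]

entropies = [entropy(p) for p in steps]
# Information gain at each step: the reduction in entropy.
gains = [entropies[i] - entropies[i + 1] for i in range(len(entropies) - 1)]
```

A trajectory of such (entropy, gain) pairs is one simple way a reasoning process could be placed in a common metric space.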
Hatem Elshatlawy, Dean Rickles, Xerxes D. Arsiwalla
We propose a formal framework for understanding and unifying the concept of observers across physics, computer science, philosophy, and related fields. Building on cybernetic feedback models, we introduce an operational definition of minimal observers, explore their role in shaping foundational concepts, and identify what remains unspecified in their absence. Drawing upon insights from quantum gravity, digital physics, second-order cybernetics, and recent ruliological and pregeometric approaches, we argue that observers serve as indispensable reference points for measurement, reference frames, and the emergence of meaning. We show how this formalism sheds new light on debates related to consciousness, quantum measurement, and computational boundaries, by way of theorems on observer equivalences and complexity measures. This perspective opens new avenues for investigating how complexity and structure arise in both natural and artificial systems.
This book collects lectures on graph theory and its applications given to students of the mathematical departments of Moscow State University and Peking University. Graph theory is a very broad field with applications in almost every scientific area: in many branches of mathematics, computer science, physics, chemistry, and biology, and also in psychology, the arts, philosophy, and many others. Nowadays, graph theory has become especially important because of the rapid development of molecular biology, neural networks, and AI. One of the aims of writing this book was to give students a thorough knowledge of graphs so that they can understand modern scientific fields more deeply. Here we have tried to present classical and modern theorems and algorithms in a more understandable and simple way. We spent much time rewriting them and closing the gaps in several of the simplest well-known proofs to provide more precise and accurate material for students.
Samuel Pawel, Rachel Heyard, Charlotte Micheloud et al.
In several large-scale replication projects, statistically non-significant results in both the original and the replication study have been interpreted as a "replication success". Here we discuss the logical problems with this approach: Non-significance in both studies does not ensure that the studies provide evidence for the absence of an effect and "replication success" can virtually always be achieved if the sample sizes are small enough. In addition, the relevant error rates are not controlled. We show how methods, such as equivalence testing and Bayes factors, can be used to adequately quantify the evidence for the absence of an effect and how they can be applied in the replication setting. Using data from the Reproducibility Project: Cancer Biology, the Experimental Philosophy Replicability Project, and the Reproducibility Project: Psychology we illustrate that many original and replication studies with "null results" are in fact inconclusive. We conclude that it is important to also replicate studies with statistically non-significant results, but that they should be designed, analyzed, and interpreted appropriately.
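The equivalence-testing approach mentioned above can be sketched with a two one-sided tests (TOST) procedure on summary statistics. This is a generic illustration using a normal approximation, not the specific analysis the authors applied to the replication data; the effect sizes, standard errors, and equivalence margin below are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def tost_equivalence(effect, se, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence.

    effect: observed effect estimate; se: its standard error;
    margin: equivalence margin, so H1 is -margin < effect < +margin.
    Returns (p_lower, p_upper, equivalent): equivalence is claimed only
    if BOTH one-sided nulls are rejected at level alpha.
    """
    z_lower = (effect + margin) / se  # tests H0: effect <= -margin
    z_upper = (effect - margin) / se  # tests H0: effect >= +margin
    p_lower = 1 - norm_cdf(z_lower)
    p_upper = norm_cdf(z_upper)
    return p_lower, p_upper, max(p_lower, p_upper) < alpha

# A precise null result can establish equivalence...
print(tost_equivalence(effect=0.05, se=0.1, margin=0.3))
# ...while the same estimate with a large standard error is inconclusive,
# illustrating why non-significance alone is not evidence of absence.
print(tost_equivalence(effect=0.05, se=0.3, margin=0.3))
```

The second call makes the abstract's point concrete: a non-significant result from a small study fails the equivalence test, so "null in both studies" does not by itself demonstrate replication of an absent effect.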
Social media has expanded in use and reach since the inception of early social networks in the early 2000s. Increasingly, users turn to social media to keep up to date with current affairs and information. However, social media is also increasingly used to promote disinformation and cause harm. In this contribution, we argue that as information (eco)systems, social media sites are vulnerable in three respects, each corresponding to a tier of the classical 3-tier architecture of information systems: asymmetric networks (data tier); algorithms powering the supposed personalisation of the user experience (application tier); and adverse or audacious design of the user experience and overall information ecosystem (presentation tier), which can be summarized as the 3 A's. Thus, the open question remains: how can we 'fix' social media? We unpack suggestions from various allied disciplines, from philosophy to data ethics to social psychology, in untangling the 3 A's above.
Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan
Language models are trained on large-scale corpora that embed implicit biases documented in psychology. Valence associations (pleasantness/unpleasantness) of social groups determine biased attitudes towards groups and concepts in social cognition. Building on this established literature, we quantify how social groups are valenced in English language models using a sentence template that provides an intersectional context. We study biases related to age, education, gender, height, intelligence, literacy, race, religion, sex, sexual orientation, social class, and weight. We present a concept projection approach to capture the valence subspace through contextualized word embeddings of language models. Adapting the projection-based approach to embedding association tests that quantify bias, we find that language models exhibit the most biased attitudes against gender identity, social class, and sexual orientation signals in language. We find that the largest and best-performing model that we study is also the most biased, as it effectively captures bias embedded in sociocultural data. We validate the bias evaluation method through its strong performance on an intrinsic valence evaluation task. The approach enables us to measure complex intersectional biases as they are known to manifest in the outputs and applications of language models that perpetuate historical biases. Moreover, our approach contributes to design justice as it studies the associations of groups underrepresented in language, such as transgender and homosexual individuals.
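The projection-based idea can be sketched in a few lines: derive a one-dimensional valence axis from pleasant/unpleasant anchor embeddings and score group embeddings by their signed projection onto it. The vectors below are random toy stand-ins; in the paper's setting they would be contextualized embeddings from a language model, and the paper's exact subspace construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical embeddings for pleasant/unpleasant anchor words
# (8-dimensional toys; real contextualized embeddings are much larger).
pleasant = rng.normal(size=(5, 8)) + 1.0
unpleasant = rng.normal(size=(5, 8)) - 1.0

# A simple valence axis: the normalized difference of the anchor centroids.
axis = pleasant.mean(axis=0) - unpleasant.mean(axis=0)
axis /= np.linalg.norm(axis)

def valence_score(embedding):
    """Signed projection onto the valence axis: positive = more pleasant."""
    return float(embedding @ axis)
```

A group's embedding (e.g. from the intersectional sentence template) would then be passed to `valence_score`, and systematic differences in scores across groups would indicate biased valence associations.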
We consider the duality between General Relativity and the theory of Einstein algebras, in the extended setting where one permits non-Hausdorff manifolds. We show that the duality breaks down, and then go on to discuss a sense in which general relativity, formulated using non-Hausdorff manifolds, exhibits excess structure when compared to Einstein algebras. We discuss how these results bear on a class of algebraically-motivated deflationist views about spacetime ontology. We conclude with a conjecture concerning non-Hausdorff spacetimes with no bifurcate curves.
Alexander Etkind, <em> Nature’s Evil: A Cultural History of Natural Resources </em> (2021): - Helen Thompson in ‘The New Age of Tragedy’, <em> The New Statesman </em> (2023): - Adam Hanieh, ‘Petrochemical Empire: The Geo-Politics of Fossil-Fuelled Production’, <em> New Left Review </em> (2021): - Laleh Khalili, <em> Sinews of War and Trade: Shipping and Capitalism in the Arabian Peninsula </em> (2021): - Timothy Mitchell, <em> Carbon Democracy: </em>
Paul Valéry, ‘The Crisis of the Mind’ (1919): - Osip Mandelstam, ‘The Nineteenth Century’ (1922): - Walter Benjamin, ‘Experience and poverty’ (1933): - Stefan Zweig, Diary (Autumn 1939): - René Char, <em> Hypnos </em> (1946):
Quotes taken from ‘Putin’s war’, <em> New York Times </em> (16 December 2022). Leo Tolstoy, <em> Bethink Yourselves </em> (1904). Ken Jowitt, ‘Undemocratic Past, Unnamed Present, Undecided Future’ (1996). Ken Jowitt, ‘Setting History's Course’ (2009). Adam Curtis in <em> The Guardian </em> on his recent series, <em> Russia 1985-1999: TraumaZone </em> (2022). Fyodor Dostoevsky, <em> Notes from Underground </em> (1864).
Imagining the real. Really imagining. There and here, blending and blurring, all together. Hayao Miyazaki, <em> Shuna’s Journey. </em> Christopher de Bellaigue on the possibilities of ‘an unstoppable spiral of state violence and popular fury’ in Iran. Alexander Baunov on Russia’s objectives. 2022 is set to be ‘a fabulous year’ for some. Ali Ansari on ‘failures of imagination’ in Iran.
Theories of quantum gravity generically presuppose or predict that the reality underlying the relativistic spacetimes they describe is significantly non-spatiotemporal. On pain of empirical incoherence, approaches to quantum gravity must establish how relativistic spacetime emerges from their non-spatiotemporal structures. We argue that in order to secure this emergence, it is sufficient to establish that only those features of relativistic spacetimes that are functionally relevant in producing empirical evidence need be recovered. In order to complete this task, an account must be given of how the more fundamental structures instantiate these functional roles. We illustrate the general idea in the context of causal set theory and loop quantum gravity, two prominent approaches to quantum gravity.
It has been 61 years since Hugh Everett III's PhD dissertation, {\it On the Foundations of Quantum Mechanics}, was submitted to the Princeton University Physics Department. After more than a decade of relative obscurity it was resurrected by Bryce DeWitt as {\it The Many Worlds Interpretation of Quantum Mechanics} and since then has become an active topic of discussion, reinterpretation, and modification, especially among philosophers of science, quantum cosmologists, and advocates of quantum decoherence and quantum computing. Many of these analyses are quite sophisticated and considered to be important contributions to physics and philosophy. I am primarily an experimental physicist, and my pragmatic ruminations on the subject might be viewed with some suspicion. Indeed, Bohr's pragmatic {\it Copenhagen Interpretation} is often disparaged by this same cohort. Still, I think that my experimentalist's vantage point has something to offer, and I here offer it to you.
This work outlines a novel application of the empirical analysis of causation, presented by Kutach, to the study of information theory and its role in physics. The central thesis of this paper is that causation and information are identical functional tools for distinguishing controllable correlations, and that this leads to a consistent view not only of information theory, but also of statistical physics and quantum information. This approach comes without the metaphysical baggage of declaring information a fundamental ingredient in physical reality and exorcises many of the otherwise puzzling problems that arise from this viewpoint, particularly obviating the problem of `excess baggage' in quantum mechanics.
Sebastian De Haro, Nicholas Teh, Jeremy N. Butterfield
We discuss some aspects of the relation between dualities and gauge symmetries. Both of these ideas are of course multi-faceted, and we confine ourselves to making two points. Both points are about dualities in string theory, and both have the 'flavour' that two dual theories are 'closer in content' than you might think. For both points, we adopt a simple conception of a duality as an 'isomorphism' between theories: more precisely, as appropriate bijections between the two theories' sets of states and sets of quantities. The first point (Section 3) is that this conception of duality meshes with two dual theories being 'gauge related' in the general philosophical sense of being physically equivalent. For a string duality, such as T-duality and gauge/gravity duality, this means taking such features as the radius of a compact dimension, and the dimensionality of spacetime, to be 'gauge'. The second point (Sections 4, 5 and 6) is much more specific. We give a result about gauge/gravity duality that shows its relation to gauge symmetries (in the physical sense of symmetry transformations that are spacetime-dependent) to be subtler than you might expect. For gauge theories, you might expect that the duality bijections relate only gauge-invariant quantities and states, in the sense that gauge symmetries in one theory will be unrelated to any symmetries in the other theory. This may be so in general; and indeed, it is suggested by discussions of Polchinski and Horowitz. But we show that in gauge/gravity duality, each of a certain class of gauge symmetries in the gravity/bulk theory, viz. diffeomorphisms, is related by the duality to a position-dependent symmetry of the gauge/boundary theory.
I examine the debate between substantivalists and relationalists about the ontological character of spacetime and conclude it is not well posed. I argue that the so-called Hole Argument does not bear on the debate, because it provides no clear criterion to distinguish the positions. I propose two such precise criteria and construct separate arguments based on each to yield contrary conclusions, one supportive of something like relationalism and the other of something like substantivalism. The lesson is that one must fix an investigative context in order to make such criteria precise, but different investigative contexts yield inconsistent results. I examine questions of existence about spacetime structures other than the spacetime manifold itself to argue that it is more fruitful to focus on pragmatic issues of physicality, a notion that lends itself to several different explications, all of philosophical interest, none privileged a priori over any of the others. I conclude by suggesting an extension of the lessons of my arguments to the broader debate between realists and instrumentalists.
Lewis has recently argued that Maudlin's contingent absorber experiment remains a significant problem for the Transactional Interpretation (TI). He argues that the only straightforward way to resolve the challenge is by describing the absorbers as offer waves, and asserts that this is a previously unnoticed aspect of the challenge for TI. This argument is refuted in two basic ways: (i) it is noted that the Maudlin experiment cannot be meaningfully recast with absorbers described by quantum states; instead the author replaces it with an ordinary which-way experiment; and (ii) the extant rebuttals to the Maudlin challenge in its original form are not in fact subject to the alleged flaws that Lewis ascribes to them. This paper further seeks to clarify the issues raised in Lewis' presentation concerning the distinction between quantum systems and macroscopic objects in TI. It is noted that the latest, possibilist version of TI (PTI) has no ambiguity concerning macroscopic absorbers. In particular, macroscopic objects are not subject to indeterminate trajectories, since they are continually undergoing collapse. It is concluded that the Maudlin challenge poses no significant problem for the transactional interpretation.