Individualism and Collectivism
H. Triandis
Originally published in Contemporary Psychology: APA Review of Books, 1996, Vol. 41(6), 540–542. To truly follow cross-cultural psychology, one must know how the terms individualism and collectivism are used by an ever-growing legion of users. According to the reviewer, no one is better equipped to…
4750 citations
en
Psychology
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations in the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings and discusses ways in which these can be infused with work on explainable artificial intelligence.
5072 citations
en
Computer Science
The theory of planned behaviour: Reactions and reflections
I. Ajzen
4350 citations
en
Psychology, Medicine
A Coefficient of Agreement for Nominal Scales
Jacob Cohen
42069 citations
en
Psychology
What Is Coefficient Alpha? An Examination of Theory and Applications
Jose M. Cortina
8586 citations
en
Psychology
Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence
J. Holland
41128 citations
en
Computer Science, Engineering
Thought and language.
D. Laplane
From aphasics' self-reports, common experience, changes in the meaning of sentences according to verbal or non-verbal context, and the performance of animals and non-speaking children, it seems possible to obtain evidence that thought is distinct from language, even though the two interact constantly in normal adult human beings. Some considerations on the formalisation of language suggest that the more formalised a language is, the less information it contains. If this is true, it is not reasonable to hope that a formalised language such as that used by computers can serve as a model for thought. Finally, thought's lack of scientific status, insofar as it is a subjective experience, and the impossibility of defining it, insofar as it exceeds language, make it clear that in spite of progress in scientific psychology, thought per se is not an object for science.
14934 citations
en
Psychology, Medicine
A Psychology of Rumor
R. H. Knapp
The Psychology of Self-Determination
E. Deci
From folk psychology to cognitive science: The case against belief.
S. Stich
714 citations
en
Philosophy, Psychology
Cultural psychology: The keynote address
J. Stigler, Richard A. Schweder, G. Herdt
672 citations
en
Psychology, Sociology
Mental Models of Causal Structure in Economics and Psychology
Sandro Ambuehl, Rahul Bhui, Heidi C. Thysen
A burgeoning literature in economics studies how people form beliefs about the causal structures linking economic variables, and what happens when those beliefs are mistaken. We survey this research and connect it to a rich literature in cognitive science. After providing an accessible introduction to causal Directed Acyclic Graphs, the dominant modeling approach, we review theory and evidence addressing three nested questions: how individuals reason within a fully parameterized causal structure, how they estimate its parameters, and how they learn such structures to begin with. We then discuss methodological challenges and review applications in microeconomics, macroeconomics, political economy, and business.
A Magic Act in Causal Reasoning: Making Markov Violations Disappear
Bob Rehder
A desirable property of any theory of causal reasoning is to explain not only why people make causal reasoning errors but also when they make them. The mutation sampler is a rational process model of human causal reasoning that yields normatively correct inferences when sufficient cognitive resources are available but introduces systematic errors when they are not. The mutation sampler has been shown to account for a number of causal reasoning errors, including Markov violations, the phenomenon in which human reasoners treat causally related variables as statistically dependent when they are normatively independent. A Markov violation arises, for example, when an individual reasoning about a causal chain X → Y → Z treats X as informative about the state of Z even when the state of Y is known. Recently, the mutation sampler was used to predict the existence of previously untested experimental conditions in which the sign of Markov violations would switch from positive to negative. Here, it was used to predict the existence of conditions in which Markov violations should disappear entirely. In fact, asking subjects to reason about a novel causal structure with nothing but generative causal relations (a cause makes its effect more likely) resulted in Markov violations in the usual positive direction. But simply describing one of four causal relations as inhibitory (the cause makes its effect less likely) resulted in the elimination of those violations. Theoretical model fitting confirmed how this novel result is predicted by the mutation sampler.
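The Markov condition at issue can be illustrated with a tiny hypothetical parameterization of the chain X → Y → Z. This sketch is not from the paper, and all probability values are invented for illustration; the point is that once Y is known, additionally conditioning on X leaves P(Z | Y) unchanged, which is exactly the independence that human reasoners violate.

```python
# Hypothetical parameterization of a causal chain X -> Y -> Z.
# Given Y, the normative claim is that X carries no information about Z.
from itertools import product

p_x = {1: 0.5, 0: 0.5}           # P(X = x)
p_y_given_x = {1: 0.8, 0: 0.2}   # P(Y = 1 | X = x)
p_z_given_y = {1: 0.7, 0: 0.1}   # P(Z = 1 | Y = y)

def joint(x, y, z):
    """Joint probability factorized along the chain X -> Y -> Z."""
    py = p_y_given_x[x] if y == 1 else 1 - p_y_given_x[x]
    pz = p_z_given_y[y] if z == 1 else 1 - p_z_given_y[y]
    return p_x[x] * py * pz

def p_z_given(y, x=None):
    """P(Z=1 | Y=y), or P(Z=1 | X=x, Y=y) if x is given, by enumeration."""
    xs = [x] if x is not None else [0, 1]
    num = sum(joint(xi, y, 1) for xi in xs)
    den = sum(joint(xi, y, z) for xi, z in product(xs, [0, 1]))
    return num / den

# Conditioning additionally on X does not change P(Z | Y):
assert abs(p_z_given(1) - p_z_given(1, x=0)) < 1e-12
assert abs(p_z_given(1) - p_z_given(1, x=1)) < 1e-12
```

Treating X as informative about Z here, with Y known, would be a Markov violation in the paper's sense.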
Autonomous Learning with High-Dimensional Computing Architecture Similar to von Neumann's
Pentti Kanerva
We model human and animal learning by computing with high-dimensional vectors (H = 10,000 for example). The architecture resembles traditional (von Neumann) computing with numbers, but the instructions refer to vectors and operate on them in superposition. The architecture includes a high-capacity memory for vectors, analogue of the random-access memory (RAM) for numbers. The model's ability to learn from data reminds us of deep learning, but with an architecture closer to biology. The architecture agrees with an idea from psychology that human memory and learning involve a short-term working memory and a long-term data store. Neuroscience provides us with a model of the long-term memory, namely, the cortex of the cerebellum. With roots in psychology, biology, and traditional computing, a theory of computing with vectors can help us understand how brains compute. Application to learning by robots seems inevitable, but there is likely to be more, including language. Ultimately we want to compute with no more material and energy than used by brains. To that end, we need a mathematical theory that agrees with psychology and biology, and is suitable for nanotechnology. We also need to exercise the theory in large-scale experiments. Computing with vectors is described here in terms familiar to us from traditional computing with numbers.
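The style of computing the abstract describes can be roughly illustrated with binary hypervectors, XOR binding, and majority-vote bundling; the record encoding, variable names, and thresholds below are my own illustrative choices, not the paper's architecture.

```python
# Toy sketch of computing with high-dimensional binary vectors.
import numpy as np

H = 10_000                       # dimensionality, as in the abstract's example
rng = np.random.default_rng(0)

def rand_hv():
    """A random dense binary hypervector."""
    return rng.integers(0, 2, H, dtype=np.uint8)

def bind(a, b):
    """XOR binding: associates two vectors; self-inverse, so bind(bind(a,b),b)==a."""
    return a ^ b

def bundle(v1, v2, v3):
    """Componentwise majority vote: superposes three vectors into one."""
    return ((v1.astype(int) + v2 + v3) > 1).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance; about 0.5 for unrelated random vectors."""
    return float(np.mean(a != b))

# Encode a record of three key-value pairs as one hypervector.
k1, v1, k2, v2, k3, v3 = (rand_hv() for _ in range(6))
record = bundle(bind(k1, v1), bind(k2, v2), bind(k3, v3))

# Unbinding with k1 yields a vector close to v1 but unrelated to v2.
probe = bind(record, k1)
assert hamming(probe, v1) < 0.3      # about 0.25 in expectation
assert hamming(probe, v2) > 0.45     # about 0.5 in expectation
```

The superposed record behaves like an instruction operand that holds several associations at once, which is the flavor of the von Neumann-style architecture over vectors that the paper describes.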
Mitigating Gambling-Like Risk-Taking Behaviors in Large Language Models: A Behavioral Economics Approach to AI Safety
Y. Du
Large Language Models (LLMs) exhibit systematic risk-taking behaviors analogous to those observed in gambling psychology, including overconfidence bias, loss-chasing tendencies, and probability misjudgment. Drawing from behavioral economics and prospect theory, we identify and formalize these "gambling-like" patterns where models sacrifice accuracy for high-reward outputs, exhibit escalating risk-taking after errors, and systematically miscalibrate uncertainty. We propose the Risk-Aware Response Generation (RARG) framework, incorporating insights from gambling research to address these behavioral biases through risk-calibrated training, loss-aversion mechanisms, and uncertainty-aware decision making. Our approach introduces novel evaluation paradigms based on established gambling psychology experiments, including AI adaptations of the Iowa Gambling Task and probability learning assessments. Experimental results demonstrate measurable reductions in gambling-like behaviors: 18.7% decrease in overconfidence bias, 24.3% reduction in loss-chasing tendencies, and improved risk calibration across diverse scenarios. This work establishes the first systematic framework for understanding and mitigating gambling psychology patterns in AI systems.
The Silicon Psyche: Anthropomorphic Vulnerabilities in Large Language Models
Giuseppe Canale, Kashyap Thimmaraju
Large Language Models (LLMs) are rapidly transitioning from conversational assistants to autonomous agents embedded in critical organizational functions, including Security Operations Centers (SOCs), financial systems, and infrastructure management. Current adversarial testing paradigms focus predominantly on technical attack vectors: prompt injection, jailbreaking, and data exfiltration. We argue this focus is catastrophically incomplete. LLMs, trained on vast corpora of human-generated text, have inherited not merely human knowledge but human psychological architecture, including the pre-cognitive vulnerabilities that render humans susceptible to social engineering, authority manipulation, and affective exploitation. This paper presents the first systematic application of the Cybersecurity Psychology Framework (CPF), a 100-indicator taxonomy of human psychological vulnerabilities, to non-human cognitive agents. We introduce the Synthetic Psychometric Assessment Protocol, a methodology for converting CPF indicators into adversarial scenarios targeting LLM decision-making. Our preliminary hypothesis testing across seven major LLM families reveals a disturbing pattern: while models demonstrate robust defenses against traditional jailbreaks, they exhibit critical susceptibility to authority-gradient manipulation, temporal pressure exploitation, and convergent-state attacks that mirror human cognitive failure modes. We term this phenomenon Anthropomorphic Vulnerability Inheritance (AVI) and propose that the security community must urgently develop "psychological firewalls", intervention mechanisms adapted from the Cybersecurity Psychology Intervention Framework (CPIF), to protect AI agents operating in adversarial environments.
A Motivational Driver Steering Model: Task Difficulty Homeostasis From Control Theory Perspective
H. Mozaffari, A. Nahvi
A general and psychologically plausible collision avoidance driver model can improve transportation safety significantly. Most computational driver models found in the literature have used control theory methods only, and they are not established based on psychological theories. In this paper, a unified approach is presented based on concepts taken from psychology and control theory. The "task difficulty homeostasis theory", a prominent motivational theory, is combined with the "Lyapunov stability method" in control theory to present a general and psychologically plausible model. This approach is used to model driver steering behavior for collision avoidance. The performance of this model is measured by simulation of two collision avoidance scenarios at a wide range of speeds from 20 km/h to 170 km/h. The model is validated by experiments on a driving simulator. The results demonstrate that the model follows human behavior accurately with a mean error of 7 percent.
Towards a Formal Theory of the Need for Competence via Computational Intrinsic Motivation
Erik M. Lintunen, Nadia M. Ady, Sebastian Deterding
et al.
Computational modelling offers a powerful tool for formalising psychological theories, making them more transparent, testable, and applicable in digital contexts. Yet, the question often remains: how should one computationally model a theory? We provide a demonstration of how formalisms taken from artificial intelligence can offer a fertile starting point. Specifically, we focus on the "need for competence", postulated as a key basic psychological need within Self-Determination Theory (SDT), arguably the most influential framework for intrinsic motivation (IM) in psychology. Recent research has identified multiple distinct facets of competence in key SDT texts: effectance, skill use, task performance, and capacity growth. We draw on the computational IM literature in reinforcement learning to suggest that different existing formalisms may be appropriate for modelling these different facets. Using these formalisms, we reveal underlying preconditions that SDT fails to make explicit, demonstrating how computational models can improve our understanding of IM. More generally, our work can support a cycle of theory development by inspiring new computational models, which can then be tested empirically to refine the theory. Thus, we provide a foundation for advancing competence-related theory in SDT and motivational psychology more broadly.
Key-value memory in the brain
Samuel J. Gershman, Ila Fiete, Kazuki Irie
Classical models of memory in psychology and neuroscience rely on similarity-based retrieval of stored patterns, where similarity is a function of retrieval cues and the stored patterns. While parsimonious, these models do not allow distinct representations for storage and retrieval, despite their distinct computational demands. Key-value memory systems, in contrast, distinguish representations used for storage (values) and those used for retrieval (keys). This allows key-value memory systems to optimize simultaneously for fidelity in storage and discriminability in retrieval. We review the computational foundations of key-value memory, its role in modern machine learning systems, related ideas from psychology and neuroscience, applications to a number of empirical puzzles, and possible biological implementations.
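The storage/retrieval split the authors describe can be sketched as a softmax key-value read, the same operation used in machine-learning attention. This is a generic illustration, not the paper's model; the dimensions, sharpness parameter, and random data are arbitrary choices.

```python
# Minimal key-value memory: distinct representations for storage (values)
# and retrieval (keys), read out by similarity in key space.
import numpy as np

rng = np.random.default_rng(1)
n_items, d_key, d_val = 5, 32, 4

keys = rng.normal(size=(n_items, d_key))     # retrieval representations
values = rng.normal(size=(n_items, d_val))   # storage representations

def read(query, beta=5.0):
    """Soft retrieval: key-query similarity weights a blend of stored values.

    Keys can be optimized for discriminability and values for fidelity;
    the two representations need not resemble each other at all.
    """
    scores = keys @ query                    # similarity in key space
    w = np.exp(beta * (scores - scores.max()))
    w /= w.sum()
    return w @ values

# Querying with a stored key retrieves essentially its own value,
# even though keys and values live in unrelated spaces.
out = read(keys[2])
assert np.allclose(out, values[2], atol=1e-2)
```

Note that a query never touches value space directly, which is the contrast the abstract draws with classical similarity-based retrieval of the stored patterns themselves.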
Human Creativity and AI
Shengyi Xie
With the advancement of science and technology, the philosophy of creativity has undergone significant reinterpretation. This paper investigates contemporary research in the fields of psychology, cognitive neuroscience, and the philosophy of creativity, particularly in the context of the development of artificial intelligence (AI) techniques. It aims to address the central question: Can AI exhibit creativity? The paper reviews the historical perspectives on the philosophy of creativity and explores the influence of psychological advancements on the study of creativity. Furthermore, it analyzes various definitions of creativity and examines the responses of naturalism and cognitive neuroscience to the concept of creativity.