arXiv Open Access 2026

Belief Offloading in Human-AI Interaction

Rose E. Guingrich Dvija Mehta Umang Bhatt

Abstract

What happens when people's beliefs are derived from information provided by an LLM? People's use of LLM chatbots as thought partners can contribute to cognitive offloading, which can have adverse effects on cognitive skills in cases of over-reliance. This paper defines and investigates a particular kind of cognitive offloading in human-AI interaction, "belief offloading," in which people's processes of forming and upholding beliefs are offloaded onto an AI system, with downstream consequences for their behavior and the nature of their system of beliefs. Drawing on philosophy, psychology, and computer science research, we clarify the boundary conditions under which belief offloading occurs and provide a descriptive taxonomy of belief offloading and its normative implications. We close with directions for future work to assess the potential for and consequences of belief offloading in human-AI interaction.


Authors (3)

Rose E. Guingrich
Dvija Mehta
Umang Bhatt

Citation Format

Guingrich, R.E., Mehta, D., Bhatt, U. (2026). Belief Offloading in Human-AI Interaction. https://arxiv.org/abs/2602.08754

Journal Information

Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓