The Illusion of Trust in AI: Behavioural Differences Between Humans and Large Language Models
Abstract
As artificial intelligence (AI) systems increasingly enter trust-dependent domains, questions arise about whether their behaviour reflects genuine trustworthiness or merely the illusion of it. This study examined how humans and large language models (LLMs) establish and adjust trust in dynamic social interactions using a 50-round trust game. Across 100 human participants and three leading LLMs—ChatGPT-3.5, ChatGPT-4o and DeepSeek-V3—we compared trust trajectories, responsiveness to partner behaviour and reactions to unexpected outcomes. Human participants adjusted trust in line with partner trustworthiness and exhibited symmetrical responses to unexpected gains and violations. In contrast, LLMs showed fixed, model-specific behaviour with little to no adaptation based on interaction history. Despite their cooperative appearance, AI agents lacked mechanisms for social learning and trust calibration. These findings highlight a fundamental disconnect between perceived and actual AI behaviour and underscore the need for cautious interpretation of AI trust signals in socially sensitive contexts.
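For readers unfamiliar with the paradigm, the repeated trust game can be pictured as a simple simulation loop. The sketch below is a minimal, hypothetical Python illustration assuming the classic structure (per-round endowment, tripled investment, trustee return); the endowment, multiplier, and toy trust-update rule are placeholders, not the paper's actual design or the behaviour it reports.

```python
# Minimal, hypothetical sketch of a repeated trust game loop (assumed classic
# structure: per-round endowment, invested amount tripled, trustee returns a
# share). Parameter values and the toy update rule are illustrative only.

ROUNDS = 50        # the study uses a 50-round game
ENDOWMENT = 10.0   # hypothetical per-round endowment for the investor
MULTIPLIER = 3.0   # hypothetical multiplier on the invested amount


def trustworthy_partner(received: float) -> float:
    """Hypothetical trustee who returns half of the multiplied transfer."""
    return 0.5 * received


def play_round(invest_fraction: float, trustee) -> float:
    """Play one round and return the investor's payoff."""
    invested = invest_fraction * ENDOWMENT
    transferred = invested * MULTIPLIER
    returned = min(trustee(transferred), transferred)
    return ENDOWMENT - invested + returned


def simulate(trustee, rounds: int = ROUNDS) -> list[float]:
    """Track the investor's trust level (fraction invested) across rounds
    with a toy win-stay / lose-shift style update."""
    trust = 0.5
    trajectory = []
    for _ in range(rounds):
        payoff = play_round(trust, trustee)
        # Raise trust after better-than-endowment outcomes, lower it otherwise.
        step = 0.05 if payoff > ENDOWMENT else -0.05
        trust = min(1.0, max(0.0, trust + step))
        trajectory.append(trust)
    return trajectory


if __name__ == "__main__":
    print([round(t, 2) for t in simulate(trustworthy_partner)[:10]])
```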
Topics & Keywords
Authors (5)
Yuzhan Hang
Zhenhua Ling
Quan Liu
Xiaosong He
Wei Wu
Quick Access
- Publication Year: 2026
- Database Source: DOAJ
- DOI: 10.1155/hbe2/3628473
- Access: Open Access ✓