DOAJ Open Access 2026

The Illusion of Trust in AI: Behavioural Differences Between Humans and Large Language Models

Yuzhan Hang Zhenhua Ling Quan Liu Xiaosong He Wei Wu

Abstract

As artificial intelligence (AI) systems increasingly enter trust-dependent domains, questions arise about whether their behaviour reflects genuine trustworthiness or merely the illusion of it. This study examined how humans and large language models (LLMs) establish and adjust trust in dynamic social interactions using a 50-round trust game. Across 100 human participants and three leading LLMs—ChatGPT-3.5, ChatGPT-4o and DeepSeek-V3—we compared trust trajectories, responsiveness to partner behaviour and reactions to unexpected outcomes. Human participants adjusted trust in line with partner trustworthiness and exhibited symmetrical responses to unexpected gains and violations. In contrast, LLMs showed fixed, model-specific behaviour with little to no adaptation based on interaction history. Despite their cooperative appearance, AI agents lacked mechanisms for social learning and trust calibration. These findings highlight a fundamental disconnect between perceived and actual AI behaviour and underscore the need for cautious interpretation of AI trust signals in socially sensitive contexts.

Authors (5)

Yuzhan Hang

Zhenhua Ling

Quan Liu

Xiaosong He

Wei Wu

Citation Format

Hang, Y., Ling, Z., Liu, Q., He, X., & Wu, W. (2026). The Illusion of Trust in AI: Behavioural Differences Between Humans and Large Language Models. https://doi.org/10.1155/hbe2/3628473

Journal Information

Publication Year: 2026
Source Database: DOAJ
DOI: 10.1155/hbe2/3628473
Access: Open Access ✓