arXiv Open Access 2025

Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them

Allison Chen, Sunnie S. Y. Kim, Angel Franyutti, Amaya Dharmasiri, Kushin Mukherjee, Olga Russakovsky, Judith E. Fan

Abstract

How might messages about large language models (LLMs) found in public discourse influence the way people think about and interact with these models? To explore this question, we randomly assigned participants (N = 470) to watch short informational videos presenting LLMs as either machines, tools, or companions -- or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. We found that participants who watched videos presenting LLMs as companions believed that LLMs more fully possessed these capacities than did participants in other groups. In a follow-up study (N = 604), we replicated these findings and found nuanced effects of these videos on people's reliance on LLM-generated responses when seeking factual information. Together, these studies suggest that messages about LLMs -- beyond technical advances -- may shape what people believe about these systems and how they rely on LLM-generated responses.

Topics & Keywords

Authors (7)

Allison Chen

Sunnie S. Y. Kim

Angel Franyutti

Amaya Dharmasiri

Kushin Mukherjee

Olga Russakovsky

Judith E. Fan

Citation Format

Chen, A., Kim, S.S.Y., Franyutti, A., Dharmasiri, A., Mukherjee, K., Russakovsky, O., & Fan, J.E. (2025). Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them. https://arxiv.org/abs/2510.18039

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓