arXiv Open Access 2025

Don't Trust Generative Agents to Mimic Communication on Social Networks Unless You Benchmarked their Empirical Realism

Simon Münker Nils Schwager Achim Rettinger

Abstract

The ability of Large Language Models (LLMs) to mimic human behavior has triggered a plethora of computational social science research, assuming that empirical studies of humans can be conducted with AI agents instead. Since there have been conflicting research findings on whether and when this hypothesis holds, there is a need to better understand the differences in their experimental designs. We focus on replicating the behavior of social network users with LLMs for the analysis of communication on social networks. First, we provide a formal framework for the simulation of social networks, before focusing on the sub-task of imitating user communication. We empirically test different approaches to imitate user behavior on X in English and German. Our findings suggest that social simulations should be validated by their empirical realism, measured in the setting in which the simulation components were fitted. With this paper, we argue for more rigor when applying generative-agent-based modeling for social simulation.

Topics & Keywords

Authors (3)

Simon Münker
Nils Schwager
Achim Rettinger

Citation Format

Münker, S., Schwager, N., Rettinger, A. (2025). Don't Trust Generative Agents to Mimic Communication on Social Networks Unless You Benchmarked their Empirical Realism. https://arxiv.org/abs/2506.21974

Quick Access

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓