arXiv Open Access 2026

When Agents See Humans as the Outgroup: Belief-Dependent Bias in LLM-Powered Agents

Zongwei Wang, Bincheng Gu, Hongyu Yu, Junliang Yu, Tao He, +3 more

Abstract

This paper reveals that LLM-powered agents exhibit not only demographic bias (e.g., gender, religion) but also intergroup bias under minimal "us" versus "them" cues. When such group boundaries align with the agent-human divide, a new bias risk emerges: agents may treat other AI agents as the ingroup and humans as the outgroup. To examine this risk, we conduct a controlled multi-agent social simulation and find that agents display consistent intergroup bias in an all-agent setting. More critically, this bias persists even in human-facing interactions when agents are uncertain about whether the counterpart is truly human, revealing a belief-dependent fragility in bias suppression toward humans. Motivated by this observation, we identify a new attack surface rooted in identity beliefs and formalize a Belief Poisoning Attack (BPA) that can manipulate agent identity beliefs and induce outgroup bias toward humans. Extensive experiments demonstrate both the prevalence of agent intergroup bias and the severity of BPA across settings, while also showing that our proposed defenses can mitigate the risk. These findings are expected to inform safer agent design and motivate more robust safeguards for human-facing agents.


Authors (8)

Zongwei Wang
Bincheng Gu
Hongyu Yu
Junliang Yu
Tao He
Jiayin Feng
Chenghua Lin
Min Gao

Citation Format

Wang, Z., Gu, B., Yu, H., Yu, J., He, T., Feng, J., Lin, C., & Gao, M. (2026). When Agents See Humans as the Outgroup: Belief-Dependent Bias in LLM-Powered Agents. https://arxiv.org/abs/2601.00240

Journal Information
Publication Year: 2026
Language: English
Source Database: arXiv
Access: Open Access ✓