arXiv Open Access 2025

Revealing Political Bias in LLMs through Structured Multi-Agent Debate

Aishwarya Bandaru, Fabian Bindley, Trevor Bluth, Nandini Chavda, Baixu Chen, Ethan Law

Abstract

Large language models (LLMs) are increasingly used to simulate social behaviour, yet their political biases and interaction dynamics in debates remain underexplored. We investigate how LLM type and agent gender attributes influence political bias using a structured multi-agent debate framework, engaging Neutral, Republican, and Democrat American LLM agents in debates on politically sensitive topics. We systematically vary the underlying LLMs, agent genders, and debate formats to examine how model provenance and agent personas influence political bias and attitudes throughout debates. We find that Neutral agents consistently align with Democrats, while Republicans shift closer to the Neutral position; gender influences agent attitudes, with agents adapting their opinions when aware of other agents' genders; and, contrary to prior research, agents with shared political affiliations can form echo chambers, exhibiting the expected intensification of attitudes as debates progress.


Authors (6)

Aishwarya Bandaru

Fabian Bindley

Trevor Bluth

Nandini Chavda

Baixu Chen

Ethan Law

Citation Format

Bandaru, A., Bindley, F., Bluth, T., Chavda, N., Chen, B., & Law, E. (2025). Revealing Political Bias in LLMs through Structured Multi-Agent Debate. arXiv. https://arxiv.org/abs/2506.11825

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓