Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents
Abstract
There is general agreement that fostering trust and cooperation within the AI development ecosystem is essential to promote the adoption of trustworthy AI systems. By embedding Large Language Model (LLM) agents within an evolutionary game-theoretic framework, this paper investigates the complex interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios. Evolutionary game theory (EGT) is used to quantitatively model the dilemmas faced by each actor, while LLMs add further complexity and nuance, enabling repeated games and the incorporation of personality traits. Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" (distrustful and defecting) stances than pure game-theoretic agents. We observe that, in the case of full trust by users, incentives are effective at promoting effective regulation; conditional trust, however, may erode the "social pact". Establishing a virtuous feedback loop between users' trust and regulators' reputation thus appears key to nudging developers towards creating safe AI. However, the level at which this trust emerges may depend on the specific LLM used for testing. Our results thus provide guidance for AI regulation systems and help predict the outcomes of strategic LLM agents, should they be used to aid regulation itself.
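The evolutionary game-theoretic backbone referred to in the abstract can be illustrated with a minimal replicator-dynamics sketch for a one-population trust game. The payoff values, strategy labels and function name below are illustrative assumptions for exposition only, not taken from the paper's actual model.

```python
# Minimal replicator-dynamics sketch for a two-strategy "trust game".
# Index 0 = Trust, 1 = Not trust. Payoffs are hypothetical: mutual trust
# pays off, but trusting a defecting partner is costly.

def replicator_trajectory(payoff, x0=0.6, steps=2000, dt=0.01):
    """Evolve the fraction x of 'Trust' players under replicator dynamics.

    payoff[i][j] is the payoff to strategy i against strategy j.
    Discrete Euler update of  dx/dt = x (1 - x) (f_trust - f_not).
    """
    x = x0
    traj = [x]
    for _ in range(steps):
        f_trust = payoff[0][0] * x + payoff[0][1] * (1 - x)  # fitness of Trust
        f_not = payoff[1][0] * x + payoff[1][1] * (1 - x)    # fitness of Not trust
        x += dt * x * (1 - x) * (f_trust - f_not)
        x = min(max(x, 0.0), 1.0)  # keep the fraction in [0, 1]
        traj.append(x)
    return traj

# Hypothetical payoff matrix (rows: Trust, Not trust; cols: opponent plays Trust, Not trust):
payoffs = [[3.0, -1.0],
           [2.0,  0.0]]

traj = replicator_trajectory(payoffs)
```

With these assumed payoffs the game is bistable: a population starting with a majority of trusting players converges towards full trust, while one starting below the unstable mixed equilibrium collapses to distrust, mirroring the paper's point that the level at which trust emerges matters.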
Authors (18)
Alessio Buscemi
Daniele Proverbio
Paolo Bova
Nataliya Balabanova
Adeela Bashir
Theodor Cimpeanu
Henrique Correia da Fonseca
Manh Hong Duong
Elias Fernandez Domingos
Antonio M. Fernandes
Marcus Krellner
Ndidi Bianca Ogbo
Simon T. Powers
Fernando P. Santos
Zia Ush Shamszaman
Zhao Song
Alessandro Di Stefano
The Anh Han
Quick Access
- Publication Year: 2025
- Language: en
- Source Database: arXiv
- Access: Open Access ✓