arXiv Open Access 2024

Multi-agent reinforcement learning in the all-or-nothing public goods game on networks

Benedikt Valentin Meylahn

Abstract

We study interpersonal trust by means of the all-or-nothing public goods game between agents on a network. The agents are endowed with a simple yet adaptive learning rule, the exponential moving average, by which they estimate the behavior of their neighbors in the network. Theoretically, we show that in the long-time limit this multi-agent reinforcement learning process always eventually results in indefinite contribution to the public good or indefinite defection (no agent contributing to the public good). However, by simulating the pre-limit behavior, we see that on complex network structures there may be mixed states in which the process seems to stabilize before actual convergence to states in which agent beliefs and actions are all the same. In these metastable states, local network characteristics can determine whether agents have high or low trust in their neighbors. More generally, we find that denser networks result in lower rates of contribution to the public good. This has implications for how one can spread global contribution toward a public good by enabling smaller local interactions.
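The learning dynamic described in the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's specification: the decision rule (an agent contributes only if its estimated probability that all neighbors contribute clears a threshold), the threshold value, and the smoothing factor `alpha` are assumptions chosen to make the all-or-nothing flavor concrete; only the exponential-moving-average update itself is named in the abstract.

```python
import random

def simulate(adjacency, alpha=0.1, threshold=0.5, steps=200, seed=0):
    """Toy all-or-nothing public goods game with EMA belief updates.

    adjacency: dict mapping agent -> list of neighbor agents.
    Each agent keeps an EMA estimate of the probability that each
    neighbor contributes, and contributes itself only if the product
    of those estimates (its belief that *all* neighbors contribute)
    exceeds `threshold` -- an assumed decision rule, since in the
    all-or-nothing game the good pays off only if everyone pitches in.
    """
    rng = random.Random(seed)
    # belief[i][j]: agent i's EMA estimate that neighbor j contributes
    belief = {i: {j: rng.random() for j in adjacency[i]} for i in adjacency}
    action = {}
    for _ in range(steps):
        # decision step: contribute iff all neighbors are believed likely to
        for i in adjacency:
            p_all = 1.0
            for j in adjacency[i]:
                p_all *= belief[i][j]
            action[i] = p_all > threshold
        # learning step: exponential moving average over observed actions
        for i in adjacency:
            for j in adjacency[i]:
                observed = 1.0 if action[j] else 0.0
                belief[i][j] = (1 - alpha) * belief[i][j] + alpha * observed
    return action, belief

# Usage: a triangle network (every agent neighbors the other two)
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
acts, beliefs = simulate(tri)
```

On such a small, dense graph the process typically collapses quickly to a uniform state (all contribute or all defect), consistent with the abstract's long-time result; the metastable mixed states it describes would only appear on larger, more heterogeneous networks.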


Author (1)

Benedikt Valentin Meylahn

Citation Format

Meylahn, B.V. (2024). Multi-agent reinforcement learning in the all-or-nothing public goods game on networks. https://arxiv.org/abs/2412.20116

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓