Mitigating Relative Over-Generalization in Multi-Agent Reinforcement Learning

Ting Zhu Yue Jin Jeremie Houssineau Giovanni Montana

Abstract

In decentralized multi-agent reinforcement learning, agents learning in isolation can lead to relative over-generalization (RO), where optimal joint actions are undervalued in favor of suboptimal ones. This hinders effective coordination in cooperative tasks, as agents tend to choose actions that are individually rational but collectively suboptimal. To address this issue, we introduce MaxMax Q-Learning (MMQ), which employs an iterative process of sampling and evaluating potential next states, selecting those with maximal Q-values for learning. This approach refines approximations of ideal state transitions, aligning more closely with the optimal joint policy of collaborating agents. We provide theoretical analysis supporting MMQ's potential and present empirical evaluations across various environments susceptible to RO. Our results demonstrate that MMQ frequently outperforms existing baselines, exhibiting enhanced convergence and sample efficiency.
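To make the mechanism concrete, here is a minimal tabular sketch of the MMQ-style update described above, assuming discrete state and action spaces. All names (q_table, sample_next_states, n_samples, alpha, gamma) are illustrative assumptions; in particular, sample_next_states is a uniform stand-in for whatever mechanism proposes candidate next states (e.g., a learned dynamics model), and this is not the authors' implementation.

```python
import numpy as np

# Hypothetical tabular sketch of a MaxMax Q-Learning (MMQ) style update:
# sample candidate next states, evaluate their values, and bootstrap from
# the best one. Illustrative only; not the authors' implementation.

rng = np.random.default_rng(0)

n_states, n_actions = 10, 4
gamma = 0.95    # discount factor (assumed)
alpha = 0.1     # learning rate (assumed)
n_samples = 8   # candidate next states drawn per update (assumed)

q_table = np.zeros((n_states, n_actions))

def sample_next_states(state, action, n):
    """Stand-in sampler: draws n candidate next states uniformly.
    A real agent would sample from a learned transition model instead."""
    return rng.integers(0, n_states, size=n)

def mmq_update(state, action, reward, done):
    """One update step. The bootstrap target takes a max over the sampled
    next states AND over next actions (the 'max-max'), pulling the value
    estimate toward the most optimistic achievable transition."""
    candidates = sample_next_states(state, action, n_samples)
    best_next_value = 0.0 if done else max(q_table[s].max() for s in candidates)
    target = reward + gamma * best_next_value
    q_table[state, action] += alpha * (target - q_table[state, action])

# Example: a single update for one observed transition.
mmq_update(state=3, action=1, reward=1.0, done=False)
```

The double maximum is what counteracts relative over-generalization: a decentralized agent that averages over teammates' exploratory behaviour undervalues the optimal joint action, whereas bootstrapping from the best sampled next state keeps that action's value from being dragged down.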

Authors (4)

Ting Zhu
Yue Jin
Jeremie Houssineau
Giovanni Montana

Citation Format

Zhu, T., Jin, Y., Houssineau, J., & Montana, G. (2024). Mitigating Relative Over-Generalization in Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2411.11099. https://arxiv.org/abs/2411.11099

Journal Information

Publication Year: 2024
Language: English
Source Database: arXiv
Access: Open Access ✓