Semantic Scholar · Open Access · 2020 · 582 citations

QPLEX: Duplex Dueling Multi-Agent Q-Learning

Jianhao Wang Zhizhou Ren Terry Liu Yang Yu Chongjie Zhang

Abstract

We explore value-based multi-agent reinforcement learning (MARL) in the popular paradigm of centralized training with decentralized execution (CTDE). CTDE has an important concept, Individual-Global-Max (IGM) principle, which requires the consistency between joint and local action selections to support efficient local decision-making. However, in order to achieve scalability, existing MARL methods either limit representation expressiveness of their value function classes or relax the IGM consistency, which may suffer from instability risk or lead to poor performance. This paper presents a novel MARL approach, called duPLEX dueling multi-agent Q-learning (QPLEX), which takes a duplex dueling network architecture to factorize the joint value function. This duplex dueling structure encodes the IGM principle into the neural network architecture and thus enables efficient value function learning. Theoretical analysis shows that QPLEX achieves a complete IGM function class. Empirical experiments on StarCraft II micromanagement tasks demonstrate that QPLEX significantly outperforms state-of-the-art baselines in both online and offline data collection settings, and also reveal that QPLEX achieves high sample efficiency and can benefit from offline datasets without additional online exploration.
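The IGM (Individual-Global-Max) principle mentioned in the abstract requires that the greedy joint action of the joint value function coincide with the tuple of each agent's greedy local action, so agents can act independently at execution time. The sketch below is a minimal NumPy illustration of that consistency check using a simple additive factorization (as in VDN, a weaker value-function class than QPLEX's duplex dueling network); all variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 agents, 3 actions each, with local utilities Q_i.
# Additive mixing Q_tot(a1, a2) = Q1(a1) + Q2(a2) trivially satisfies IGM;
# QPLEX achieves the *complete* IGM class with a richer architecture.
q1 = rng.normal(size=3)  # agent 1's local action values
q2 = rng.normal(size=3)  # agent 2's local action values

# Joint value table under the additive factorization.
q_tot = q1[:, None] + q2[None, :]

# IGM consistency: the joint greedy action equals the tuple of
# local greedy actions, enabling decentralized execution.
joint_greedy = tuple(int(i) for i in np.unravel_index(np.argmax(q_tot), q_tot.shape))
local_greedy = (int(np.argmax(q1)), int(np.argmax(q2)))
assert joint_greedy == local_greedy
```

A factorization that violates this property (e.g. an unconstrained joint network) would force agents to coordinate at execution time, which is exactly what the CTDE paradigm rules out.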

Authors (5)

Jianhao Wang

Zhizhou Ren

Terry Liu

Yang Yu

Chongjie Zhang

Citation Format

Wang, J., Ren, Z., Liu, T., Yu, Y., & Zhang, C. (2020). QPLEX: Duplex Dueling Multi-Agent Q-Learning. https://www.semanticscholar.org/paper/052c100d45f949c06e8419b504e319b442cb3f0a

Quick Access

PDF not directly available; check the original source.
Journal Information
Year Published: 2020
Language: en
Total Citations: 582
Database Source: Semantic Scholar
Access: Open Access ✓