
Divergence-Augmented Policy Optimization

Qing Wang Yingru Li Jiechao Xiong Tong Zhang

Abstract

In deep reinforcement learning, policy optimization methods need to deal with issues such as function approximation and the reuse of off-policy data. Standard policy gradient methods do not handle off-policy data well, leading to premature convergence and instability. This paper introduces a method to stabilize policy optimization when off-policy data are reused. The idea is to include a Bregman divergence between the behavior policy that generated the data and the current policy, ensuring small and safe policy updates with off-policy data. The Bregman divergence is computed between the state distributions of the two policies, rather than on the action probabilities alone, leading to a divergence-augmented formulation. Empirical experiments on Atari games show that in the data-scarce regime, where reusing off-policy data becomes necessary, our method achieves better performance than other state-of-the-art deep reinforcement learning algorithms.
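To make the formulation concrete, here is a minimal LaTeX sketch of a divergence-augmented objective of the kind the abstract describes. The notation is an assumption based on standard usage, not taken from the paper: $\mu$ denotes the behavior policy, $\pi$ the current policy, $d_\pi$ the (discounted) state-visitation distribution of $\pi$, $A^{\mu}$ the advantage function under $\mu$, $D_{\Phi}$ the Bregman divergence induced by a convex function $\Phi$, and $\eta > 0$ a regularization coefficient.

$$D_{\Phi}(x \,\|\, y) \;=\; \Phi(x) - \Phi(y) - \langle \nabla \Phi(y),\, x - y \rangle$$

$$\max_{\pi} \;\; \mathbb{E}_{(s,a) \sim d_{\mu}}\!\left[ \frac{\pi(a \mid s)}{\mu(a \mid s)} \, A^{\mu}(s,a) \right] \;-\; \eta \, D_{\Phi}\!\left( d_{\pi} \,\|\, d_{\mu} \right)$$

The first term is an importance-weighted surrogate computed from off-policy data; the second penalizes movement between the state distributions of the two policies, rather than between per-state action probabilities alone, which is what the abstract refers to as divergence augmentation.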


Authors (4)

Qing Wang, Yingru Li, Jiechao Xiong, Tong Zhang

Citation Format

Wang, Q., Li, Y., Xiong, J., & Zhang, T. (2025). Divergence-Augmented Policy Optimization. arXiv. https://arxiv.org/abs/2501.15034

Journal Information

Year published: 2025
Language: English
Source database: arXiv
Access: Open Access ✓