arXiv Open Access 2024

Off-OAB: Off-Policy Policy Gradient Method with Optimal Action-Dependent Baseline

Wenjia Meng Qian Zheng Long Yang Yilong Yin Gang Pan

Abstract

Policy-based methods have achieved remarkable success in solving challenging reinforcement learning problems. Among these methods, off-policy policy gradient methods are particularly important because they can benefit from off-policy data. However, these methods suffer from the high variance of the off-policy policy gradient (OPPG) estimator, which results in poor sample efficiency during training. In this paper, we propose an off-policy policy gradient method with an optimal action-dependent baseline (Off-OAB) to mitigate this variance issue. Specifically, this baseline maintains the OPPG estimator's unbiasedness while theoretically minimizing its variance. To enhance practical computational efficiency, we design an approximated version of this optimal baseline. Utilizing this approximation, our method (Off-OAB) aims to decrease the OPPG estimator's variance during policy optimization. We evaluate the proposed Off-OAB method on six representative tasks from OpenAI Gym and MuJoCo, where it surpasses state-of-the-art methods on the majority of these tasks.
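The variance-reduction idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's Off-OAB construction (which derives an optimal action-dependent baseline); it uses a simple constant baseline in an importance-sampled policy gradient for a one-step bandit, and the policies, action values, and baseline here are all assumptions made for the demo. The point it shows is the one the abstract relies on: subtracting a baseline leaves the OPPG estimator unbiased while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3
theta = np.zeros(n_actions)                   # target-policy logits (assumed)
pi = np.exp(theta) / np.exp(theta).sum()      # target policy pi(a), softmax
mu = np.array([0.6, 0.3, 0.1])                # behavior policy mu(a) (assumed)
q = np.array([1.0, 2.0, 3.0])                 # action values Q(a) (assumed)

def oppg_samples(baseline, n=200_000):
    """Per-sample off-policy gradient estimates rho(a) * grad log pi(a) * (Q(a) - b)."""
    actions = rng.choice(n_actions, size=n, p=mu)
    rho = pi[actions] / mu[actions]           # importance weights pi(a)/mu(a)
    # Score of a softmax policy: d log pi(a) / d theta = onehot(a) - pi
    scores = -np.tile(pi, (n, 1))
    scores[np.arange(n), actions] += 1.0
    return (rho * (q[actions] - baseline))[:, None] * scores

no_base = oppg_samples(baseline=0.0)
with_base = oppg_samples(baseline=q.mean())   # simple constant baseline

# Both estimators target the same true gradient (the baseline term has zero
# mean under pi), but the baseline shrinks the per-sample spread - the
# variance-reduction effect that Off-OAB optimizes over action-dependent
# baselines instead of a constant.
```

Off-OAB goes further than this sketch by choosing the baseline as a function of the action that provably minimizes the estimator's variance, but the unbiasedness argument is the same: the subtracted term has zero expectation against the score function.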


Authors (5)

Wenjia Meng
Qian Zheng
Long Yang
Yilong Yin
Gang Pan

Citation Format

Meng, W., Zheng, Q., Yang, L., Yin, Y., & Pan, G. (2024). Off-OAB: Off-Policy Policy Gradient Method with Optimal Action-Dependent Baseline. arXiv preprint arXiv:2405.02572. https://arxiv.org/abs/2405.02572

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓