arXiv Open Access 2025

Analysis of On-policy Policy Gradient Methods under the Distribution Mismatch

Weizhen Wang Jianping He Xiaoming Duan

Abstract

Policy gradient methods are one of the most successful approaches for solving challenging reinforcement learning problems. Despite their empirical successes, many state-of-the-art policy gradient algorithms for discounted problems deviate from the theoretical policy gradient theorem due to the existence of a distribution mismatch. In this work, we analyze the impact of this mismatch on policy gradient methods. Specifically, we first show that in the case of tabular parameterizations, the biased gradient induced by the mismatch still yields a valid first-order characterization of global optimality. Then, we extend this analysis to more general parameterizations by deriving explicit bounds on both the state distribution mismatch and the resulting gradient mismatch in episodic and continuing MDPs, which are shown to vanish at least linearly as the discount factor approaches one. Building on these bounds, we further establish guarantees for the biased policy gradient iterates, showing that they approach approximate stationary points with respect to the exact gradient, with asymptotic residuals depending on the discount factor. Our findings offer insights into the robustness of policy gradient methods as well as the gap between theoretical foundations and practical implementations.
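The distribution mismatch described above can be illustrated numerically. The sketch below (a hypothetical two-state Markov chain under a fixed policy, not taken from the paper) compares the normalized discounted state-visitation distribution, which the exact policy gradient theorem weights states by, against the stationary distribution that undiscounted on-policy sampling actually follows, and shows the total-variation gap shrinking as the discount factor approaches one:

```python
import numpy as np

# Toy 2-state Markov chain induced by a fixed policy (illustrative values):
# P[s, s'] is the state-transition probability, mu the start distribution.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mu = np.array([1.0, 0.0])  # always start in state 0

def discounted_dist(gamma):
    """Normalized discounted visitation: d_gamma = (1-gamma) mu^T (I - gamma P)^{-1}."""
    return (1 - gamma) * mu @ np.linalg.inv(np.eye(2) - gamma * P)

def stationary_dist():
    """Stationary distribution of P: the distribution undiscounted
    on-policy sampling converges to in a continuing problem."""
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return v / v.sum()

d_inf = stationary_dist()
for gamma in (0.9, 0.99, 0.999):
    # Total-variation distance between the two state distributions.
    gap = 0.5 * np.abs(discounted_dist(gamma) - d_inf).sum()
    print(f"gamma={gamma}: TV mismatch = {gap:.4f}")
```

Consistent with the abstract's bounds, the printed mismatch here decreases roughly in proportion to 1 - gamma; practical algorithms that sample states without discounting are effectively using the stationary weighting, which is the source of the gradient bias analyzed in the paper.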



Citation

Wang, W., He, J., & Duan, X. (2025). Analysis of On-policy Policy Gradient Methods under the Distribution Mismatch. arXiv:2503.22244. https://arxiv.org/abs/2503.22244

Publication Information
Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓