Semantic Scholar · Open Access · 2022 · 177 citations

Proximal Policy Optimization With Policy Feedback

Yangyang Gu; Yuhu Cheng, Member, IEEE; C. L. Philip Chen, Fellow, IEEE; Xuesong Wang, Member, IEEE

Abstract

Proximal policy optimization (PPO) is a deep reinforcement learning algorithm based on the actor–critic (AC) architecture. In the classic AC architecture, the critic (value) network estimates the value function while the actor (policy) network optimizes the policy according to the estimated value function. The efficiency of the classic AC architecture is limited because the policy does not directly participate in the value function update; this makes the value function estimate inaccurate, which in turn degrades the performance of the PPO algorithm. To address this, we design a novel AC architecture with policy feedback (AC-PF) that introduces the policy into the update process of the value function, and we further propose PPO with policy feedback (PPO-PF). For the AC-PF architecture, the policy-based expected (PBE) value function and discounted reward formulas are designed by drawing inspiration from expected Sarsa. To enhance the sensitivity of the value function to policy changes and to improve the accuracy of the PBE value estimate at the early learning stage, we propose a policy update method based on a clipped discount factor. Moreover, we specifically define the loss functions of the policy network and value network to ensure that the policy update of PPO-PF is an unbiased estimate within the trust region. Experiments on Atari games and control tasks show that, compared to PPO, PPO-PF converges faster and achieves higher reward with smaller reward variance.
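The abstract credits expected Sarsa as the inspiration for the PBE value function. As a rough illustration of that idea only (not the paper's actual formulas; the function and variable names below are hypothetical), a one-step expected-Sarsa target averages the next-state action values under the current policy, rather than bootstrapping on a single sampled next action as plain Sarsa does:

```python
import numpy as np

def expected_sarsa_target(reward, q_next, policy_next, gamma=0.99):
    """One-step expected-Sarsa target: r + gamma * E_{a~pi}[Q(s', a)].

    Averaging over the policy's action distribution (instead of using the
    single sampled next action) reduces the variance of the bootstrap term
    and makes the target depend explicitly on the current policy.
    """
    expected_q = np.dot(policy_next, q_next)  # E_{a~pi}[Q(s', a)]
    return reward + gamma * expected_q

# Toy example: two actions available in the next state
q_next = np.array([1.0, 3.0])          # Q(s', a) estimates
policy_next = np.array([0.25, 0.75])   # pi(a | s')
target = expected_sarsa_target(1.0, q_next, policy_next)
print(round(target, 4))  # 1.0 + 0.99 * (0.25*1.0 + 0.75*3.0) = 3.475
```

Because the target is a function of the policy's probabilities, a change in the policy immediately changes the value target, which is the kind of policy feedback the AC-PF architecture builds on.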

Topics & Keywords

Authors (4)

Yangyang Gu

Yuhu Cheng, Member, IEEE

C. L. Philip Chen, Fellow, IEEE

Xuesong Wang, Member, IEEE

Citation Format

Gu, Y., Cheng, Y., Chen, C. L. P., & Wang, X. (2022). Proximal Policy Optimization With Policy Feedback. https://doi.org/10.1109/TSMC.2021.3098451

Quick Access

PDF not directly available

Check the original source →
View at source: doi.org/10.1109/TSMC.2021.3098451
Journal Information
Publication Year
2022
Language
en
Total Citations
177×
Source Database
Semantic Scholar
DOI
10.1109/TSMC.2021.3098451
Access
Open Access ✓