
Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction

Aviral Kumar Justin Fu G. Tucker S. Levine

Abstract

Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.
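The core idea in the abstract is that the Bellman backup should bootstrap only from actions that are well supported by the training data. The following is a minimal illustrative sketch of that general idea, not the BEAR implementation described in the paper: all function names are hypothetical, and BEAR's actual constraint (a support-matching penalty on the learned policy) is more involved than restricting the max to actions sampled from a behavior model.

```python
# Illustrative sketch only: constrain the Bellman bootstrap to actions that are
# likely under the data-collecting (behavior) policy, instead of maximizing over
# all actions. Names like behavior_action_sampler and q_values_fn are hypothetical.
import numpy as np

def constrained_backup_target(reward, next_state, q_values_fn,
                              behavior_action_sampler, gamma=0.99, n_samples=10):
    """Bellman target that bootstraps only from sampled in-distribution actions,
    mitigating error from out-of-distribution action-values."""
    # Sample candidate next actions from a model of the behavior policy.
    candidate_actions = behavior_action_sampler(next_state, n_samples)
    # Take the max Q-value over the in-support candidates only.
    candidate_q = np.array([q_values_fn(next_state, a) for a in candidate_actions])
    return reward + gamma * candidate_q.max()

# Toy usage with stand-in Q-function and behavior-policy sampler:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_values_fn = lambda s, a: float(-np.sum((a - s) ** 2))            # dummy Q
    behavior_action_sampler = lambda s, n: rng.normal(s, 0.1, (n, s.shape[0]))
    target = constrained_backup_target(reward=1.0,
                                       next_state=np.zeros(2),
                                       q_values_fn=q_values_fn,
                                       behavior_action_sampler=behavior_action_sampler)
    print(f"constrained Bellman target: {target:.3f}")
```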

Authors (4)

Aviral Kumar
Justin Fu
G. Tucker
S. Levine

Citation Format

Kumar, A., Fu, J., Tucker, G., Levine, S. (2019). Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction. https://www.semanticscholar.org/paper/82b4b03a4659d6e04bd7cbf51d6e08fde1348dbd

Quick Access

PDF not available directly; check the original source.
Journal Information

Year Published: 2019
Language: English (en)
Total Citations: 1240
Source Database: Semantic Scholar
Access: Open Access ✓