
Continuous Deep Q-Learning with Model-based Acceleration

S. Gu, T. Lillicrap, I. Sutskever, S. Levine

Abstract

Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. The NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.
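The core idea behind NAF can be stated as one constraint on the Q-function. The following is a sketch reconstructed from the paper's published formulation (the state x, action u, and the symbols θ, μ, P, L belong to that formulation and do not appear on this page): the advantage term is forced to be quadratic in the action, so the greedy action is available in closed form.

    Q(x, u \mid \theta^Q) = V(x \mid \theta^V) + A(x, u \mid \theta^A)
    A(x, u \mid \theta^A) = -\tfrac{1}{2}\,\big(u - \mu(x \mid \theta^\mu)\big)^\top P(x \mid \theta^P)\,\big(u - \mu(x \mid \theta^\mu)\big)
    P(x \mid \theta^P) = L(x \mid \theta^P)\,L(x \mid \theta^P)^\top

Here L(x | θ^P) is a lower-triangular matrix read off a network output, so P(x | θ^P) is positive-definite; the advantage term is therefore non-positive and peaks at u = μ(x | θ^μ), which makes arg max_u Q(x, u | θ^Q) = μ(x | θ^μ) trivial to compute and is what allows Q-learning with experience replay to run on continuous action spaces.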


Authors (4)

S. Gu

T. Lillicrap

I. Sutskever

S. Levine

Citation Format

Gu, S., Lillicrap, T., Sutskever, I., Levine, S. (2016). Continuous Deep Q-Learning with Model-based Acceleration. https://www.semanticscholar.org/paper/d358d41c69450b171327ebd99462b6afef687269

Quick Access

PDF not directly available

Check at the original source →
Journal Information
Publication Year: 2016
Language: English
Total Citations: 1063
Source Database: Semantic Scholar
Access: Open Access ✓