arXiv Open Access 2023

A Tutorial on Meta-Reinforcement Learning

Jacob Beck, Risto Vuorio, Evan Zheran Liu, Zheng Xiong, Luisa Zintgraf, Chelsea Finn, Shimon Whiteson

Abstract

While deep reinforcement learning (RL) has fueled multiple high-profile successes in machine learning, it is held back from more widespread adoption by its often poor data efficiency and the limited generality of the policies it produces. A promising approach for alleviating these limitations is to cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL. Meta-RL is most commonly studied in a problem setting where, given a distribution of tasks, the goal is to learn a policy that is capable of adapting to any new task from the task distribution with as little data as possible. In this survey, we describe the meta-RL problem setting in detail as well as its major variations. We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task. Using these clusters, we then survey meta-RL algorithms and applications. We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
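The problem setting described above — sample a task from a distribution, adapt to it from as little data as possible, and evaluate post-adaptation performance — can be illustrated with a toy sketch. The following minimal Python example (all names are hypothetical illustrations, not from the paper) uses a distribution of two-armed bandit tasks in which the rewarding arm varies per task; the inner loop adapts to one task, and the outer loop averages post-adaptation return over the distribution.

```python
import random

def sample_task(rng):
    """Sample one task from the distribution: a 2-armed bandit
    whose rewarding arm differs from task to task."""
    good_arm = rng.randrange(2)
    return lambda arm: 1.0 if arm == good_arm else 0.0

def adapt(task, num_steps):
    """Inner loop: adapt to a single task from a handful of interactions.
    Here adaptation is simply 'try each arm once, then exploit the
    empirical best' — a stand-in for a learned adaptation procedure."""
    counts = [0, 0]
    totals = [0.0, 0.0]
    reward_sum = 0.0
    for t in range(num_steps):
        if t < 2:
            arm = t  # explore both arms once
        else:
            means = [totals[a] / max(counts[a], 1) for a in range(2)]
            arm = 0 if means[0] >= means[1] else 1
        r = task(arm)
        counts[arm] += 1
        totals[arm] += r
        reward_sum += r
    return reward_sum / num_steps

def meta_evaluate(num_tasks=100, num_steps=10, seed=0):
    """Outer loop: average post-adaptation return across tasks
    drawn from the task distribution."""
    rng = random.Random(seed)
    returns = [adapt(sample_task(rng), num_steps) for _ in range(num_tasks)]
    return sum(returns) / num_tasks

print(meta_evaluate())  # one exploration step is wasted per task
```

In an actual meta-RL method the hand-coded `adapt` procedure would itself be parameterized and optimized in the outer loop, which is the distinction the survey develops in detail.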


Authors (7)

Jacob Beck
Risto Vuorio
Evan Zheran Liu
Zheng Xiong
Luisa Zintgraf
Chelsea Finn
Shimon Whiteson

Citation Format

Beck, J., Vuorio, R., Liu, E.Z., Xiong, Z., Zintgraf, L., Finn, C., & Whiteson, S. (2023). A Tutorial on Meta-Reinforcement Learning. arXiv:2301.08028. https://arxiv.org/abs/2301.08028

Journal Information
Publication Year
2023
Language
English (en)
Source Database
arXiv
Access
Open Access ✓