Semantic Scholar Open Access 2017 4878 citations

Deep Reinforcement Learning from Human Preferences

P. Christiano Jan Leike Tom B. Brown Miljan Martic S. Legg +1 more

Abstract

For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
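The core idea in the abstract, fitting a reward model to human preferences between pairs of trajectory segments, can be sketched with the Bradley-Terry style comparison model the paper uses: the probability that a human prefers one segment is taken to be proportional to the exponentiated sum of predicted rewards over that segment, and the reward model is trained with cross-entropy against the human's choices. This is a minimal illustrative sketch, not the authors' implementation; the function names are hypothetical.

```python
import math

def preference_prob(r_sum_1: float, r_sum_2: float) -> float:
    """Probability that a human prefers segment 1 over segment 2,
    given each segment's summed predicted reward (Bradley-Terry model)."""
    return math.exp(r_sum_1) / (math.exp(r_sum_1) + math.exp(r_sum_2))

def preference_loss(r_sum_1: float, r_sum_2: float, human_label: float) -> float:
    """Cross-entropy loss on the human's comparison.
    human_label = 1.0 if segment 1 was preferred, 0.0 if segment 2 was."""
    p1 = preference_prob(r_sum_1, r_sum_2)
    return -(human_label * math.log(p1) + (1.0 - human_label) * math.log(1.0 - p1))
```

Minimizing this loss over a dataset of human comparisons pushes the reward model to assign higher summed reward to preferred segments; the learned reward is then optimized with a standard RL algorithm. (For numerical stability a real implementation would work in log space rather than exponentiate raw sums.)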

Authors (6)

P. Christiano

Jan Leike

Tom B. Brown

Miljan Martic

S. Legg

Dario Amodei

Citation Format

Christiano, P., Leike, J., Brown, T.B., Martic, M., Legg, S., Amodei, D. (2017). Deep Reinforcement Learning from Human Preferences. https://www.semanticscholar.org/paper/5bbb6f9a8204eb13070b6f033e61c84ef8ee68dd

Quick Access

PDF not directly available

Journal Information

Publication Year: 2017
Language: en
Total Citations: 4878×
Source Database: Semantic Scholar
Access: Open Access ✓