
Is Q-learning Provably Efficient?

Chi Jin Zeyuan Allen-Zhu Sébastien Bubeck Michael I. Jordan

Abstract

Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [Deisenroth and Rasmussen 2011, Schulman et al. 2015]. The theoretical question of "whether model-free algorithms can be made sample efficient" is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret $\tilde{O}(\sqrt{H^3 SAT})$, where $S$ and $A$ are the numbers of states and actions, $H$ is the number of steps per episode, and $T$ is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single $\sqrt{H}$ factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes $\sqrt{T}$ regret without requiring access to a "simulator."
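The algorithm behind this bound is standard tabular Q-learning augmented with an optimism bonus at each update. Below is a minimal Python sketch of that idea, using the learning rate $\alpha_t = (H+1)/(H+t)$ and a Hoeffding-style bonus of order $\sqrt{H^3 \iota / t}$ from the paper's UCB-Hoeffding variant; the `env.reset()`/`env.step(x, a, h)` interface, the constant `c`, and the failure probability `p` are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def q_learning_ucb(env, S, A, H, K, c=1.0, p=0.05):
    """Sketch of tabular episodic Q-learning with a UCB-Hoeffding bonus.

    S, A: number of states/actions; H: steps per episode; K: episodes.
    `env.reset()` returns an initial state; `env.step(x, a, h)` returns
    a (reward, next_state) pair. Both are assumed interfaces.
    """
    T = K * H
    Q = np.full((H, S, A), float(H))   # optimistic initialization: Q <- H
    V = np.zeros((H + 1, S))           # V[H] = 0 marks end of episode
    V[:H] = H
    N = np.zeros((H, S, A), dtype=int) # visit counts per (step, state, action)

    iota = np.log(S * A * T / p)       # log factor inside the bonus
    for _ in range(K):
        x = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, x]))         # greedy w.r.t. optimistic Q
            r, x_next = env.step(x, a, h)
            N[h, x, a] += 1
            t = N[h, x, a]
            alpha = (H + 1) / (H + t)           # learning rate from the paper
            bonus = c * np.sqrt(H**3 * iota / t)
            target = r + V[h + 1, x_next] + bonus
            Q[h, x, a] = (1 - alpha) * Q[h, x, a] + alpha * target
            V[h, x] = min(H, Q[h, x].max())     # truncate at the value ceiling H
            x = x_next
    return Q
```

The optimistic initialization together with the per-visit bonus keeps the Q-estimates upper bounds on the optimal values with high probability, which is the mechanism driving the $\sqrt{T}$ regret in the paper's analysis.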

Authors (4)

Chi Jin

Zeyuan Allen-Zhu

Sébastien Bubeck

Michael I. Jordan

Citation Format

Jin, C., Allen-Zhu, Z., Bubeck, S., & Jordan, M. I. (2018). Is Q-learning Provably Efficient? In Advances in Neural Information Processing Systems 31 (NeurIPS 2018). https://www.semanticscholar.org/paper/03cc81e98942bdafd994af7a1d1e62a68ff8b682

Journal Information

Publication Year: 2018
Language: English
Total Citations: 894
Source Database: Semantic Scholar
Access: Open Access ✓