Semantic Scholar Open Access 2017 84 citations

Implementing the Deep Q-Network

Melrose Roderick, J. MacGlashan, Stefanie Tellex

Abstract

The Deep Q-Network proposed by Mnih et al. [2015] has become a benchmark and building point for much deep reinforcement learning research. However, replicating results for complex systems is often challenging since original scientific publications are not always able to describe in detail every important parameter setting and software engineering solution. In this paper, we present results from our work reproducing the results of the DQN paper. We highlight key areas in the implementation that were not covered in great detail in the original paper to make it easier for researchers to replicate these results, including termination conditions and gradient descent algorithms. Finally, we discuss methods for improving the computational performance and provide our own implementation that is designed to work with a range of domains, and not just the original Arcade Learning Environment [Bellemare et al., 2013].
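One of the implementation details the abstract calls out is the handling of termination conditions. In the DQN update, the bootstrap term is dropped on transitions that end an episode, so the target is just the immediate reward there. The sketch below illustrates this with a hypothetical helper (the function name, array shapes, and values are illustrative, not from the paper):

```python
import numpy as np

def dqn_targets(rewards, next_q_values, terminals, gamma=0.99):
    """Compute DQN bootstrap targets for a batch of transitions.

    rewards:       (batch,) immediate rewards
    next_q_values: (batch, num_actions) Q-values at the next state
    terminals:     (batch,) 1.0 where the transition ended the episode
    gamma:         discount factor

    On terminal transitions the (1 - terminals) mask zeroes the
    bootstrap term, so the target reduces to the reward alone --
    the termination-condition detail highlighted in the paper.
    """
    max_next_q = next_q_values.max(axis=1)
    return rewards + gamma * max_next_q * (1.0 - terminals)
```

For example, with `gamma=0.9`, a non-terminal transition with reward 1.0 and best next-state Q-value 2.0 gets target 1.0 + 0.9 * 2.0 = 2.8, while a terminal transition with reward 0.5 gets target 0.5 regardless of the next-state Q-values.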


Authors (3)


Melrose Roderick


J. MacGlashan


Stefanie Tellex

Citation Format

Roderick, M., MacGlashan, J., Tellex, S. (2017). Implementing the Deep Q-Network. https://www.semanticscholar.org/paper/42448430643f2bcad5cd54ef25d58182cb5f4b82

Publication Information
Year Published
2017
Language
en
Total Citations
84
Source Database
Semantic Scholar
Access
Open Access ✓