arXiv
Open Access
2019
Learning Factored Markov Decision Processes with Unawareness
Craig Innes
Alex Lascarides
Abstract
Methods for learning and planning in sequential decision problems often assume the learner is aware of all possible states and actions in advance. This assumption is sometimes untenable. In this paper, we give a method to learn factored Markov decision processes from both domain exploration and expert assistance, which guarantees convergence to near-optimal behaviour, even when the agent begins unaware of factors critical to success. Our experiments show our agent learns optimal behaviour on small and large problems, and that conserving information on discovering new possibilities results in faster convergence.
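To make the notion of a factored MDP concrete, here is a minimal illustrative sketch (not the paper's algorithm): the state is a tuple of binary features, and each action updates only the features it affects, as in a dynamic Bayesian network encoding. The feature and action names are hypothetical. An agent that is "unaware" in the paper's sense would initially model only a subset of these features.

```python
import itertools

# Toy factored MDP sketch (generic background, not the authors' method).
# State = tuple of binary features; transitions factor per feature, so the
# model is described with O(n) feature rules rather than 2^n flat states.

FEATURES = ["light_on", "door_open"]      # hypothetical feature names
ACTIONS = ["toggle_light", "toggle_door"]  # hypothetical action names

def step(state, action):
    """Deterministic factored transition: each action flips one feature."""
    light, door = state
    if action == "toggle_light":
        light = 1 - light
    elif action == "toggle_door":
        door = 1 - door
    return (light, door)

def enumerate_states():
    """The full (flat) state space induced by the factored representation."""
    return list(itertools.product([0, 1], repeat=len(FEATURES)))
```

An agent unaware of `door_open` would, in effect, work with `FEATURES = ["light_on"]` and would have to discover the missing factor through exploration or expert advice before it could act near-optimally.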
Journal Information
- Year Published: 2019
- Language: en
- Source Database: arXiv
- Access: Open Access ✓