Semantic Scholar · Open Access · 2003 · 205 citations

Potential-Based Shaping and Q-Value Initialization are Equivalent

Eric Wiewiora

Abstract

Shaping has proven to be a powerful but precarious means of improving reinforcement learning performance. Ng, Harada, and Russell (1999) proposed the potential-based shaping algorithm for adding shaping rewards in a way that guarantees the learner will learn optimal behavior. In this note, we prove certain similarities between this shaping algorithm and the initialization step required for several reinforcement learning algorithms. More specifically, we prove that a reinforcement learner with initial Q-values based on the shaping algorithm's potential function makes the same updates throughout learning as a learner receiving potential-based shaping rewards. We further prove that under a broad category of policies, the behavior of these two learners is indistinguishable. The comparison provides intuition on the theoretical properties of the shaping algorithm as well as a suggestion for a simpler method for capturing the algorithm's benefit. In addition, the equivalence raises previously unaddressed issues concerning the efficiency of learning with potential-based shaping.
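The equivalence described in the abstract can be illustrated with a small tabular Q-learning sketch. This is a hypothetical setup (the 3-state space, action set, learning rate, and potential function Φ are all arbitrary choices, not from the paper): one learner starts with Q = 0 and receives the potential-based shaping bonus F(s, s') = γΦ(s') − Φ(s) on top of each reward, while the other starts with Q(s, a) = Φ(s) and receives only the raw reward. Fed identical experience, their Q-tables differ by exactly Φ(s) at every step:

```python
import random

# Hypothetical toy setup, not from the paper: 3 states, 2 actions,
# an arbitrary potential function Phi over states.
GAMMA, ALPHA = 0.9, 0.5
STATES, ACTIONS = range(3), range(2)
PHI = {0: 1.0, 1: 5.0, 2: -2.0}

# Learner A: Q initialized to zero, receives potential-based shaping rewards.
q_shaped = {(s, a): 0.0 for s in STATES for a in ACTIONS}
# Learner B: Q initialized to Phi(s), receives only the raw reward.
q_init = {(s, a): PHI[s] for s in STATES for a in ACTIONS}

def update(q, s, a, r, s2):
    """One tabular Q-learning backup: Q(s,a) += alpha * (target - Q(s,a))."""
    target = r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (target - q[(s, a)])

rng = random.Random(0)
for _ in range(1000):
    # Feed both learners the identical experience tuple (s, a, r, s').
    s, a, s2 = rng.choice(STATES), rng.choice(ACTIONS), rng.choice(STATES)
    r = rng.uniform(-1.0, 1.0)
    shaping = GAMMA * PHI[s2] - PHI[s]      # F(s, s') = gamma*Phi(s') - Phi(s)
    update(q_shaped, s, a, r + shaping, s2)
    update(q_init, s, a, r, s2)

# The note's result: the two tables differ by exactly Phi(s) at every entry,
# so the learners make the same effective updates throughout learning.
for s in STATES:
    for a in ACTIONS:
        assert abs(q_init[(s, a)] - PHI[s] - q_shaped[(s, a)]) < 1e-6
print("Q_init(s,a) - Phi(s) == Q_shaped(s,a) for all (s,a)")
```

The invariant holds by induction: if Q_init = Q_shaped + Φ before an update, both learners compute the same max over next-state values up to the constant Φ(s'), so their increments are identical.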


Author (1)

Eric Wiewiora

Citation Format

Wiewiora, E. (2003). Potential-Based Shaping and Q-Value Initialization are Equivalent. Journal of Artificial Intelligence Research. https://doi.org/10.1613/jair.1190

Quick Access

View at Source: doi.org/10.1613/jair.1190

Journal Information
Year Published: 2003
Language: en
Total Citations: 205
Source Database: Semantic Scholar
DOI: 10.1613/jair.1190
Access: Open Access ✓