DOAJ Open Access 2022

Dependency Parsing with Backtracking using Deep Reinforcement Learning

Franck Dary, Maxime Petit, Alexis Nasr

Abstract

Greedy algorithms for NLP such as transition-based parsing are prone to error propagation. One way to overcome this problem is to allow the algorithm to backtrack and explore an alternative solution in cases where new evidence contradicts the solution explored so far. In order to implement such a behavior, we use reinforcement learning and let the algorithm backtrack in cases where such an action gets a better reward than continuing to explore the current solution. We test this idea on both POS tagging and dependency parsing and show that backtracking is an effective means to fight against error propagation.
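The mechanism the abstract describes can be sketched in miniature: a greedy decoder picks the action with the highest learned value at each step, and a dedicated BACK action undoes the last decision when its value exceeds that of continuing. The transition set and the hand-built `toy_q` value function below are illustrative assumptions for the demo, not the authors' actual model or features.

```python
# Hypothetical sketch: greedy transition-based decoding with a BACK action.
# ACTIONS and toy_q are made up for illustration; the paper's real system
# learns action values with deep reinforcement learning.

ACTIONS = ["SHIFT", "LEFT-ARC", "RIGHT-ARC", "BACK"]

def decode(q, n_steps):
    """Greedily take the highest-scoring action; BACK pops the last
    decision and bans it at that state so the mistake is not repeated."""
    seq = []     # actions committed so far
    banned = {}  # prefix tuple -> actions not to retry at that state
    while len(seq) < n_steps:
        prefix = tuple(seq)
        scores = {a: q(prefix, a) for a in ACTIONS
                  if a not in banned.get(prefix, set())}
        if not seq:
            scores.pop("BACK", None)  # nothing to undo yet
        action = max(scores, key=scores.get)
        if action == "BACK":
            last = seq.pop()
            banned.setdefault(tuple(seq), set()).add(last)
        else:
            seq.append(action)
    return seq

def toy_q(prefix, action):
    """Hand-built action values: RIGHT-ARC looks best at first, but one
    step later the new state makes BACK the highest-valued action."""
    table = {
        (): {"SHIFT": 1.0, "LEFT-ARC": 0.1, "RIGHT-ARC": 0.2, "BACK": 0.0},
        ("SHIFT",): {"SHIFT": 0.2, "LEFT-ARC": 0.3,
                     "RIGHT-ARC": 0.9, "BACK": 0.0},
        ("SHIFT", "RIGHT-ARC"): {"SHIFT": 0.1, "LEFT-ARC": 0.1,
                                 "RIGHT-ARC": 0.1, "BACK": 0.8},
        ("SHIFT", "LEFT-ARC"): {"SHIFT": 0.7, "LEFT-ARC": 0.1,
                                "RIGHT-ARC": 0.2, "BACK": 0.0},
    }
    default = {"SHIFT": 0.5, "LEFT-ARC": 0.4, "RIGHT-ARC": 0.4, "BACK": 0.0}
    return table.get(prefix, default)[action]

print(decode(toy_q, 3))  # -> ['SHIFT', 'LEFT-ARC', 'SHIFT']
```

The decoder first commits to RIGHT-ARC, then backtracks when BACK outscores every way of continuing, and explores LEFT-ARC instead: exactly the "retract on contradicting evidence" behavior that fights error propagation in the greedy setting.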

Authors (3)

Franck Dary

Maxime Petit

Alexis Nasr

Citation Format

Dary, F., Petit, M., & Nasr, A. (2022). Dependency Parsing with Backtracking using Deep Reinforcement Learning. Transactions of the Association for Computational Linguistics. https://doi.org/10.1162/tacl_a_00496

Quick Access

PDF not directly available

View at source: doi.org/10.1162/tacl_a_00496
Journal Information
Year Published
2022
Source Database
DOAJ
DOI
10.1162/tacl_a_00496
Access
Open Access ✓