Semantic Scholar · Open Access · 2019 · 212 citations

Monotonic Infinite Lookback Attention for Simultaneous Machine Translation

N. Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz +3 others

Abstract

Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.
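To make the mechanism described in the abstract concrete, here is a hypothetical, simplified sketch (not the authors' implementation) of MILk-style attention at inference time: a hard monotonic head scans the source left to right and stops at a position g, and a soft attention head then attends over the entire source prefix from the beginning of the sentence up to g. The function name, threshold, and scoring scheme are illustrative assumptions.

```python
# Hypothetical sketch of Monotonic Infinite Lookback (MILk) attention at
# inference time; names and scoring details are assumptions, not the
# authors' actual implementation.
import numpy as np

def milk_attention(query, keys, values, p_choose_threshold=0.5):
    """query: (d,); keys, values: (src_len, d).
    Returns (context, g), where g is the monotonic head's stop position."""
    # Hard monotonic head: scan left to right and "stop reading" at the
    # first source position whose sigmoid choosing probability crosses
    # the threshold (read to the end if none does).
    choose_logits = keys @ query
    p_choose = 1.0 / (1.0 + np.exp(-choose_logits))
    stops = np.nonzero(p_choose >= p_choose_threshold)[0]
    g = int(stops[0]) if stops.size else len(keys) - 1

    # Soft head with "infinite lookback": softmax attention over every
    # source token read so far, i.e. the window [0, g].
    scores = keys[: g + 1] @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ values[: g + 1]
    return context, g
```

Because the soft head always extends back to the start of the source, the context vector can use all tokens read so far, while the monotonic head alone decides when enough source has been read to emit the next target word.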


Authors (8)

N. Arivazhagan
Colin Cherry
Wolfgang Macherey
Chung-Cheng Chiu
Semih Yavuz
Ruoming Pang
Wei Li
Colin Raffel

Citation Format

Arivazhagan, N., Cherry, C., Macherey, W., Chiu, C.-C., Yavuz, S., Pang, R., Li, W., & Raffel, C. (2019). Monotonic Infinite Lookback Attention for Simultaneous Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1126

Quick Access

View at source: doi.org/10.18653/v1/P19-1126
Publication Information
Year published: 2019
Language: en
Total citations: 212
Source database: Semantic Scholar
DOI: 10.18653/v1/P19-1126
Access: Open Access ✓