
Byte Pair Encoding is Suboptimal for Language Model Pretraining

Kaj Bostrom Greg Durrett

Abstract

The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE’s greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE.
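To make the comparison concrete, the sketch below (an illustration, not the authors' exact pipeline) trains a BPE and a unigram LM tokenizer on the same corpus with the SentencePiece library, which implements both methods, and contrasts how each segments a word; the corpus path, vocabulary size, example word, and example outputs are placeholder assumptions.

```python
# Illustrative sketch (not the paper's exact setup): train a BPE and a
# unigram LM tokenizer on the same corpus with SentencePiece and compare
# segmentations. "corpus.txt" and vocab_size=8000 are assumptions.
import sentencepiece as spm

for model_type in ("bpe", "unigram"):
    spm.SentencePieceTrainer.train(
        input="corpus.txt",        # hypothetical plain-text training corpus
        model_prefix=model_type,   # writes bpe.model / unigram.model
        vocab_size=8000,           # assumed vocabulary size
        model_type=model_type,     # "bpe" = greedy merges, "unigram" = probabilistic LM
    )

bpe = spm.SentencePieceProcessor(model_file="bpe.model")
uni = spm.SentencePieceProcessor(model_file="unigram.model")

word = "unrelated"
print("BPE:    ", bpe.encode(word, out_type=str))   # illustrative, e.g. ['▁un', 'rel', 'ated']
print("Unigram:", uni.encode(word, out_type=str))   # illustrative, e.g. ['▁un', 'related']
```

The paper's claim is that, because BPE builds its vocabulary by greedily merging frequent pairs, its segmentations tend to align less well with morpheme boundaries than those produced by the unigram LM's probabilistic vocabulary selection.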


Authors (2)

Kaj Bostrom
Greg Durrett

Citation Format

Bostrom, K., & Durrett, G. (2020). Byte Pair Encoding is Suboptimal for Language Model Pretraining. In Findings of the Association for Computational Linguistics: EMNLP 2020. https://doi.org/10.18653/v1/2020.findings-emnlp.414


Publication Information
Publication Year: 2020
Language: English (en)
Total Citations: 289
Source Database: Semantic Scholar
DOI: 10.18653/v1/2020.findings-emnlp.414
Access: Open Access ✓