
BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages

Benjamin Heinzerling Michael Strube

Abstract

We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb
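For a sense of how these embeddings are meant to be consumed, below is a minimal usage sketch. It assumes the bpemb Python package from the linked repository, with a BPEmb class that downloads the pre-trained model for a language on first use and exposes encode and embed methods plus lang/vs/dim constructor parameters; these names are assumptions based on that repository, so verify against its README.

```python
# Minimal sketch, assuming the bpemb package (pip install bpemb)
# from https://github.com/bheinzerling/bpemb is available.
from bpemb import BPEmb

# Load English subword embeddings: a 10k-merge BPE vocabulary with
# 100-dimensional vectors (vs = vocabulary size, dim = embedding size;
# both parameter names are assumptions about the package interface).
bpemb_en = BPEmb(lang="en", vs=10000, dim=100)

# Segment raw text directly into BPE subword units -- no tokenizer
# needed, which is the "tokenization-free" property from the abstract.
tokens = bpemb_en.encode("unsupervised subword segmentation")
print(tokens)

# Look up one embedding vector per subword unit.
vectors = bpemb_en.embed("unsupervised subword segmentation")
print(vectors.shape)  # (number of subword units, 100)
```

Because BPE operates on raw character sequences, the same loading and lookup pattern applies across all 275 languages by changing the lang argument.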


Authors (2)

Benjamin Heinzerling

Michael Strube

Citation Format

Heinzerling, B., & Strube, M. (2017). BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages. https://arxiv.org/abs/1710.02187

Journal Information

Year Published: 2017
Language: en
Source Database: arXiv
Access: Open Access ✓