
German's Next Language Model

Branden Chan, Stefan Schweter, Timo Möller

Abstract

In this work we present the experiments that led to the creation of our BERT- and ELECTRA-based German language models, GBERT and GELECTRA. By varying the input training data, model size, and the presence of Whole Word Masking (WWM), we were able to attain SoTA performance across a set of document classification and named entity recognition (NER) tasks for both the base and large model sizes. We adopt an evaluation-driven approach in training these models, and our results indicate that both adding more data and utilizing WWM improve model performance. By benchmarking against existing German models, we show that these models are the best German models to date. Our trained models will be made publicly available to the research community.


Authors (3)

Branden Chan

Stefan Schweter

Timo Möller

Citation Format

Chan, B., Schweter, S., & Möller, T. (2020). German's Next Language Model. https://arxiv.org/abs/2010.10906

Journal Information

Year Published: 2020
Language: en
Source Database: arXiv
Access: Open Access