arXiv · Open Access · 2024

LLäMmlein: Transparent, Compact and Competitive German-Only Language Models from Scratch

Jan Pfister · Julia Wunderle · Andreas Hotho

Abstract

We create two German-only decoder models, LLäMmlein 120M and 1B, transparently from scratch and publish them, along with the training data, for the German NLP research community to use. The model training involved several key steps: extensive data preprocessing, the creation of a custom German tokenizer, the training itself, and the evaluation of the final models on various benchmarks. Throughout the training process, multiple checkpoints were saved and analyzed using the SuperGLEBer benchmark to monitor the models' learning dynamics. Compared to state-of-the-art models on the SuperGLEBer benchmark, both LLäMmlein models performed competitively, consistently matching or surpassing models of similar parameter size. The results show that the models' quality scales with size as expected, but performance improvements on some tasks plateaued early, offering valuable insights into resource allocation for future model development.
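
Both models and their training data are published for reuse. As a minimal sketch of how a released checkpoint of this kind could be loaded for German text generation with the Hugging Face transformers library, consider the following; the repository id and prompt are assumed, illustrative values, and the actual identifier should be taken from the arXiv page.

# Minimal sketch: loading a published LLäMmlein checkpoint for generation.
# The repo id below is an assumption for illustration, not confirmed by this
# page; use the identifier published alongside the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LSX-UniWue/LLaMmlein_1B"  # assumed, illustrative repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Die Würzburger Residenz ist"  # arbitrary German prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))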

Topics & Keywords

Authors (3)

Jan Pfister
Julia Wunderle
Andreas Hotho

Citation Format

Pfister, J., Wunderle, J., & Hotho, A. (2024). LLäMmlein: Transparent, Compact and Competitive German-Only Language Models from Scratch. arXiv:2411.11171. https://arxiv.org/abs/2411.11171

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓