arXiv Open Access 2022

BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling

Javier de la Rosa, Eduardo G. Ponferrada, Paulo Villegas, Pablo Gonzalez de Prado Salas, Manu Romero, María Grandury

Abstract

The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pre-training sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget. Our models are available at https://huggingface.co/bertin-project.
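The abstract only names the technique. The sketch below illustrates one plausible reading of perplexity sampling: score each document with a reference language model and keep it with a probability that peaks around a target perplexity, so that both boilerplate-like (very low perplexity) and noisy (very high perplexity) documents are down-weighted. The Gaussian-shaped acceptance weight, the center and width parameters, and the external perplexity_fn scorer are illustrative assumptions, not the paper's exact procedure.

    import math
    import random

    def gaussian_weight(log_ppl, center, width):
        # Acceptance weight peaked around a target log-perplexity
        # (illustrative choice; the paper may use a different weighting).
        return math.exp(-((log_ppl - center) ** 2) / (2 * width ** 2))

    def perplexity_sample(docs, perplexity_fn, center, width, seed=0):
        # perplexity_fn is assumed to be a per-document perplexity scorer
        # from a reference language model trained on clean Spanish text;
        # it is not defined in this sketch.
        rng = random.Random(seed)
        for doc in docs:
            weight = gaussian_weight(math.log(perplexity_fn(doc)), center, width)
            if rng.random() < weight:
                yield doc

In this reading, the subsampled corpus concentrates around "typical" clean text, which is consistent with the abstract's claim of matching state-of-the-art results with one fifth of the data.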


Authors (6)

Javier de la Rosa

Eduardo G. Ponferrada

Paulo Villegas

Pablo Gonzalez de Prado Salas

Manu Romero

María Grandury

Citation Format

Rosa, J.d.l., Ponferrada, E.G., Villegas, P., Salas, P.G.d.P., Romero, M., Grandury, M. (2022). BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling. https://arxiv.org/abs/2207.06814

Journal Information
Publication Year
2022
Language
en
Source Database
arXiv
Access
Open Access ✓