arXiv Open Access 2024

PeLLE: Encoder-based language models for Brazilian Portuguese based on open data

Guilherme Lamartine de Mello, Marcelo Finger, Felipe Serras, Miguel de Mello Carpi, Marcos Menon Jose, Pedro Henrique Domingues, Paulo Cavalim

Abstract

In this paper we present PeLLE, a family of large language models based on the RoBERTa architecture, for Brazilian Portuguese, trained on curated, open data from the Carolina corpus. Aiming at reproducible results, we describe details of the pretraining of the models. We also evaluate PeLLE models against a set of existing multilingual and PT-BR refined pretrained Transformer-based LLM encoders, contrasting the performance of large versus smaller-but-curated pretrained models in several downstream tasks. We conclude that several tasks perform better with larger models, but some tasks benefit from smaller-but-curated data in their pretraining.

Authors (7)

Guilherme Lamartine de Mello

Marcelo Finger

Felipe Serras

Miguel de Mello Carpi

Marcos Menon Jose

Pedro Henrique Domingues

Paulo Cavalim

Citation Format

Mello, G.L.d., Finger, M., Serras, F., Carpi, M.d.M., Jose, M.M., Domingues, P.H. et al. (2024). PeLLE: Encoder-based language models for Brazilian Portuguese based on open data. https://arxiv.org/abs/2402.19204

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓