arXiv Open Access 2024

SWEb: A Large Web Dataset for the Scandinavian Languages

Tobias Norlund, Tim Isbister, Amaru Cuba Gyllensten, Paul Dos Santos, Danila Petrelli, +2 more

Abstract

This paper presents the hitherto largest pretraining dataset for the Scandinavian languages: the Scandinavian WEb (SWEb), comprising over one trillion tokens. The paper details the collection and processing pipeline, and introduces a novel model-based text extractor that significantly reduces complexity in comparison with rule-based approaches. We also introduce a new cloze-style benchmark for evaluating language models in Swedish, and use this test to compare models trained on the SWEb data to models trained on FineWeb, with competitive results. All data, models and code are shared openly.
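The cloze-style evaluation mentioned in the abstract can be sketched as a multiple-choice scoring loop: for each prompt with a blank, a language model scores every candidate completion, and the highest-scoring candidate is compared against the gold answer. The sketch below is illustrative only; the function names, the `___` blank convention, and the toy scorer are assumptions, not the paper's actual benchmark format, and a real setup would sum token log-probabilities from a trained model.

```python
def score(model_logprob, prompt: str, candidate: str) -> float:
    """Score a candidate by the model's log-probability of the filled-in text.

    `model_logprob` is any callable mapping a string to a (log-)probability
    score; with a real LM this would sum per-token log-probs.
    """
    return model_logprob(prompt.replace("___", candidate))


def cloze_accuracy(model_logprob, examples) -> float:
    """Accuracy over cloze items.

    `examples` is a list of (prompt_with_blank, candidates, gold_index).
    An item counts as correct when the highest-scoring candidate is the gold one.
    """
    correct = 0
    for prompt, candidates, gold in examples:
        scores = [score(model_logprob, prompt, c) for c in candidates]
        best = max(range(len(candidates)), key=scores.__getitem__)
        correct += int(best == gold)
    return correct / len(examples)
```

A toy scorer that always prefers texts containing "Stockholm" would answer the item `("Huvudstaden i Sverige är ___.", ["Stockholm", "Oslo"], 0)` correctly, which shows the mechanics without any model involved.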


Authors (7)

Tobias Norlund
Tim Isbister
Amaru Cuba Gyllensten
Paul Dos Santos
Danila Petrelli
Ariel Ekgren
Magnus Sahlgren

Citation

Norlund, T., Isbister, T., Gyllensten, A. C., Dos Santos, P., Petrelli, D., Ekgren, A., & Sahlgren, M. (2024). SWEb: A Large Web Dataset for the Scandinavian Languages. https://arxiv.org/abs/2410.04456

Journal Information
Year: 2024
Language: en
Source: arXiv
Access: Open Access ✓