DOAJ Open Access 2022

Adapting vs. Pre-training Language Models for Historical Languages

Enrique Manjavacas Lauren Fonteyn

Abstract

As large language models such as BERT are becoming increasingly popular in Digital Humanities (DH), the question has arisen as to how such models can be made suitable for application to specific textual domains, including that of 'historical text'. Large language models like BERT can be pre-trained from scratch on a specific textual domain and achieve strong performance on a series of downstream tasks. However, this is a costly endeavour, both in terms of the computational resources and the substantial amounts of training data it requires. An appealing alternative, then, is to employ existing 'general purpose' models (pre-trained on present-day language) and subsequently adapt them to a specific domain by further pre-training. Focusing on the domain of historical text in English, this paper demonstrates that pre-training on domain-specific (i.e. historical) data from scratch yields a generally stronger background model than adapting a present-day language model. We show this on the basis of a variety of downstream tasks, ranging from established tasks such as Part-of-Speech tagging, Named Entity Recognition and Word Sense Disambiguation, to ad-hoc tasks like Sentence Periodization, which are specifically designed to test historically relevant processing.
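Both strategies contrasted in the abstract (pre-training from scratch and further pre-training on domain data) rest on the same masked-language-modelling objective introduced with BERT. As a rough illustration, the sketch below implements the standard BERT-style input corruption (15% of tokens selected; of those, 80% replaced by [MASK], 10% by a random token, 10% left unchanged). The token IDs, MASK_ID and vocabulary size are hypothetical placeholders, not values from the paper.

```python
import random

MASK_ID = 103       # placeholder ID for the [MASK] token (assumption)
VOCAB_SIZE = 30522  # placeholder vocabulary size (assumption)

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """BERT-style MLM corruption.

    Returns (corrupted_ids, labels): each position is selected with
    probability `mask_prob`; selected positions become [MASK] (80%),
    a random token (10%), or stay unchanged (10%). Labels hold the
    original ID at selected positions and -100 (the conventional
    'ignore' index) elsewhere.
    """
    rng = rng or random.Random()
    corrupted, labels = [], []
    for tid in token_ids:
        if rng.random() < mask_prob:
            labels.append(tid)          # model must predict this token
            roll = rng.random()
            if roll < 0.8:
                corrupted.append(MASK_ID)
            elif roll < 0.9:
                corrupted.append(rng.randrange(VOCAB_SIZE))
            else:
                corrupted.append(tid)   # kept as-is, still predicted
        else:
            labels.append(-100)         # ignored by the loss
            corrupted.append(tid)
    return corrupted, labels

# Hypothetical token IDs standing in for a tokenized historical sentence.
ids = list(range(1000, 1020))
corrupted, labels = mask_tokens(ids, rng=random.Random(0))
```

Whether one starts from random weights ("from scratch") or from a present-day checkpoint ("adapting") only changes the model's initialization; the corruption-and-predict loop above stays the same.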

Authors (2)

Enrique Manjavacas

Lauren Fonteyn

Citation Format

Manjavacas, E., Fonteyn, L. (2022). Adapting vs. Pre-training Language Models for Historical Languages. https://doi.org/10.46298/jdmdh.9152

Quick Access

View at Source: doi.org/10.46298/jdmdh.9152
Journal Information
Publication Year
2022
Source Database
DOAJ
DOI
10.46298/jdmdh.9152
Access
Open Access ✓