arXiv Open Access 2025

DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors

Gianluca Barmina, Nathalie Carmen Hau Norman, Peter Schneider-Kamp, Lukas Galke Poech

Abstract

We present an enhanced benchmark for evaluating linguistic acceptability in Danish. We first analyze the most common errors found in written Danish. Based on this analysis, we introduce a set of fourteen corruption functions that generate incorrect sentences by systematically introducing errors into existing correct Danish sentences. To ensure the accuracy of these corruptions, we assess their validity using both manual and automatic methods. The results are then used as a benchmark for evaluating Large Language Models on a linguistic acceptability judgement task. Our findings demonstrate that this extension is both broader and more comprehensive than the current state of the art. By incorporating a greater variety of corruption types, our benchmark provides a more rigorous assessment of linguistic acceptability and increases task difficulty, as evidenced by the lower performance of LLMs on our benchmark compared to existing ones. Our results also suggest that our benchmark has higher discriminatory power, allowing it to better distinguish well-performing models from low-performing ones.
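The abstract describes corruption functions that turn correct Danish sentences into unacceptable ones. As a rough illustration of the overall shape of such a function (not one of the paper's fourteen actual corruptions, whose error categories are not listed here), a minimal sketch might swap two adjacent words to break word order:

```python
import random

def swap_adjacent_words(sentence: str, seed: int = 0) -> str:
    """Corrupt a sentence by swapping two adjacent words.

    Hypothetical example only: the paper defines fourteen corruption
    functions targeting real Danish error categories; this generic
    word-order swap just illustrates the corrupt-a-correct-sentence idea.
    """
    words = sentence.split()
    if len(words) < 2:
        return sentence  # too short to corrupt
    rng = random.Random(seed)
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

# Each corrupted sentence can be paired with its correct source,
# yielding (acceptable, unacceptable) pairs for an acceptability task.
corrupted = swap_adjacent_words("Hunden løber i parken")
```

A benchmark built this way labels the original sentence as acceptable and the corrupted variant as unacceptable, and a model is scored on telling the two apart.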


Authors (4)

Gianluca Barmina
Nathalie Carmen Hau Norman
Peter Schneider-Kamp
Lukas Galke Poech

Citation Format

Barmina, G., Norman, N. C. H., Schneider-Kamp, P., & Poech, L. G. (2025). DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors. https://arxiv.org/abs/2512.04799

Journal Information
Publication Year
2025
Language
en
Database Source
arXiv
Access
Open Access ✓