
The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text

Yanzhu Guo Guokan Shang Michalis Vazirgiannis Chloé Clavel

Abstract

This study investigates the consequences of training language models on synthetic data generated by their predecessors, an increasingly prevalent practice given the prominence of powerful generative models. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we adapt and develop a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive fine-tuning experiments across various natural language generation tasks in English. Our findings reveal a consistent decrease in the diversity of model outputs through successive iterations, a decline that is especially pronounced for tasks demanding high levels of creativity. This trend underscores the potential risks of training language models on synthetic text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of language models.
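The abstract mentions lexical, syntactic, and semantic diversity metrics without defining them on this page. As an illustration only, the sketch below implements distinct-n, a standard lexical diversity measure (unique n-grams divided by total n-grams) representative of the lexical family such a study draws on; the function name and example strings are hypothetical and not the paper's actual implementation.

```python
# Illustrative sketch only: distinct-n, a common lexical diversity measure.
# The paper's actual metric suite is not specified on this page.
from collections import Counter

def distinct_n(texts, n=2):
    """Ratio of unique n-grams to total n-grams across a corpus of texts."""
    ngrams = Counter()
    for text in texts:
        tokens = text.split()  # naive whitespace tokenization
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# Varied outputs yield a higher score than repetitive ones.
varied = ["the cat sat on the mat", "a dog ran through the park"]
repetitive = ["the cat sat on the mat", "the cat sat on the mat"]
print(distinct_n(varied, n=2))      # 1.0: every bigram is unique
print(distinct_n(repetitive, n=2))  # 0.5: each bigram occurs twice
```

Under the recursive setup the abstract describes, a score like this falling across successive model generations would be one concrete signal of the diversity decline the authors report.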


Authors (4)

Yanzhu Guo
Guokan Shang
Michalis Vazirgiannis
Chloé Clavel

Citation Format

Guo, Y., Shang, G., Vazirgiannis, M., & Clavel, C. (2023). The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text. arXiv preprint arXiv:2311.09807. https://arxiv.org/abs/2311.09807

Journal Information
Publication Year
2023
Language
English
Source Database
arXiv
Access
Open Access ✓