arXiv Open Access 2020

Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment

Forrest Davis Marten van Schijndel

Abstract

A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all.


Citation Format

Davis, F., & van Schijndel, M. (2020). Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment. arXiv:2005.00165. https://arxiv.org/abs/2005.00165

Journal Information

Year Published: 2020
Language: en
Source Database: arXiv
Access: Open Access ✓