arXiv Open Access 2022

Emergent Linguistic Structures in Neural Networks are Fragile

Emanuele La Malfa Matthew Wicker Marta Kwiatkowska

Abstract

Large Language Models (LLMs) have been reported to have strong performance on natural language processing tasks. However, performance metrics such as accuracy do not measure the quality of the model in terms of its ability to robustly represent complex linguistic structures. In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations. To this end, we introduce measures of robustness of neural network models that leverage recent advances in extracting linguistic constructs from LLMs via probing tasks, i.e., simple tasks used to extract meaningful information about a single facet of a language model, such as syntax reconstruction and root identification. Empirically, we study four LLMs across six different corpora, analysing their performance and robustness with respect to syntax-preserving perturbations under the proposed measures. We provide evidence that context-free representations (e.g., GloVe) are in some cases competitive with context-dependent representations from modern LLMs (e.g., BERT), yet equally brittle to syntax-preserving perturbations. Our key observation is that emergent syntactic representations in neural networks are brittle. We make the code, trained models and logs available to the community as a contribution to the debate about the capabilities of LLMs.
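The abstract's core idea can be sketched in a few lines: apply a perturbation that leaves the parse tree unchanged (here, synonym substitution, one illustrative choice of syntax-preserving perturbation; the paper's own perturbations and measures may differ) and measure how far the word representations move. The toy GloVe-like embedding table and the max-distance measure below are assumptions for illustration, not the authors' code.

```python
import math

# Illustrative synonym table: substituting these words preserves syntax.
SYNONYMS = {"quick": "fast", "big": "large"}

# Toy context-free embeddings (GloVe-like lookup; values are made up).
EMB = {
    "the": [0.1, 0.2], "quick": [0.9, 0.1], "fast": [0.8, 0.2],
    "fox": [0.3, 0.7], "big": [0.5, 0.5], "large": [0.6, 0.4],
}

def perturb(sentence):
    """Replace words with synonyms; the syntactic structure is unchanged."""
    return [SYNONYMS.get(w, w) for w in sentence]

def robustness_gap(sentence):
    """Max per-token Euclidean distance between the original and the
    perturbed sentence's embeddings: a simple robustness measure."""
    perturbed = perturb(sentence)
    return max(
        math.dist(EMB[a], EMB[b]) for a, b in zip(sentence, perturbed)
    )

gap = robustness_gap(["the", "quick", "fox"])  # only "quick" -> "fast" moves
```

A robust representation would keep this gap small for all syntax-preserving perturbations; a large gap on a meaning- and syntax-preserving swap is the brittleness the paper reports.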


Authors (3)

Emanuele La Malfa

Matthew Wicker

Marta Kwiatkowska

Citation Format

La Malfa, E., Wicker, M., Kwiatkowska, M. (2022). Emergent Linguistic Structures in Neural Networks are Fragile. arXiv:2210.17406. https://arxiv.org/abs/2210.17406

Journal Information

Publication Year: 2022
Language: en
Source Database: arXiv
Access: Open Access ✓