
Linguistic Dependencies and Statistical Dependence

Jacob Louis Hoover, Alessandro Sordoni, Wenyu Du, Timothy J. O'Donnell

Abstract

Are pairs of words that tend to occur together also likely to stand in a linguistic dependency? This empirical question is motivated by a long history of literature in cognitive science, psycholinguistics, and NLP. In this work we contribute an extensive analysis of the relationship between linguistic dependencies and statistical dependence between words. Improving on previous work, we introduce the use of large pretrained language models to compute contextualized estimates of the pointwise mutual information between words (CPMI). For multiple models and languages, we extract dependency trees which maximize CPMI, and compare to gold standard linguistic dependencies. Overall, we find that CPMI dependencies achieve an unlabelled undirected attachment score of at most $\approx 0.5$. While far above chance, and consistently above a non-contextualized PMI baseline, this score is generally comparable to a simple baseline formed by connecting adjacent words. We analyze which kinds of linguistic dependencies are best captured in CPMI dependencies, and also find marked differences between the estimates of the large pretrained language models, illustrating how their different training schemes affect the type of dependencies they capture.
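
To make the method concrete, below is a minimal sketch (not the authors' released implementation) of how CPMI scores and a maximum-CPMI tree might be computed with a BERT-style masked language model via the `transformers` library. The model name, the token-level treatment (subword tokens rather than words), and the masking scheme, masking position i to score w_i and additionally masking position j for the conditional, are assumptions made here for illustration.

```python
# Hedged CPMI sketch. Assumptions: bert-base-cased, subword tokens treated as
# words, and CPMI(w_i; w_j | s) = log p(w_i | s_-i) - log p(w_i | s_-i,-j),
# one natural reading of the definition in the abstract.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

def masked_log_prob(ids, i, masked_positions):
    """log p(ids[i] | sentence with masked_positions replaced by [MASK])."""
    x = ids.clone()
    x[masked_positions] = tok.mask_token_id
    with torch.no_grad():
        logits = model(x.unsqueeze(0)).logits[0, i]
    return torch.log_softmax(logits, dim=-1)[ids[i]].item()

def cpmi_matrix(sentence):
    """Pairwise CPMI scores, symmetrized for an undirected tree."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    n = len(ids)
    S = torch.zeros(n, n)
    for i in range(1, n - 1):                    # skip [CLS] and [SEP]
        base = masked_log_prob(ids, i, [i])      # w_j visible in context
        for j in range(1, n - 1):
            if j != i:                           # additionally mask w_j
                S[i, j] = base - masked_log_prob(ids, i, [i, j])
    return (S + S.T) / 2

def max_spanning_tree(S, nodes):
    """Prim's algorithm: undirected spanning tree maximizing total CPMI."""
    in_tree, rest, edges = {nodes[0]}, set(nodes[1:]), []
    while rest:
        i, j = max(((a, b) for a in in_tree for b in rest),
                   key=lambda e: S[e[0], e[1]].item())
        edges.append((i, j))
        in_tree.add(j)
        rest.remove(j)
    return edges

S = cpmi_matrix("The quick brown fox jumps over the lazy dog .")
print(max_spanning_tree(S, nodes=list(range(1, S.shape[0] - 1))))
```

The tree is undirected and the score matrix is symmetrized (here by simple averaging, one possible choice) because, per the abstract, extracted trees are evaluated against gold dependencies with unlabelled undirected attachment score, i.e. the fraction of gold edges recovered.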

Topics & Keywords

Authors (4)

Jacob Louis Hoover

Alessandro Sordoni

Wenyu Du

Timothy J. O'Donnell

Citation Format

Hoover, J. L., Sordoni, A., Du, W., & O'Donnell, T. J. (2021). Linguistic Dependencies and Statistical Dependence. arXiv:2104.08685. https://arxiv.org/abs/2104.08685

Journal Information

Publication Year: 2021
Language: English
Source Database: arXiv
Access: Open Access ✓