
On learning an interpreted language with recurrent models

Denis Paperno

Abstract

Can recurrent neural nets, inspired by human sequential data processing, learn to understand language? We construct simplified datasets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. We find LSTM and GRU networks to generalise to compositional interpretation well, but only in the most favorable learning settings, with a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
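
The setup the abstract describes, recursive expressions with compositional interpretations fed to a recurrent network under a paced curriculum, can be illustrated with a small sketch. Everything below is an illustrative assumption rather than the paper's actual language or training regime: the toy vocabulary (function symbols f and g, names x0-x3), the invented denotations, and all hyperparameters.

# Hedged sketch: a toy interpreted language and an LSTM trained to map
# each expression to its denotation, with a depth-based curriculum.
# Vocabulary, denotations, and hyperparameters are assumptions, not the
# paper's actual setup.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

N_IND = 4                                  # four individuals in the domain
FUNCS = {"f": lambda i: (i + 1) % N_IND,   # invented toy denotations
         "g": lambda i: (i * 3) % N_IND}
NAMES = {f"x{i}": i for i in range(N_IND)}
VOCAB = list(FUNCS) + list(NAMES)
TOK2ID = {tok: idx for idx, tok in enumerate(VOCAB)}

def sample_expr(depth):
    """Sample an expression such as "f g x2" (meaning f(g(x2))) and its value."""
    tokens = [random.choice(list(FUNCS)) for _ in range(depth)]
    tokens.append(random.choice(list(NAMES)))
    value = NAMES[tokens[-1]]
    for tok in reversed(tokens[:-1]):      # innermost function applies first
        value = FUNCS[tok](value)
    return tokens, value

class Interpreter(nn.Module):
    """Reads tokens sequentially; classifies the individual denoted."""
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, N_IND)

    def forward(self, ids):
        states, _ = self.rnn(self.emb(ids))
        return self.out(states[:, -1])     # predict from the final state

model = Interpreter(len(VOCAB))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Curriculum: shallow expressions first, then progressively deeper ones.
# All expressions in a batch share a depth, so no padding is needed.
for depth in range(1, 6):
    for _ in range(500):
        batch = [sample_expr(depth) for _ in range(32)]
        ids = torch.tensor([[TOK2ID[t] for t in toks] for toks, _ in batch])
        targets = torch.tensor([v for _, v in batch])
        loss = loss_fn(model(ids), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"depth {depth}: final training loss {loss.item():.3f}")

Ordering training by expression depth is the simplest reading of the "well-paced curriculum" the abstract mentions; the contrast between left-to-right and right-to-left composition would correspond to varying the token order of such expressions.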


Citation Format

Paperno, D. (2018). On learning an interpreted language with recurrent models. https://arxiv.org/abs/1809.04128


Journal Information

Publication Year: 2018
Language: en
Source Database: arXiv
Access: Open Access ✓