
Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation

Ta-Chung Chi Ting-Han Fan Alexander I. Rudnicky Peter J. Ramadge

Abstract

Conventional wisdom holds that, unlike recurrent models, Transformers cannot perfectly model regular languages. Inspired by the notion of working memory, we propose a new Transformer variant named RegularGPT. With its novel combination of Weight-Sharing, Adaptive-Depth, and Sliding-Dilated-Attention, RegularGPT constructs working memory along the depth dimension, thereby enabling efficient and successful modeling of regular languages such as PARITY. We further test RegularGPT on the task of natural language length extrapolation and, surprisingly, find that it rediscovers the local windowed attention effect deemed necessary for length extrapolation in prior work.
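
The Sliding-Dilated-Attention component can be pictured as a causal attention mask whose reachable positions are spaced by a dilation factor that grows with depth, so that a logarithmic number of layers covers the whole sequence. The sketch below is only one illustrative reading of that idea, not the paper's implementation; the function name dilated_causal_mask and its window/dilation parameters are assumptions introduced here.

import torch

def dilated_causal_mask(seq_len: int, window: int, dilation: int) -> torch.Tensor:
    # True marks an allowed attention edge: query i may attend to key j iff
    # j <= i, (i - j) is a multiple of `dilation`, and at most `window` such
    # strided steps separate them.
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (seq_len, 1)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions, shape (1, seq_len)
    dist = i - j
    return (dist >= 0) & (dist % dilation == 0) & (dist // dilation < window)

# With window=2 and a dilation that doubles per layer (1, 2, 4, ...), every
# position can reach the final token after O(log seq_len) layers, which is the
# divide-and-conquer flavor the abstract attributes to depth-wise working memory.
masks = [dilated_causal_mask(seq_len=8, window=2, dilation=2 ** layer) for layer in range(3)]

Pairing such masks with weight-shared layers applied to an adaptive depth would then make depth behave like repeated applications of one recurrence, which is one way to read the working-memory claim; the actual RegularGPT construction should be taken from the paper itself.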


Authors (4)

Ta-Chung Chi
Ting-Han Fan
Alexander I. Rudnicky
Peter J. Ramadge

Citation Format

Chi, T.-C., Fan, T.-H., Rudnicky, A. I., & Ramadge, P. J. (2023). Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation. arXiv:2305.03796. https://arxiv.org/abs/2305.03796

Journal Information

Publication Year: 2023
Language: English (en)
Source Database: arXiv
Access: Open Access ✓