
Analyzing the Inner Workings of Transformers in Compositional Generalization

Ryoma Kumon, Hitomi Yanaka

Abstract

The compositional generalization abilities of neural models have been sought after for human-like linguistic competence. A popular way to evaluate such abilities is to assess the models' input-output behavior; however, this does not reveal the internal mechanisms, and the underlying competence of such models in compositional generalization remains unclear. To address this problem, we explore the inner workings of a Transformer model by finding an existing subnetwork that contributes to its generalization performance and by performing causal analyses of how the model uses syntactic features. We find that the model depends on syntactic features to produce the correct answer, but that the subnetwork, which generalizes much better than the whole model, relies on a non-compositional algorithm in addition to the syntactic features. We also show that the subnetwork improves its generalization performance relatively slowly during training compared to its in-distribution performance, and that the non-compositional solution is acquired early in training.
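
The abstract refers to identifying a subnetwork inside the trained Transformer that drives generalization, but this page does not describe the authors' procedure. Purely as an illustration, the sketch below shows one common family of approaches to subnetwork discovery: learning a sigmoid-relaxed binary mask over frozen weights, trained with the task loss plus a sparsity penalty. All names here (`MaskedLinear`, `mask_logits`, `temperature`) are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumption, not the authors' method): mask-based subnetwork
# discovery over a frozen layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Wraps a frozen linear layer with learnable mask logits over its weights."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False                 # freeze the original weights
        self.mask_logits = nn.Parameter(torch.zeros_like(linear.weight))

    def forward(self, x, temperature: float = 1.0):
        # Soft mask in (0, 1); at evaluation time it can be thresholded at 0.5
        # to obtain a hard binary subnetwork.
        mask = torch.sigmoid(self.mask_logits / temperature)
        return F.linear(x, self.linear.weight * mask, self.linear.bias)

# Usage: train only the mask logits with the task loss plus an L1-style
# sparsity penalty so that a small subnetwork remains.
layer = MaskedLinear(nn.Linear(16, 16))
x = torch.randn(4, 16)
out = layer(x)
sparsity_penalty = torch.sigmoid(layer.mask_logits).mean()
```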

Authors (2)

Ryoma Kumon
Hitomi Yanaka

Citation Format

Kumon, R., & Yanaka, H. (2025). Analyzing the Inner Workings of Transformers in Compositional Generalization. arXiv preprint arXiv:2502.15277. https://arxiv.org/abs/2502.15277

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓