Semantic Scholar · Open Access · 2024 · 251 citations

Do Llamas Work in English? On the Latent Language of Multilingual Transformers

Chris Wendler Veniamin Veselovsky Giovanni Monea Robert West

Abstract

We ask whether multilingual language models trained on unbalanced, English-dominated corpora use English as an internal pivot language -- a question of key importance for understanding how language models function and the origins of linguistic bias. Focusing on the Llama-2 family of transformer models, our study uses carefully constructed non-English prompts with a unique correct single-token continuation. From layer to layer, transformers gradually map an input embedding of the final prompt token to an output embedding from which next-token probabilities are computed. Tracking intermediate embeddings through their high-dimensional space reveals three distinct phases, whereby intermediate embeddings (1) start far away from output token embeddings; (2) already allow for decoding a semantically correct next token in the middle layers, but give higher probability to its version in English than in the input language; (3) finally move into an input-language-specific region of the embedding space. We cast these results into a conceptual model where the three phases operate in "input space", "concept space", and "output space", respectively. Crucially, our evidence suggests that the abstract "concept space" lies closer to English than to other languages, which may have important consequences regarding the biases held by multilingual language models.
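The per-layer decoding the abstract describes -- reading a next-token distribution off each intermediate embedding by projecting it through the output (unembedding) matrix -- is a logit-lens-style probe. Below is a minimal NumPy sketch of that idea on toy data; the dimensions, the random hidden states, and the token ids for the "English" vs. "input-language" variants are all illustrative stand-ins, not Llama-2's actual weights or vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 16, 50      # toy sizes (Llama-2 7B uses 4096 and 32000)
n_layers = 4

# Unembedding matrix: maps a hidden state to logits over the vocabulary.
W_U = rng.normal(size=(d_model, vocab))

def logit_lens(hidden_states, W_U):
    """For each layer's hidden state h, return softmax(h @ W_U)."""
    probs = []
    for h in hidden_states:
        logits = h @ W_U
        e = np.exp(logits - logits.max())   # stable softmax
        probs.append(e / e.sum())
    return probs

# Stand-in per-layer hidden states of the final prompt token.
hidden = [rng.normal(size=d_model) for _ in range(n_layers)]
layer_probs = logit_lens(hidden, W_U)

# Track two hypothetical continuations across layers, e.g. the English
# translation (id 3) vs. the correct input-language token (id 7).
en_id, in_id = 3, 7
for layer, p in enumerate(layer_probs):
    print(f"layer {layer}: P(en)={p[en_id]:.3f}  P(input-lang)={p[in_id]:.3f}")
```

On a real model one would take the hidden state of the last prompt token at every layer, apply the final normalization, and project through the language-model head; the paper's three-phase picture corresponds to how these per-layer distributions shift from near-uniform, to favoring the English token, to favoring the input-language token.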


Authors (4)

Chris Wendler
Veniamin Veselovsky
Giovanni Monea
Robert West

Citation Format

Wendler, C., Veselovsky, V., Monea, G., & West, R. (2024). Do Llamas Work in English? On the Latent Language of Multilingual Transformers. https://doi.org/10.48550/arXiv.2402.10588

Journal Information

Publication Year: 2024
Language: en
Total Citations: 251
Source Database: Semantic Scholar
DOI: 10.48550/arXiv.2402.10588
Access: Open Access