arXiv Open Access 2025

Language Lives in Sparse Dimensions: Toward Interpretable and Efficient Multilingual Control for Large Language Models

Chengzhi Zhong, Fei Cheng, Qianying Liu, Yugo Murawaki, Chenhui Chu, Sadao Kurohashi

Abstract

Large language models exhibit strong multilingual capabilities despite limited exposure to non-English data. Prior studies show that English-centric large language models map multilingual content into English-aligned representations at intermediate layers and then project them back into target-language token spaces in the final layer. From this observation, we hypothesize that this cross-lingual transition is governed by a small, sparse set of dimensions that occur at consistent indices from the intermediate layers through the final layer. Building on this insight, we introduce a simple, training-free method to identify and manipulate these dimensions, requiring as few as 50 sentences of either parallel or monolingual data. Experiments on a multilingual generation control task reveal the interpretability of these dimensions, demonstrating that interventions along them can switch the output language while preserving semantic content, and that the method surpasses prior neuron-based approaches at a substantially lower cost.
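The abstract's pipeline (identify a sparse set of language-coding dimensions from a handful of sentences, then shift only those dimensions to steer the output language) can be illustrated with a minimal NumPy sketch. This is not the paper's actual algorithm: the selection criterion (mean activation difference between languages on parallel data), the function names, and the toy hidden states are all assumptions made for illustration.

```python
import numpy as np

def find_language_dimensions(h_src, h_tgt, k=10):
    """Rank hidden dimensions by mean activation difference between two
    languages and return the top-k indices (hypothetical criterion)."""
    diff = h_tgt.mean(axis=0) - h_src.mean(axis=0)  # shape: (d,)
    idx = np.argsort(-np.abs(diff))[:k]             # k largest |differences|
    return idx, diff[idx]

def intervene(hidden, idx, delta, alpha=1.0):
    """Shift only the selected sparse dimensions toward the target language,
    leaving all other dimensions (and hence content) untouched."""
    out = hidden.copy()
    out[..., idx] += alpha * delta
    return out

# Toy demo: random activations stand in for model hidden states over
# 50 parallel sentences, with a sparse language signal planted by hand.
rng = np.random.default_rng(0)
d = 64
h_en = rng.normal(size=(50, d))
h_ja = h_en.copy()
h_ja[:, [3, 17, 42]] += 2.0          # the "language lives here" dimensions

idx, delta = find_language_dimensions(h_en, h_ja, k=3)
steered = intervene(h_en, idx, delta)  # recovers the target-language states
```

Because only k of d dimensions are touched, the intervention is cheap and directly interpretable: each selected index can be inspected or ablated independently.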


Authors (6)

Chengzhi Zhong
Fei Cheng
Qianying Liu
Yugo Murawaki
Chenhui Chu
Sadao Kurohashi

Citation Format

Zhong, C., Cheng, F., Liu, Q., Murawaki, Y., Chu, C., &amp; Kurohashi, S. (2025). Language Lives in Sparse Dimensions: Toward Interpretable and Efficient Multilingual Control for Large Language Models. https://arxiv.org/abs/2510.07213

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓