arXiv Open Access 2026

Merge and Conquer: Instructing Multilingual Models by Adding Target Language Weights

Eneko Valero, Maria Ribalta i Albado, Oscar Sainz, Naiara Perez, German Rigau

Abstract

Large Language Models (LLMs) remain heavily centered on English, with limited performance in low-resource languages. Existing adaptation approaches, such as continual pre-training, demand significant computational resources. In the case of instructed models, high-quality instruction data is also required; both resources are often inaccessible to low-resource language communities. Under these constraints, model merging offers a lightweight alternative, but its potential in low-resource contexts has not been systematically explored. In this work, we explore whether it is possible to transfer language knowledge to an instruction-tuned LLM by merging it with a language-specific base model, thereby eliminating the need for language-specific instructions and for repeated fine-tuning whenever stronger instructed variants become available. Through experiments covering four Iberian languages (Basque, Catalan, Galician, and Spanish) and two model families, we show that merging enables effective instruction-following behavior in new languages and even supports multilingual capability through the combination of multiple language-specific models. Our results indicate that model merging is a viable and efficient alternative to traditional adaptation methods for low-resource languages, achieving competitive performance while greatly reducing computational cost.
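The merging idea described in the abstract can be illustrated with a simple weight-arithmetic sketch: add the "language delta" (the language-specific base model minus the original base model) on top of an instruction-tuned model's weights. This is an illustrative, hedged example of one common merging scheme (task arithmetic), not necessarily the paper's exact recipe; the function name, the use of plain float lists as stand-ins for tensors, and the `alpha` scaling factor are all assumptions for demonstration.

```python
# Illustrative sketch of weight merging via task arithmetic (assumed scheme,
# not confirmed to be the paper's exact method). Weights are represented as
# dicts mapping parameter names to lists of floats, standing in for tensors.

def merge_language_weights(instructed, base, language_base, alpha=1.0):
    """Return merged weights: instructed + alpha * (language_base - base).

    instructed    -- weights of the instruction-tuned model
    base          -- weights of the original (pre-instruction) base model
    language_base -- weights of the base model adapted to the target language
    alpha         -- scaling factor for the language delta (assumed knob)
    """
    merged = {}
    for name, w_inst in instructed.items():
        # Language delta: what target-language pre-training changed in the base.
        delta = [lw - bw for lw, bw in zip(language_base[name], base[name])]
        # Add the delta on top of the instruction-tuned weights.
        merged[name] = [w + alpha * d for w, d in zip(w_inst, delta)]
    return merged

# Toy example with a single 3-element "layer":
base = {"layer.weight": [0.1, 0.2, 0.3]}
instructed = {"layer.weight": [0.2, 0.1, 0.4]}      # base + instruction tuning
language_base = {"layer.weight": [0.3, 0.5, 0.3]}   # base + target-language pre-training

merged = merge_language_weights(instructed, base, language_base, alpha=1.0)
print(merged["layer.weight"])
```

In this scheme, merging multiple language-specific models would amount to summing several language deltas onto the same instructed model, which matches the abstract's observation that combining multiple language-specific models can yield multilingual capability.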



Citation

Valero, E., Albado, M.R.i., Sainz, O., Perez, N., Rigau, G. (2026). Merge and Conquer: Instructing Multilingual Models by Adding Target Language Weights. https://arxiv.org/abs/2603.28263

Journal Information
Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓