arXiv Open Access 2026

Improving Training Efficiency and Reducing Maintenance Costs via Language Specific Model Merging

Alphaeus Dmonte, Vidhi Gupta, Daniel J Perry, Mark Arehart

Abstract

Fine-tuning a task-specific multilingual large language model (LLM) involves training the model on a multilingual dataset with examples in all the required languages. Updating one or more supported languages with additional data or adding support for a new language involves retraining the model, which can be computationally inefficient and creates a severe maintenance bottleneck. Recent research on merging multilingual multitask models has shown promise in terms of improved quality, but its computational and maintenance efficiency remains unstudied. In this work, we provide the first focused analysis of this merging strategy from an efficiency perspective, evaluating it across three independent tasks. We demonstrate significant efficiency gains while maintaining parity in terms of quality: this merging approach reduces the initial training time by up to 50%. We also demonstrate that updating an individual language and re-merging as part of model maintenance reduces training costs by more than 60%, compared to re-training the full multilingual model. We show this on both public and proprietary industry datasets, confirming that the approach works well for industrial use cases in addition to academic settings already studied in previous work.
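The maintenance workflow the abstract describes (train per-language models, merge them, and on an update retrain only one language before re-merging) can be sketched with one common merging method, uniform parameter averaging. This is an illustrative assumption: the paper's exact merging algorithm is not given in the abstract, and the function and parameter names below are hypothetical.

```python
# Minimal sketch of language-specific model merging via parameter averaging.
# Assumption: each per-language model is represented as a dict mapping
# parameter names to flat lists of floats; merge_models is an illustrative
# name, not an API from the paper.

def merge_models(models, weights=None):
    """Merge language-specific models by (optionally weighted) averaging.

    With no weights given, every language contributes equally.
    """
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    merged = {}
    for name in models[0]:
        merged[name] = [
            sum(w * m[name][i] for m, w in zip(models, weights))
            for i in range(len(models[0][name]))
        ]
    return merged

# Initial build: merge independently trained per-language models.
en = {"layer.weight": [1.0, 2.0]}
de = {"layer.weight": [3.0, 4.0]}
merged = merge_models([en, de])  # -> {"layer.weight": [2.0, 3.0]}

# Maintenance: retrain only the German model on new data, then re-merge.
# Only one language is retrained, rather than the full multilingual model.
de_updated = {"layer.weight": [5.0, 6.0]}
remerged = merge_models([en, de_updated])  # -> {"layer.weight": [3.0, 4.0]}
```

The cost saving in the abstract follows from this structure: an update touches one language-specific training run plus a cheap merge, instead of a full multilingual retraining pass.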


Authors (4)

Alphaeus Dmonte

Vidhi Gupta

Daniel J Perry

Mark Arehart

Citation Format

Dmonte, A., Gupta, V., Perry, D.J., Arehart, M. (2026). Improving Training Efficiency and Reducing Maintenance Costs via Language Specific Model Merging. https://arxiv.org/abs/2601.16127

Journal Information
Year Published
2026
Language
en
Source Database
arXiv
Access
Open Access