arXiv Open Access 2024

From English-Centric to Effective Bilingual: LLMs with Custom Tokenizers for Underrepresented Languages

Artur Kiulian, Anton Polishko, Mykola Khandoga, Yevhen Kostiuk, Guillermo Gabrielli, and 8 more

Abstract

In this paper, we propose a model-agnostic cost-effective approach to developing bilingual base large language models (LLMs) to support English and any target language. The method includes vocabulary expansion, initialization of new embeddings, model training and evaluation. We performed our experiments with three languages, each using a non-Latin script - Ukrainian, Arabic, and Georgian. Our approach demonstrates improved language performance while reducing computational costs. It mitigates the disproportionate penalization of underrepresented languages, promoting fairness and minimizing adverse phenomena such as code-switching and broken grammar. Additionally, we introduce new metrics to evaluate language quality, revealing that vocabulary size significantly impacts the quality of generated text.
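The abstract mentions vocabulary expansion and initialization of new embeddings. A common heuristic for this step (the paper's exact scheme may differ) is to initialize each new token's embedding as the mean of the embeddings of the subword pieces that the original tokenizer would have produced for it. A minimal sketch, assuming a NumPy embedding matrix and a hypothetical `old_tokenize` function standing in for the original tokenizer:

```python
import numpy as np

def init_new_embeddings(old_emb, old_tokenize, new_tokens):
    """Append rows for newly added vocabulary items, each initialized
    as the mean of its old-tokenizer subword embeddings."""
    rows = []
    for tok in new_tokens:
        ids = old_tokenize(tok)              # subword ids under the old vocab
        rows.append(old_emb[ids].mean(axis=0))
    return np.vstack([old_emb, np.stack(rows)])

# Toy example: a 4-token old vocab with 3-dimensional embeddings.
old_emb = np.array([[1., 0., 0.],
                    [0., 1., 0.],
                    [0., 0., 1.],
                    [1., 1., 1.]])

# Hypothetical tokenizer: the new token splits into old ids 0 and 1.
new_emb = init_new_embeddings(old_emb, lambda t: [0, 1], ["нов"])
print(new_emb.shape)   # (5, 3) -- original 4 rows plus one new row
print(new_emb[4])      # [0.5 0.5 0. ] -- mean of rows 0 and 1
```

Mean initialization keeps the new embeddings inside the distribution of the existing ones, which tends to make continued pretraining more stable than random initialization.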


Authors (13)

Artur Kiulian, Anton Polishko, Mykola Khandoga, Yevhen Kostiuk, Guillermo Gabrielli, Łukasz Gagała, Fadi Zaraket, Qusai Abu Obaida, Hrishikesh Garud, Wendy Wing Yee Mak, Dmytro Chaplynskyi, Selma Belhadj Amor, Grigol Peradze

Citation Format

Kiulian, A., Polishko, A., Khandoga, M., Kostiuk, Y., Gabrielli, G., Gagała, Ł. et al. (2024). From English-Centric to Effective Bilingual: LLMs with Custom Tokenizers for Underrepresented Languages. https://arxiv.org/abs/2410.18836

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓