arXiv Open Access 2024

RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization

Jaavid Aktar Husain, Raj Dabre, Aswanth Kumar, Jay Gala, Thanmay Jayakumar, +2 others

Abstract

This study addresses the challenge of extending Large Language Models (LLMs) to non-English languages that use non-Roman scripts. We propose an approach that utilizes the romanized form of text as an interface for LLMs, hypothesizing that its frequent informal use and shared tokens with English enhance cross-lingual alignment. Our approach involves the continual pretraining of an English LLM like Llama 2 on romanized text of non-English, non-Roman script languages, followed by instruction tuning on romanized data. The results indicate that romanized text not only reduces token fertility by 2x-4x but also matches or outperforms native script representation across various NLU, NLG, and MT tasks. Moreover, the embeddings computed on romanized text exhibit closer alignment with their English translations than those from the native script. Our approach presents a promising direction for leveraging the power of English LLMs in languages traditionally underrepresented in NLP. Our code is available at https://github.com/AI4Bharat/romansetu.
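The token-fertility claim in the abstract can be sanity-checked with a short Python sketch. The example below is illustrative only and is not the paper's pipeline: it assumes access to the Hugging Face "meta-llama/Llama-2-7b-hf" tokenizer and uses the indic-transliteration package's ITRANS scheme as a stand-in romanizer; RomanSetu's own romanization tooling may differ.

from transformers import AutoTokenizer
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

# Llama 2 tokenizer (assumed checkpoint; requires access to the model files)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# The same Hindi sentence in native Devanagari and in an illustrative romanization
native = "भारत एक विशाल देश है"
romanized = transliterate(native, sanscript.DEVANAGARI, sanscript.ITRANS)

def fertility(text: str) -> float:
    # Token fertility: subword tokens produced per whitespace-separated word
    return len(tokenizer.tokenize(text)) / len(text.split())

print(f"native script fertility: {fertility(native):.2f}")
print(f"romanized fertility:     {fertility(romanized):.2f}")

Under these assumptions, the romanized sentence should yield noticeably fewer tokens per word than the Devanagari original, which is the effect the abstract quantifies as a 2x-4x fertility reduction.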


Authors (7)

Jaavid Aktar Husain
Raj Dabre
Aswanth Kumar
Jay Gala
Thanmay Jayakumar
Ratish Puduppully
Anoop Kunchukuttan

Citation Format

Husain, J. A., Dabre, R., Kumar, A., Gala, J., Jayakumar, T., Puduppully, R., & Kunchukuttan, A. (2024). RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization. arXiv. https://arxiv.org/abs/2401.14280

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓