
Low-Resource Transliteration for Roman-Urdu and Urdu Using Transformer-Based Models

Umer Butt, Stalin Varanasi, Günter Neumann

Abstract

As the Information Retrieval (IR) field increasingly recognizes the importance of inclusivity, addressing the needs of low-resource languages remains a significant challenge. Transliteration between Urdu and its Romanized form, Roman Urdu, is underexplored despite the widespread use of both scripts in South Asia. Prior work using RNNs on the Roman-Urdu-Parl dataset showed promising results but suffered from poor domain adaptability and limited evaluation. We propose a transformer-based approach using the m2m100 multilingual translation model, enhanced with masked language modeling (MLM) pretraining and fine-tuning on both Roman-Urdu-Parl and the domain-diverse Dakshina dataset. To address the evaluation flaws of prior work, we introduce rigorous dataset splits and assess performance using BLEU, character-level BLEU, and chrF. Our model achieves strong transliteration performance, with Char-BLEU scores of 96.37 for Urdu→Roman-Urdu and 97.44 for Roman-Urdu→Urdu. These results outperform both RNN baselines and GPT-4o Mini, demonstrating the effectiveness of multilingual transfer learning for low-resource transliteration tasks.
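The pipeline described in the abstract (an m2m100 checkpoint used as a sequence-to-sequence transliterator, scored with BLEU, character-level BLEU, and chrF) can be sketched in a few lines of Python. The sketch below is illustrative, not the authors' code: the checkpoint size (facebook/m2m100_418M), the reuse of m2m100's "ur" language code for both scripts (Roman Urdu has no language code of its own in m2m100), and the toy sentence pair are all assumptions. The MLM pretraining and fine-tuning on Roman-Urdu-Parl and Dakshina are omitted, so the stock checkpoint will not reach the reported scores.

# Illustrative sketch of the inference/evaluation setup, not the authors' code.
# Assumptions: facebook/m2m100_418M (the paper does not state the checkpoint
# size) and m2m100's "ur" language code reused for both scripts.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
import sacrebleu

model_name = "facebook/m2m100_418M"  # assumed checkpoint
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

def transliterate(text: str) -> str:
    """Roman-Urdu -> Urdu with a (fine-tuned) m2m100 model."""
    tokenizer.src_lang = "ur"                      # source script tag (assumed)
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id("ur"),  # target language tag
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# Metrics from the paper: BLEU and chrF via sacrebleu; character-level BLEU is
# approximated here by scoring space-separated characters (hypothetical pair).
hypothesis = transliterate("mujhe urdu pasand hai")
reference = "مجھے اردو پسند ہے"

bleu = sacrebleu.corpus_bleu([hypothesis], [[reference]])
chrf = sacrebleu.corpus_chrf([hypothesis], [[reference]])
char_bleu = sacrebleu.corpus_bleu([" ".join(hypothesis)], [[" ".join(reference)]])
print(f"BLEU={bleu.score:.2f}  chrF={chrf.score:.2f}  Char-BLEU={char_bleu.score:.2f}")

Character-level metrics fit this task because transliteration errors are typically sub-word (a single letter or diacritic), which word-level BLEU penalizes too coarsely.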

Authors (3)

Umer Butt
Stalin Varanasi
Günter Neumann

Citation Format

Butt, U., Varanasi, S., & Neumann, G. (2025). Low-Resource Transliteration for Roman-Urdu and Urdu Using Transformer-Based Models. https://arxiv.org/abs/2503.21530

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓