arXiv Open Access 2024

Dallah: A Dialect-Aware Multimodal Large Language Model for Arabic

Fakhraddin Alwajih Gagan Bhatia Muhammad Abdul-Mageed

Abstract

Recent advancements have significantly enhanced the capabilities of Multimodal Large Language Models (MLLMs) in generating and understanding image-to-text content. Despite these successes, progress is predominantly limited to English due to the scarcity of high-quality multimodal resources in other languages. This limitation impedes the development of competitive models in languages such as Arabic. To alleviate this situation, we introduce an efficient Arabic multimodal assistant, dubbed Dallah, that utilizes an advanced language model based on LLaMA-2 to facilitate multimodal interactions. Dallah demonstrates state-of-the-art performance among Arabic MLLMs. Through fine-tuning on six Arabic dialects, Dallah showcases its capability to handle complex dialectal interactions incorporating both textual and visual elements. The model excels in two benchmark tests: one evaluating its performance on Modern Standard Arabic (MSA) and another specifically designed to assess dialectal responses. Beyond its robust performance in multimodal interaction tasks, Dallah has the potential to pave the way for further development of dialect-aware Arabic MLLMs.

Topics & Keywords


Citation Format

Alwajih, F., Bhatia, G., & Abdul-Mageed, M. (2024). Dallah: A Dialect-Aware Multimodal Large Language Model for Arabic. arXiv. https://arxiv.org/abs/2407.18129

Journal Information

Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓