arXiv Open Access 2026

Chitrakshara: A Large Multilingual Multimodal Dataset for Indian languages

Shaharukh Khan, Ali Faraz, Abhinav Ravi, Mohd Nauman, Mohd Sarfraz, +4 others

Abstract

Multimodal research has predominantly focused on single-image reasoning, with limited exploration of multi-image scenarios. Recent models have sought to enhance multi-image understanding through large-scale pretraining on interleaved image-text datasets. However, most Vision-Language Models (VLMs) are trained primarily on English datasets, leading to inadequate representation of Indian languages. To address this gap, we introduce the Chitrakshara dataset series, covering 11 Indian languages sourced from Common Crawl. It comprises (1) Chitrakshara-IL, a large-scale interleaved pretraining dataset with 193M images, 30B text tokens, and 50M multilingual documents, and (2) Chitrakshara-Cap, which includes 44M image-text pairs with 733M tokens. This paper details the data collection pipeline, including curation, filtering, and processing methodologies. Additionally, we present a comprehensive quality and diversity analysis to assess the dataset's representativeness across Indic languages and its potential for developing more culturally inclusive VLMs.

Authors (9)

Shaharukh Khan
Ali Faraz
Abhinav Ravi
Mohd Nauman
Mohd Sarfraz
Akshat Patidar
Raja Kolla
Chandra Khatri
Shubham Agarwal

Citation Format

Khan, S., Faraz, A., Ravi, A., Nauman, M., Sarfraz, M., Patidar, A. et al. (2026). Chitrakshara: A Large Multilingual Multimodal Dataset for Indian languages. https://arxiv.org/abs/2603.23521

Journal Information
Publication Year
2026
Language
en
Source Database
arXiv
Access
Open Access ✓