arXiv Open Access 2024

PersianMind: A Cross-Lingual Persian-English Large Language Model

Pedram Rostami, Ali Salemi, Mohammad Javad Dousti

Abstract

Large language models demonstrate remarkable proficiency in various linguistic tasks and have extensive knowledge across various domains. Although they perform best in English, their ability in other languages is notable too. In contrast, open-source models, such as LLaMa, are primarily trained on English datasets, resulting in poor performance in non-English languages. In this paper, we introduce PersianMind, an open-source bilingual large language model that demonstrates performance comparable to the closed-source GPT-3.5-turbo in Persian. By expanding LLaMa2's vocabulary with 10,000 Persian tokens and training it on a dataset comprising nearly 2 billion Persian tokens, we show that our approach preserves the model's English knowledge and employs transfer learning to excel at transferring task knowledge from one language to another.
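The vocabulary-expansion step described in the abstract (adding 10,000 Persian tokens to LLaMa2's tokenizer) also requires growing the model's token-embedding matrix by the same number of rows. The sketch below illustrates that resizing in pure Python; the toy vocabulary, embedding dimension, example Persian tokens, and the mean-of-existing-embeddings initialization are all illustrative assumptions, not details taken from the paper.

```python
import random

def expand_embeddings(embeddings, num_new_tokens):
    """Append rows for newly added tokens.

    New rows are initialized to the mean of the existing embeddings,
    a common heuristic for vocabulary expansion; the paper does not
    specify which initialization it uses.
    """
    dim = len(embeddings[0])
    mean_vec = [
        sum(row[i] for row in embeddings) / len(embeddings)
        for i in range(dim)
    ]
    return embeddings + [list(mean_vec) for _ in range(num_new_tokens)]

# Toy example: a 4-token vocabulary with 3-dimensional embeddings,
# extended by 2 hypothetical Persian tokens.
vocab = ["<s>", "</s>", "hello", "world"]
emb = [[random.random() for _ in range(3)] for _ in vocab]

new_tokens = ["سلام", "دنیا"]  # hypothetical added Persian tokens
vocab += new_tokens
emb = expand_embeddings(emb, len(new_tokens))
```

In a real setup this corresponds to extending the tokenizer's vocabulary and then resizing the model's embedding layer to match (e.g., with something like `resize_token_embeddings` in Hugging Face Transformers), after which continued pretraining on Persian text trains the new rows.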


Citation Format

Rostami, P., Salemi, A., Dousti, M.J. (2024). PersianMind: A Cross-Lingual Persian-English Large Language Model. https://arxiv.org/abs/2401.06466

Journal Information

Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓