arXiv Open Access 2024

SailCompass: Towards Reproducible and Robust Evaluation for Southeast Asian Languages

Jia Guo, Longxu Dou, Guangtao Zeng, Stanley Kok, Wei Lu, Qian Liu

Abstract

In this paper, we introduce SailCompass, a reproducible and robust evaluation benchmark for assessing Large Language Models (LLMs) on Southeast Asian (SEA) languages. SailCompass encompasses three main SEA languages and eight primary tasks, with 14 datasets covering three task types (generation, multiple-choice questions, and classification). To improve the robustness of the evaluation approach, we explore different prompt configurations for multiple-choice questions and leverage calibration to improve the faithfulness of classification tasks. With SailCompass, we derive the following findings: (1) SEA-specialized LLMs still outperform general LLMs, although the gap has narrowed; (2) a balanced language distribution is important for developing better SEA-specialized LLMs; (3) advanced prompting techniques (e.g., calibration, perplexity-based ranking) are necessary to better utilize LLMs. All datasets and evaluation scripts are public.
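The abstract mentions perplexity-based ranking as one of the prompting techniques for multiple-choice questions. A minimal sketch of the general idea follows; it is an illustration, not the paper's implementation. The toy unigram log-probability table and the function names are assumptions — a real evaluation would score each candidate continuation with the LLM itself.

```python
import math

def perplexity(text, logprob_table):
    """Perplexity of `text` under a toy unigram model.

    `logprob_table` maps tokens to log-probabilities (an assumed stand-in
    for per-token log-probs returned by an actual LLM). Unknown tokens get
    a small floor probability.
    """
    lps = [logprob_table.get(tok, math.log(1e-6)) for tok in text.split()]
    return math.exp(-sum(lps) / len(lps))

def rank_choices(question, choices, logprob_table):
    """Score each 'question + answer' continuation; lowest perplexity wins."""
    scored = [(perplexity(f"{question} {c}", logprob_table), c) for c in choices]
    return min(scored)[1]

# Toy table: "blue" is far more probable than "green" in this context.
table = {
    "the": math.log(0.5), "sky": math.log(0.3), "is": math.log(0.4),
    "blue": math.log(0.2), "green": math.log(1e-4),
}
print(rank_choices("the sky is", ["blue", "green"], table))  # → blue
```

Ranking answers by the perplexity of the full continuation avoids relying on the model emitting an option letter, which is one reason such techniques make multiple-choice evaluation more robust.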


Authors (6)

Jia Guo
Longxu Dou
Guangtao Zeng
Stanley Kok
Wei Lu
Qian Liu

Citation Format

Guo, J., Dou, L., Zeng, G., Kok, S., Lu, W., & Liu, Q. (2024). SailCompass: Towards Reproducible and Robust Evaluation for Southeast Asian Languages. https://arxiv.org/abs/2412.01186

Journal Information
Year Published: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓