arXiv Open Access 2025

Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting

Ej Zhou, Weiming Lu

Abstract

Social bias in language models can potentially exacerbate social inequalities. Although it has garnered wide attention, most research focuses on English data. In low-resource scenarios, models often perform worse due to insufficient training data. This study leverages high-resource language corpora to evaluate bias and to experiment with debiasing methods in low-resource languages. We evaluated the performance of recent multilingual models in five languages (English, Chinese, Russian, Indonesian, and Thai) and analyzed four bias dimensions: gender, religion, nationality, and race-color. By constructing multilingual bias evaluation datasets, this study enables fair comparisons between models across languages. We further investigated three debiasing methods (CDA, Dropout, and SenDeb) and demonstrated that debiasing methods from high-resource languages can be effectively transferred to low-resource ones, providing actionable insights for fairness research in multilingual NLP.
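Of the three methods named in the abstract, counterfactual data augmentation (CDA) is the most mechanical: the training corpus is augmented with copies of each sentence in which demographic terms (here, gendered words) are swapped, so the model sees both counterfactual variants. The sketch below illustrates the general CDA idea only; the tiny word-pair list is hypothetical, and the paper's multilingual setting would require curated, per-language swap lists.

```python
import re

# Illustrative bidirectional swap list (NOT the list used in the paper).
# Note this word-level swap is lossy for ambiguous forms like "her".
PAIRS = {"he": "she", "his": "her", "man": "woman",
         "father": "mother", "son": "daughter"}
SWAP = {**PAIRS, **{v: k for k, v in PAIRS.items()}}
_PATTERN = r"\b(" + "|".join(map(re.escape, SWAP)) + r")\b"

def counterfactual(sentence: str) -> str:
    """Return the sentence with every listed gendered term swapped."""
    def swap(m: re.Match) -> str:
        word = m.group(0)
        out = SWAP[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    return re.sub(_PATTERN, swap, sentence, flags=re.IGNORECASE)

def cda_augment(corpus: list[str]) -> list[str]:
    """CDA: keep each original sentence and add its counterfactual twin."""
    return [variant for s in corpus for variant in (s, counterfactual(s))]

print(counterfactual("He asked his father."))  # -> She asked her mother.
```

The augmented corpus doubles in size; fine-tuning on it is what nudges the model toward treating the swapped groups symmetrically.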


Authors (2)

Ej Zhou

Weiming Lu

Citation Format

Zhou, E., & Lu, W. (2025). Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting. https://arxiv.org/abs/2504.11183
