arXiv Open Access 2025

Artificial Intelligence Bias on English Language Learners in Automatic Scoring

Shuchen Guo, Yun Wang, Jichao Yu, Xuansheng Wu, Bilgehan Ayik, +5 more

Abstract

This study investigated potential scoring biases and disparities toward English Language Learners (ELLs) when automatic scoring systems are used on middle school students' written responses to science assessments. We specifically examined how training data that is unbalanced with respect to ELLs contributes to scoring bias and disparities. We fine-tuned BERT with four datasets: responses from (1) ELLs, (2) non-ELLs, (3) a mixed dataset reflecting the real-world proportion of ELLs and non-ELLs (unbalanced), and (4) a balanced mixed dataset with equal representation of both groups. The study analyzed 21 assessment items: 10 items with about 30,000 ELL responses, five items with about 1,000 ELL responses, and six items with about 200 ELL responses. Scoring accuracy (Acc) was calculated and compared to identify bias using Friedman tests. We measured the Mean Score Gaps (MSGs) between ELLs and non-ELLs and then calculated the differences between the MSGs produced by human raters and by the AI models to identify scoring disparities. We found no AI bias or distorted disparities between ELLs and non-ELLs when the training dataset was sufficiently large (ELL = 30,000 and ELL = 1,000), but concerns may arise when the sample size is limited (ELL = 200).
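The disparity metric described in the abstract, the difference between the Mean Score Gap (MSG) under human scoring and under AI scoring, can be sketched as below. This is a minimal illustration, not the authors' code; the function names and the toy rubric scores are hypothetical.

```python
def mean_score_gap(ell_scores, non_ell_scores):
    """Mean Score Gap (MSG): mean non-ELL score minus mean ELL score."""
    return (sum(non_ell_scores) / len(non_ell_scores)
            - sum(ell_scores) / len(ell_scores))

def scoring_disparity(human_ell, human_non_ell, ai_ell, ai_non_ell):
    """Difference between the AI-generated MSG and the human-generated MSG.

    A value near zero suggests the AI model preserves, rather than distorts,
    the gap already observed under human scoring.
    """
    return (mean_score_gap(ai_ell, ai_non_ell)
            - mean_score_gap(human_ell, human_non_ell))

# Toy example with hypothetical scores on a 0-3 rubric:
human_msg = mean_score_gap([1, 2, 1], [2, 3, 2])  # gap under human scoring
ai_msg = mean_score_gap([1, 2, 2], [2, 3, 2])     # gap under AI scoring
delta = ai_msg - human_msg                         # disparity introduced by the AI
```

A disparity close to zero across items would correspond to the paper's finding of no distorted disparities for the larger training sets.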

Topics & Keywords

Authors (10)

Shuchen Guo
Yun Wang
Jichao Yu
Xuansheng Wu
Bilgehan Ayik
Field M. Watts
Ehsan Latif
Ninghao Liu
Lei Liu
Xiaoming Zhai

Citation Format

Guo, S., Wang, Y., Yu, J., Wu, X., Ayik, B., Watts, F.M. et al. (2025). Artificial Intelligence Bias on English Language Learners in Automatic Scoring. https://arxiv.org/abs/2505.10643

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓