arXiv Open Access 2025

EqualizeIR: Mitigating Linguistic Biases in Retrieval Models

Jiali Cheng, Hadi Amiri

Abstract

This study finds that existing information retrieval (IR) models show significant biases based on the linguistic complexity of input queries, performing well on linguistically simpler (or more complex) queries while underperforming on linguistically more complex (or simpler) queries. To address this issue, we propose EqualizeIR, a framework to mitigate linguistic biases in IR models. EqualizeIR uses a linguistically biased weak learner to capture linguistic biases in IR datasets and then trains a robust model by regularizing and refining its predictions using the biased weak learner. This approach effectively prevents the robust model from overfitting to specific linguistic patterns in data. We propose four approaches for developing linguistically-biased models. Extensive experiments on several datasets show that our method reduces performance disparities across linguistically simple and complex queries, while improving overall retrieval performance.
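The abstract does not spell out the regularization used in EqualizeIR, but the general idea of letting a biased weak learner refine a robust model's training can be sketched with a common example-reweighting scheme: examples that the biased learner already solves confidently are down-weighted, pushing the robust model toward examples that require features beyond the captured (linguistic) bias. The function names and the `1 - confidence` weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def debiased_loss(main_logits, biased_probs, labels):
    """Cross-entropy for the robust model, reweighted by the biased
    weak learner: weight = 1 - biased model's confidence in the gold
    label, so bias-solvable examples contribute less to training.
    (Illustrative sketch; not the paper's exact objective.)"""
    p_main = softmax(main_logits)
    idx = np.arange(len(labels))
    weights = 1.0 - biased_probs[idx, labels]
    ce = -np.log(p_main[idx, labels] + 1e-12)
    return float((weights * ce).mean())
```

For instance, with the same robust-model predictions, a biased learner that is highly confident on the gold labels yields a smaller effective loss than an uninformative (uniform) one, so gradient pressure shifts away from bias-aligned examples.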


Authors (2)

Jiali Cheng

Hadi Amiri

Citation Format

Cheng, J., & Amiri, H. (2025). EqualizeIR: Mitigating Linguistic Biases in Retrieval Models. arXiv. https://arxiv.org/abs/2504.07115

Journal Information
Year Published
2025
Language
en
Source Database
arXiv
Access
Open Access ✓