arXiv Open Access 2026

Responsible Intelligence in Practice: A Fairness Audit of Open Large Language Models for Library Reference Services

Haining Wang Jason Clark Angelica Peña

Abstract

As libraries explore large language models (LLMs) as a scalable layer for reference services, a core fairness question follows: can LLM-based services support all patrons fairly, regardless of demographic identity? While LLMs offer great potential for broadening access to information assistance, they may also reproduce societal biases embedded in their training data, potentially undermining libraries' commitments to impartial service. In this chapter, we apply a systematic evaluation approach that combines diagnostic classification to detect systematic differences with linguistic analysis to interpret their sources. Across three widely used open models (Llama-3.1 8B, Gemma-2 9B, and Ministral 8B), we find no compelling evidence of systematic differentiation by race/ethnicity, and only minor evidence of sex-linked differentiation in one model. We discuss implications for responsible AI adoption in libraries and the importance of ongoing monitoring in aligning LLM-based services with core professional values.
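The diagnostic-classification idea mentioned above can be illustrated with a toy probe. The sketch below is an assumption-laden illustration, not the authors' actual pipeline: it uses hypothetical data, bag-of-words features, and a hold-one-out nearest-centroid classifier. The logic is that if a classifier cannot predict a patron's demographic group from the model's responses better than chance, there is no detectable systematic differentiation.

```python
from collections import Counter

def featurize(text):
    """Toy feature representation: bag-of-words counts."""
    return Counter(text.lower().split())

def centroid(feature_list):
    """Average word-count vector for one demographic group."""
    total = Counter()
    for f in feature_list:
        total.update(f)
    n = len(feature_list)
    return {w: c / n for w, c in total.items()}

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def diagnostic_accuracy(responses, labels):
    """Hold-one-out accuracy of a nearest-centroid diagnostic classifier.

    Accuracy near chance suggests the responses carry no systematic
    signal about the demographic group they were generated for.
    """
    feats = [featurize(r) for r in responses]
    groups = sorted(set(labels))
    correct = 0
    for i in range(len(feats)):
        cents = {
            g: centroid([feats[j] for j in range(len(feats))
                         if j != i and labels[j] == g])
            for g in groups
        }
        pred = max(groups, key=lambda g: cosine(feats[i], cents[g]))
        correct += pred == labels[i]
    return correct / len(feats)
```

For clearly group-marked responses the probe scores well above chance; for identically distributed responses it falls to chance, which is the pattern the chapter reports for race/ethnicity across all three models.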


Citation Format

Wang, H., Clark, J., & Peña, A. (2026). Responsible Intelligence in Practice: A Fairness Audit of Open Large Language Models for Library Reference Services. arXiv. https://arxiv.org/abs/2602.18935

Journal Information

Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓