arXiv Open Access 2025

"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with Vision-Language Models

Kapil Garg, Xinru Tang, Jimin Heo, Dwayne R. Morgan, Darren Gergle, Erik B. Sudderth, Anne Marie Piper

Abstract

Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal care items, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues--such as blur, misframing, and rotation--affect the accuracy of VLM-generated captions and whether the resulting captions meet BLV people's information needs. Based on a survey of 86 BLV participants, we develop an annotated dataset of 1,859 product images from BLV people to systematically evaluate how image quality issues affect VLM-generated captions. While the best VLM achieves 98% accuracy on images with no quality issues, accuracy drops to 75% overall when quality issues are present, worsening considerably as issues compound. We discuss the need for model evaluations that center on disabled people's experiences throughout the process and offer concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.


Authors (7)

Kapil Garg, Xinru Tang, Jimin Heo, Dwayne R. Morgan, Darren Gergle, Erik B. Sudderth, Anne Marie Piper

Citation

Garg, K., Tang, X., Heo, J., Morgan, D.R., Gergle, D., Sudderth, E.B., & Piper, A.M. (2025). "It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with Vision-Language Models. https://arxiv.org/abs/2511.08917

Journal Information

Publication Year: 2025
Language: English
Source Database: arXiv
Access: Open Access ✓