arXiv Open Access 2023

Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles

Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao 'Kenneth' Huang, Shomir Wilson

Abstract

We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods. Biased NLP models can perpetuate stereotypes and lead to algorithmic discrimination, posing a significant challenge to the fairness and justice of AI systems. Our study employs a two-step mixed-methods approach that includes both quantitative and qualitative analysis to identify and understand the impact of nationality bias in a text generation model. Through our human-centered quantitative analysis, we measure the extent of nationality bias in articles generated by AI sources. We then conduct open-ended interviews with participants, performing qualitative coding and thematic analysis to understand the implications of these biases on human readers. Our findings reveal that biased NLP models tend to replicate and amplify existing societal biases, which can translate to harm if used in a sociotechnical setting. The qualitative analysis from our interviews offers insights into the experience readers have when encountering such articles, highlighting the potential to shift a reader's perception of a country. These findings emphasize the critical role of public perception in shaping AI's impact on society and the need to correct biases in AI systems.


Authors (5)

Pranav Narayanan Venkit

Sanjana Gautam

Ruchi Panchanadikar

Ting-Hao 'Kenneth' Huang

Shomir Wilson

Citation Format

Venkit, P.N., Gautam, S., Panchanadikar, R., Huang, T.-H., Wilson, S. (2023). Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles. https://arxiv.org/abs/2308.04346

Journal Information
Publication Year: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓