arXiv Open Access 2024

Self-supervised Speech Representations Still Struggle with African American Vernacular English

Kalvin Chang, Yi-Hui Chou, Jiatong Shi, Hsuan-Ming Chen, Nicole Holliday, +2 more

Abstract

Underperformance of ASR systems for speakers of African American Vernacular English (AAVE) and other marginalized language varieties is a well-documented phenomenon, and one that reinforces the stigmatization of these varieties. We investigate whether or not the recent wave of Self-Supervised Learning (SSL) speech models can close the gap in ASR performance between AAVE and Mainstream American English (MAE). We evaluate four SSL models (wav2vec 2.0, HuBERT, WavLM, and XLS-R) on zero-shot Automatic Speech Recognition (ASR) for these two varieties and find that these models perpetuate the bias in performance against AAVE. Additionally, the models have higher word error rates on utterances with more phonological and morphosyntactic features of AAVE. Despite the success of SSL speech models in improving ASR for low-resource varieties, SSL pre-training alone may not bridge the gap between AAVE and MAE. Our code is publicly available at https://github.com/cmu-llab/s3m-aave.


Authors (7)

Kalvin Chang
Yi-Hui Chou
Jiatong Shi
Hsuan-Ming Chen
Nicole Holliday
Odette Scharenborg
David R. Mortensen

Citation Format

Chang, K., Chou, Y.-H., Shi, J., Chen, H.-M., Holliday, N., Scharenborg, O., & Mortensen, D. R. (2024). Self-supervised Speech Representations Still Struggle with African American Vernacular English. https://arxiv.org/abs/2408.14262

Journal Information
Year Published
2024
Language
en
Source Database
arXiv
Access
Open Access ✓