arXiv Open Access 2023

Leveraging Language ID to Calculate Intermediate CTC Loss for Enhanced Code-Switching Speech Recognition

Tzu-Ting Yang, Hsin-Wei Wang, Berlin Chen

Abstract

In recent years, end-to-end speech recognition has emerged as a technology that integrates the acoustic model, pronunciation dictionary, and language model components of traditional automatic speech recognition (ASR) systems, making it possible to achieve human-level recognition without building a pronunciation dictionary in advance. However, because code-switching training data is relatively scarce, ASR performance tends to degrade drastically when this phenomenon is encountered. Most past studies have reduced the model's learning complexity by splitting the code-switching task into several single-language tasks and learning each language's domain-specific knowledge separately. In this paper, we instead introduce language identification (LID) information into the middle layers of the ASR model's encoder and use it to calculate an intermediate CTC loss. The aim is to generate acoustic features that encode language distinctions in a more implicit way, reducing the model's confusion when handling language switches.
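The idea sketched in the abstract can be illustrated in two small steps: deriving a language-ID target sequence from a code-switched transcript, and mixing an intermediate-layer CTC loss over those targets into the training objective. The sketch below is a minimal illustration under stated assumptions: the CJK-vs-Latin labeling scheme, the function names, and the mixing weight are all hypothetical choices for exposition, not the paper's exact recipe.

```python
import unicodedata

def lid_targets(tokens):
    """Map each transcript token to a language-ID label.

    Illustrative scheme (an assumption, not the paper's method):
    tokens containing CJK characters -> 'zh', all others -> 'en'.
    """
    labels = []
    for tok in tokens:
        # unicodedata.name() returns e.g. 'CJK UNIFIED IDEOGRAPH-6211';
        # the default '' avoids a ValueError for unnamed code points.
        if any("CJK" in unicodedata.name(ch, "") for ch in tok):
            labels.append("zh")
        else:
            labels.append("en")
    return labels

def combined_loss(ctc_final, ctc_inter_lid, weight=0.3):
    """Interpolate the final-layer ASR CTC loss with an intermediate-layer
    CTC loss computed against the LID label sequence (hypothetical weight)."""
    return (1 - weight) * ctc_final + weight * ctc_inter_lid
```

For a Mandarin-English mix such as `["我", "们", "play", "游", "戏"]` (chosen here purely for illustration), `lid_targets` yields `["zh", "zh", "en", "zh", "zh"]`, which would serve as the target sequence for the intermediate CTC branch.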


Authors (3)

Tzu-Ting Yang
Hsin-Wei Wang
Berlin Chen

Citation Format

Yang, T.-T., Wang, H.-W., & Chen, B. (2023). Leveraging Language ID to Calculate Intermediate CTC Loss for Enhanced Code-Switching Speech Recognition. arXiv preprint arXiv:2312.09583. https://arxiv.org/abs/2312.09583

Journal Information
Publication Year
2023
Language
en
Source Database
arXiv
Access
Open Access ✓