arXiv Open Access 2025

Generic Speech Enhancement with Self-Supervised Representation Space Loss

Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Takafumi Moriya, Takanori Ashihara, Ryo Masumura

Abstract

Single-channel speech enhancement is utilized in various tasks to mitigate the effect of interfering signals. Conventionally, speech enhancement has had to be tuned for each task to ensure optimal performance, which makes it challenging to generalize speech enhancement models to unknown downstream tasks. This study aims to construct a generic speech enhancement front-end that can improve the performance of back-ends on multiple downstream tasks. To this end, we propose a novel training criterion that minimizes the distance between the enhanced signal and the ground-truth clean signal in the feature representation domain of self-supervised learning models. Since self-supervised learning feature representations effectively express high-level speech information useful for solving various downstream tasks, the proposal is expected to make speech enhancement models preserve such information. Experimental validation demonstrates that the proposal improves the performance of multiple speech tasks while maintaining the perceptual quality of the enhanced signal.
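The core idea in the abstract can be sketched in a few lines: map both the enhanced signal and the clean reference into an SSL model's feature space and penalize the distance between them there. The sketch below is illustrative only; `toy_ssl_features` is a hypothetical stand-in for a real pretrained SSL encoder (e.g. a WavLM- or HuBERT-style model), and the framing and feature math are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a representation-space loss, assuming a toy stand-in
# for the SSL feature extractor (the paper uses a pretrained SSL model).

def toy_ssl_features(signal, frame_len=4):
    """Hypothetical feature extractor: split the signal into frames and
    summarize each frame by (mean, energy) as a 2-dim 'representation'."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    feats = []
    for f in frames:
        mean = sum(f) / len(f)
        energy = sum(x * x for x in f) / len(f)
        feats.append((mean, energy))
    return feats

def representation_space_loss(enhanced, clean):
    """Mean L1 distance between the SSL features of the enhanced signal
    and those of the clean reference."""
    pairs = list(zip(toy_ssl_features(enhanced), toy_ssl_features(clean)))
    total = sum(abs(a - b) for ve, vc in pairs for a, b in zip(ve, vc))
    return total / (len(pairs) * 2)  # average over all feature dimensions
```

In training, such a loss would typically be combined with a conventional signal-domain enhancement loss, so the model both denoises the waveform and preserves the high-level information that downstream back-ends rely on.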


Authors (6)

Hiroshi Sato
Tsubasa Ochiai
Marc Delcroix
Takafumi Moriya
Takanori Ashihara
Ryo Masumura

Citation Format

Sato, H., Ochiai, T., Delcroix, M., Moriya, T., Ashihara, T., & Masumura, R. (2025). Generic Speech Enhancement with Self-Supervised Representation Space Loss. https://arxiv.org/abs/2507.07631

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓