arXiv Open Access 2023

Zero-shot text-to-speech synthesis conditioned using self-supervised speech representation model

Kenichi Fujita Takanori Ashihara Hiroki Kanagawa Takafumi Moriya Yusuke Ijima

Abstract

This paper proposes a zero-shot text-to-speech (TTS) method conditioned by a speech-representation model acquired through self-supervised learning (SSL). Conventional methods that use embedding vectors from an x-vector or global style tokens still leave a gap in reproducing the speaker characteristics of unseen speakers. A novel point of the proposed method is the direct use of the SSL model to obtain embedding vectors from speech representations trained on a large amount of data. We also introduce separate conditioning of the acoustic features and the phoneme duration predictor to obtain embeddings that disentangle rhythm-based speaker characteristics from acoustic-feature-based ones. These disentangled embeddings enable better reproduction of unseen speakers and rhythm transfer conditioned on a different utterance. Objective and subjective evaluations showed that the proposed method can synthesize speech with improved speaker similarity and achieve speech-rhythm transfer.
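As a rough illustration of the conditioning scheme the abstract describes, the sketch below (not the authors' code) pools frame-level features from an SSL speech model into fixed-size utterance embeddings, with separate embeddings feeding the acoustic-feature model and the duration predictor. The function names, mean-pooling choice, and feature dimensions are assumptions for illustration only; a real system would use the outputs of a pretrained SSL model such as wav2vec 2.0 or HuBERT.

```python
import numpy as np

def utterance_embedding(ssl_frames: np.ndarray) -> np.ndarray:
    """Pool frame-level SSL features (num_frames, feature_dim) into one
    fixed-size utterance-level embedding by mean pooling (an assumed,
    simple pooling choice for illustration)."""
    return ssl_frames.mean(axis=0)

# Stand-in for frame-level SSL-model outputs for one utterance:
# 200 frames of 768-dimensional features (typical SSL feature width).
rng = np.random.default_rng(0)
frames = rng.standard_normal((200, 768))

# Two separate conditioning vectors, mirroring the paper's idea of
# disentangling acoustic-feature-based and rhythm-based characteristics;
# in practice each would come from its own encoder over the SSL features.
emb_acoustic = utterance_embedding(frames)
emb_duration = utterance_embedding(frames)

print(emb_acoustic.shape)  # (768,)
```

In the paper's setup, the embedding given to the duration predictor captures speech rhythm while the one given to the acoustic model captures voice quality, which is what allows rhythm transfer from a different reference utterance.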


Citation

Fujita, K., Ashihara, T., Kanagawa, H., Moriya, T., & Ijima, Y. (2023). Zero-shot text-to-speech synthesis conditioned using self-supervised speech representation model. arXiv preprint. https://arxiv.org/abs/2304.11976

Publication Information
Year: 2023
Language: English
Source: arXiv
Access: Open Access ✓