arXiv Open Access 2024

Style Mixture of Experts for Expressive Text-To-Speech Synthesis

Ahad Jawaid Shreeram Suresh Chandra Junchen Lu Berrak Sisman

Abstract

Recent advances in style transfer text-to-speech (TTS) have improved the expressiveness of synthesized speech. However, encoding stylistic information (e.g., timbre, emotion, and prosody) from diverse and unseen reference speech remains a challenge. This paper introduces StyleMoE, an approach that addresses the issue of learning averaged style representations in the style encoder by creating style experts that learn from subsets of data. The proposed method replaces the style encoder in a TTS framework with a Mixture of Experts (MoE) layer. The style experts specialize by learning from subsets of reference speech routed to them by the gating network, enabling them to handle different aspects of the style space. As a result, StyleMoE improves the style coverage of the style encoder for style transfer TTS. Our experiments, both objective and subjective, demonstrate improved style transfer for diverse and unseen reference speech. The proposed method enhances the performance of existing state-of-the-art style transfer TTS models and represents the first study of style MoE in TTS.
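The abstract describes replacing a TTS style encoder with a Mixture of Experts (MoE) layer, where a gating network routes each reference speech embedding to specialized style experts. The sketch below illustrates that routing idea in minimal NumPy; it is not the authors' implementation, and all names, dimensions, and the linear "experts" are illustrative stand-ins for full encoder networks.

```python
import numpy as np

rng = np.random.default_rng(0)

class StyleMoE:
    """Minimal sketch of a Mixture-of-Experts style encoder.

    Each "expert" here is a random linear projection standing in for a
    full style-encoder network; the gating network softmax-routes a
    reference-speech embedding across the experts. Dimensions and names
    are hypothetical, not taken from the paper.
    """

    def __init__(self, ref_dim=80, style_dim=128, num_experts=4):
        # One projection per style expert (stand-in for a full encoder).
        self.experts = [rng.standard_normal((ref_dim, style_dim)) * 0.01
                        for _ in range(num_experts)]
        # Gating network: a single linear layer over the reference embedding.
        self.gate_w = rng.standard_normal((ref_dim, num_experts)) * 0.01

    def forward(self, ref):
        # Gating scores -> softmax mixture weights over the experts.
        logits = ref @ self.gate_w
        weights = np.exp(logits - logits.max())   # subtract max for stability
        weights /= weights.sum()
        # Each expert encodes the reference; the gate mixes their outputs,
        # so experts can specialize on different regions of the style space.
        outputs = np.stack([ref @ w for w in self.experts])
        return weights @ outputs  # mixed style embedding, shape (style_dim,)

moe = StyleMoE()
style = moe.forward(rng.standard_normal(80))
print(style.shape)  # (128,)
```

In a trained model, the gate would learn to send different kinds of reference speech (e.g., different emotions or prosodic patterns) to different experts, which is the mechanism the paper credits for avoiding averaged style representations.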

Authors (4)


Ahad Jawaid


Shreeram Suresh Chandra


Junchen Lu


Berrak Sisman

Citation Format

Jawaid, A., Chandra, S.S., Lu, J., Sisman, B. (2024). Style Mixture of Experts for Expressive Text-To-Speech Synthesis. https://arxiv.org/abs/2406.03637

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓