arXiv Open Access 2025

An Empirical Analysis of Discrete Unit Representations in Speech Language Modeling Pre-training

Yanis Labrak, Richard Dufour, Mickaël Rouvier

Abstract

This paper investigates discrete unit representations in Speech Language Models (SLMs), focusing on optimizing speech modeling during continual pre-training. We systematically examine how model architecture, data representation, and training robustness influence the pre-training stage, in which existing pre-trained language models are adapted to the speech modality. Our experiments highlight the role of speech encoders and clustering granularity across different model scales, showing how the optimal discretization strategy varies with model capacity. By examining cluster distributions and phonemic alignments, we investigate the effective use of the discrete vocabulary, uncovering both linguistic and paralinguistic patterns. Additionally, we explore the impact of clustering data selection on model robustness, highlighting the importance of domain matching between discretization training and target applications.
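The discretization the abstract refers to is commonly implemented by clustering frame-level speech-encoder features (e.g., with k-means) into a fixed vocabulary of unit IDs, whose size sets the clustering granularity. A minimal sketch of this idea, assuming scikit-learn and using random vectors as a stand-in for real encoder outputs (the paper's exact encoder and cluster counts are not specified here):

```python
import numpy as np
from sklearn.cluster import KMeans

def discretize(features, n_clusters=8, seed=0):
    """Map continuous frame features to discrete unit IDs via k-means,
    then collapse consecutive repeats (a common deduplication step)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    units = km.fit_predict(features).tolist()
    deduped = [units[0]] + [u for prev, u in zip(units, units[1:]) if u != prev]
    return km, deduped

# Stand-in for encoder outputs: 200 frames of 16-dim features.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 16)).astype(np.float32)
km, units = discretize(frames, n_clusters=8)
```

Varying `n_clusters` is one way to probe the granularity/capacity trade-off the paper studies: a larger vocabulary captures finer phonetic detail but is harder for a small model to exploit.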



Citation

Labrak, Y., Dufour, R., & Rouvier, M. (2025). An Empirical Analysis of Discrete Unit Representations in Speech Language Modeling Pre-training. arXiv:2509.05359. https://arxiv.org/abs/2509.05359

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓