arXiv Open Access 2025

Self Speculative Decoding for Diffusion Large Language Models

Yifeng Gao, Ziang Ji, Yuxuan Wang, Biqing Qi, Hanlin Xu, Linfeng Zhang

Abstract

Diffusion-based Large Language Models (dLLMs) have emerged as a competitive alternative to autoregressive models, offering unique advantages through bidirectional attention and parallel generation paradigms. However, the generation results of current parallel decoding methods deviate from stepwise decoding, introducing potential performance degradation, which limits their practical deployment. To address this problem, we propose Self Speculative Decoding (SSD), a lossless inference acceleration method that leverages the dLLM itself as both speculative decoding drafter and verifier without auxiliary modules. SSD introduces a self-drafting mechanism where the model generates predictions for multiple positions, then verifies them through hierarchical verification trees in a single forward pass. Unlike traditional speculative decoding that requires separate draft models, SSD eliminates model redundancy and memory overhead by exploiting the dLLM's inherent parallel prediction capability for multiple positions. This self-speculative approach allows the model to progressively verify and accept multiple tokens in a single forward pass. Our experiments demonstrate that SSD achieves up to 3.46× speedup while keeping the output identical to stepwise decoding on open source models such as LLaDA and Dream. Code will be made publicly available on GitHub.

Topics & Keywords

Authors (6)

Yifeng Gao
Ziang Ji
Yuxuan Wang
Biqing Qi
Hanlin Xu
Linfeng Zhang

Citation Format

Gao, Y., Ji, Z., Wang, Y., Qi, B., Xu, H., & Zhang, L. (2025). Self Speculative Decoding for Diffusion Large Language Models. arXiv. https://arxiv.org/abs/2510.04147

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓