
Confidence-Modulated Speculative Decoding for Large Language Models

Jaydip Sen, Subhasis Dasgupta, Hetvi Waghela

Abstract

Speculative decoding has emerged as an effective approach for accelerating autoregressive inference by parallelizing token generation through a draft-then-verify paradigm. However, existing methods rely on static drafting lengths and rigid verification criteria, limiting their adaptability across varying model uncertainties and input complexities. This paper proposes an information-theoretic framework for speculative decoding based on confidence-modulated drafting. By leveraging entropy and margin-based uncertainty measures over the drafter's output distribution, the proposed method dynamically adjusts the number of speculatively generated tokens at each iteration. This adaptive mechanism reduces rollback frequency, improves resource utilization, and maintains output fidelity. Additionally, the verification process is modulated using the same confidence signals, enabling more flexible acceptance of drafted tokens without sacrificing generation quality. Experiments on machine translation and summarization tasks demonstrate significant speedups over standard speculative decoding while preserving or improving BLEU and ROUGE scores. The proposed approach offers a principled, plug-in method for efficient and robust decoding in large language models under varying conditions of uncertainty.
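
The abstract only names the confidence signals (entropy and top-token margin over the drafter's distribution); the exact modulation rule is not reproduced on this page. The sketch below is a minimal illustration of the idea, not the authors' published algorithm: the mapping from confidence to draft length, and the constants h_ref, k_min, and k_max, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def confidence_signals(logits: torch.Tensor):
    """Entropy and top-1/top-2 margin of the drafter's next-token distribution.

    logits: raw scores of shape (vocab_size,) from the draft model.
    Low entropy and a wide margin both indicate a confident drafter.
    """
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
    top2 = probs.topk(2).values
    margin = (top2[0] - top2[1]).item()
    return entropy, margin


def modulated_draft_length(entropy: float, margin: float,
                           k_min: int = 1, k_max: int = 8,
                           h_ref: float = 2.0) -> int:
    """Map drafter confidence to a speculative draft length.

    Illustrative heuristic only: a confident drafter is allowed to
    propose more tokens before verification, while an uncertain one
    proposes fewer, reducing rejected drafts and rollbacks. h_ref,
    k_min, and k_max are assumed tuning constants, not paper values.
    """
    confidence = max(0.0, 1.0 - entropy / h_ref) * margin  # in [0, 1]
    return k_min + round(confidence * (k_max - k_min))


# Example: a peaked distribution yields a long draft, a flat one a short draft.
peaked = torch.tensor([8.0, 1.0, 0.5, 0.2])
flat = torch.tensor([1.0, 1.0, 1.0, 1.0])
print(modulated_draft_length(*confidence_signals(peaked)))  # near k_max
print(modulated_draft_length(*confidence_signals(flat)))    # near k_min
```

In a full speculative-decoding loop, the same signals would also relax or tighten the verifier's acceptance test per the paper's description; only the draft-length side is sketched here.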


Authors (3)

Jaydip Sen
Subhasis Dasgupta
Hetvi Waghela

Citation Format

Sen, J., Dasgupta, S., & Waghela, H. (2025). Confidence-Modulated Speculative Decoding for Large Language Models. arXiv preprint arXiv:2508.15371. https://arxiv.org/abs/2508.15371

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓