arXiv Open Access 2025

Token-Driven GammaTune: Adaptive Calibration for Enhanced Speculative Decoding

Aayush Gautam Susav Shrestha Narasimha Reddy

Abstract

Speculative decoding accelerates large language model (LLM) inference by using a smaller draft model to propose tokens, which are then verified by a larger target model. However, selecting an optimal speculation length is critical for maximizing speedup while minimizing wasted computation. We introduce GammaTune and GammaTune+, training-free adaptive algorithms that dynamically adjust speculation length based on token acceptance rates using a heuristic-based switching mechanism. Evaluated on SpecBench across multiple tasks and model pairs, our method outperforms other heuristic-based approaches and fixed-length speculative decoding, achieving an average speedup of 15% (±5%) with GammaTune and 16% (±3%) with GammaTune+, while reducing performance variance. This makes GammaTune a robust and efficient solution for real-world deployment.
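The abstract's core idea, adjusting the speculation length (the number of drafted tokens, often denoted γ) based on observed token acceptance rates, can be sketched as follows. This is an illustrative heuristic only, not the paper's GammaTune algorithm: the window size, thresholds, and step sizes below are arbitrary assumptions.

```python
from collections import deque

class AdaptiveGamma:
    """Adapt speculation length from a sliding window of acceptance rates.

    Illustrative sketch of the general idea; all constants are assumptions,
    not values from the GammaTune paper.
    """

    def __init__(self, gamma=4, gamma_min=1, gamma_max=8, window=16,
                 high=0.8, low=0.4):
        self.gamma = gamma            # current speculation length
        self.gamma_min = gamma_min
        self.gamma_max = gamma_max
        self.rates = deque(maxlen=window)  # recent per-round acceptance rates
        self.high = high              # raise gamma above this average rate
        self.low = low                # lower gamma below this average rate

    def update(self, accepted, proposed):
        """Record one verification round and heuristically adjust gamma.

        `accepted` is how many of the `proposed` draft tokens the target
        model verified; returns the speculation length for the next round.
        """
        self.rates.append(accepted / proposed)
        avg = sum(self.rates) / len(self.rates)
        if avg > self.high:
            self.gamma = min(self.gamma + 1, self.gamma_max)
        elif avg < self.low:
            self.gamma = max(self.gamma - 1, self.gamma_min)
        return self.gamma
```

In a decoding loop, `update()` would be called after each verification round: sustained high acceptance grows the draft length (more tokens verified per target-model call), while low acceptance shrinks it to cut wasted draft computation.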

Citation Format

Gautam, A., Shrestha, S., & Reddy, N. (2025). Token-Driven GammaTune: Adaptive Calibration for Enhanced Speculative Decoding. https://arxiv.org/abs/2504.00030

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓