
Fast Inference via Hierarchical Speculative Decoding

Clara Mohri, Haim Kaplan, Tal Schuster, Yishay Mansour, Amir Globerson

Abstract

Transformer language models generate text autoregressively, making inference latency proportional to the number of tokens generated. Speculative decoding reduces this latency without sacrificing output quality by leveraging a small draft model to propose tokens that the larger target model verifies in parallel. In practice, however, there may exist a set of potential draft models, ranging from faster but less accurate to slower yet more reliable. We introduce Hierarchical Speculative Decoding (HSD), an algorithm that stacks these draft models into a hierarchy, where each model proposes tokens and the next larger model verifies them in a single forward pass, until finally the target model verifies the tokens. We derive an expression for the expected latency of any such hierarchy and show that the latency-optimal hierarchy can be selected in polynomial time. Empirically, HSD gives up to 1.2x speed-up over the best single-draft baseline, demonstrating the practicality of our algorithm in reducing generation latency beyond previous techniques.


Authors (5)

Clara Mohri
Haim Kaplan
Tal Schuster
Yishay Mansour
Amir Globerson

Citation Format

Mohri, C., Kaplan, H., Schuster, T., Mansour, Y., & Globerson, A. (2025). Fast Inference via Hierarchical Speculative Decoding. https://arxiv.org/abs/2510.19705

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓