arXiv Open Access 2024

The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation

Lawrence Stewart Matthew Trager Sujan Kumar Gonugondla Stefano Soatto

Abstract

Speculative decoding aims to speed up autoregressive generation of a language model by verifying in parallel the tokens generated by a smaller draft model. In this work, we explore the effectiveness of learning-free, negligible-cost draft strategies, namely $N$-grams obtained from the model weights and the context. While the predicted next token of the base model is rarely the top prediction of these simple strategies, we observe that it often lies within their top-$k$ predictions for small $k$. Based on this, we show that combinations of simple strategies can achieve significant inference speedups across different tasks. The overall performance is comparable to that of more complex methods, yet requires no expensive preprocessing or modification of the base model, and allows for seamless `plug-and-play' integration into pipelines.
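The context-based N-gram draft described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: it tokenizes by whitespace, counts which tokens follow each (N-1)-gram in the context, and proposes the top-$k$ continuations as draft candidates for the base model to verify in parallel. All function names here are illustrative.

```python
from collections import defaultdict, Counter

def build_ngram_table(tokens, n=3):
    """Map each (n-1)-token prefix seen in the context to a Counter
    of the tokens that followed it."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        prefix = tuple(tokens[i : i + n - 1])
        table[prefix][tokens[i + n - 1]] += 1
    return table

def draft_candidates(context, table, n=3, k=3):
    """Return up to k candidate next tokens for the current context,
    ranked by how often they followed its trailing (n-1)-gram.
    Returns [] when the prefix was never seen, i.e. no draft is made."""
    prefix = tuple(context[-(n - 1):])
    return [tok for tok, _ in table[prefix].most_common(k)]

tokens = "the cat sat on the mat and the cat ran".split()
table = build_ngram_table(tokens, n=3)
# Context ends in "the cat", which was followed by "sat" and "ran".
print(draft_candidates("and the cat".split(), table, n=3, k=3))
```

In a real pipeline, the base model would score all $k$ drafted tokens in a single batched forward pass and accept the longest verified prefix; here only the learning-free drafting step is shown.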

Authors (4)

Lawrence Stewart
Matthew Trager
Sujan Kumar Gonugondla
Stefano Soatto

Citation

Stewart, L., Trager, M., Gonugondla, S.K., Soatto, S. (2024). The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation. https://arxiv.org/abs/2411.03786

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓