
Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui

Abstract

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference. In each decoding step, this method first drafts several future tokens efficiently and then verifies them in parallel. Unlike autoregressive decoding, Speculative Decoding facilitates the simultaneous decoding of multiple tokens per step, thereby accelerating inference. This paper presents a comprehensive overview and analysis of this promising decoding paradigm. We begin by providing a formal definition and formulation of Speculative Decoding. Then, we organize in-depth discussions on its key facets, such as drafter selection and verification strategies. Furthermore, we present a comparative analysis of leading methods under third-party testing environments. We aim for this work to serve as a catalyst for further research on Speculative Decoding, ultimately contributing to more efficient LLM inference.
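
To make the draft-then-verify paradigm described in the abstract concrete, here is a minimal, self-contained Python sketch (illustrative only, not code from the paper). It assumes greedy decoding and exact-match verification with toy stand-in models; the surveyed methods instead verify all drafted tokens with a single parallel forward pass of the target LLM and, for sampling, use an acceptance rule that preserves the target distribution. All names here (`speculative_decode`, `target_next`, `draft_next`, `k`) are hypothetical.

```python
from typing import Callable, List

def speculative_decode(
    target_next: Callable[[List[int]], int],  # expensive target model (greedy)
    draft_next: Callable[[List[int]], int],   # cheap draft model (greedy)
    prompt: List[int],
    max_new_tokens: int = 32,
    k: int = 4,                               # tokens drafted per step
) -> List[int]:
    tokens = list(prompt)
    produced = 0
    while produced < max_new_tokens:
        # 1) Draft: the small model proposes k future tokens autoregressively.
        draft: List[int] = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))

        # 2) Verify: the target model checks every drafted position.
        #    (Sequential here for clarity; a transformer target scores all
        #    k positions in one parallel forward pass.)
        accepted = 0
        for i in range(k):
            expected = target_next(tokens + draft[:i])
            if draft[i] == expected:
                accepted += 1
            else:
                # First mismatch: discard the rest, keep the target's token.
                draft = draft[:accepted] + [expected]
                break
        else:
            # All k drafts accepted; the target contributes one bonus token.
            draft.append(target_next(tokens + draft))

        step = draft[: max_new_tokens - produced]
        tokens.extend(step)
        produced += len(step)
    return tokens

# Toy demo: the target greedily continues a mod-10 counting pattern; the
# draft agrees except after multiples of 5, so several tokens are usually
# accepted per verification step.
target = lambda seq: (seq[-1] + 1) % 10
draft = lambda seq: (seq[-1] + 1) % 10 if seq[-1] % 5 else 0
print(speculative_decode(target, draft, prompt=[0], max_new_tokens=12))
```

Each verification step emits between 1 and k+1 tokens (the accepted drafts plus one token from the target), which is where the speedup comes from; because the target model endorses every emitted token, the greedy output is identical to plain autoregressive decoding.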

Authors (9)

Heming Xia
Zhe Yang
Qingxiu Dong
Peiyi Wang
Yongqi Li
Tao Ge
Tianyu Liu
Wenjie Li
Zhifang Sui

Citation Format

Xia, H., Yang, Z., Dong, Q., Wang, P., Li, Y., Ge, T., et al. (2024). Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding. arXiv:2401.07851. https://arxiv.org/abs/2401.07851

Journal Information
Publication Year: 2024
Language: en
Database Source: arXiv
Access: Open Access ✓