arXiv Open Access 2025

$\texttt{SPECS}$: Faster Test-Time Scaling through Speculative Drafts

Mert Cemri, Nived Rajaraman, Rishabh Tiwari, Xiaoxuan Liu, Kurt Keutzer, +4 more

Abstract

Scaling test-time compute has driven the recent advances in the reasoning capabilities of large language models (LLMs), typically by allocating additional computation for more thorough exploration. However, increased compute often comes at the expense of higher user-facing latency, directly impacting user experience. Current test-time scaling methods primarily optimize for accuracy based on total compute resources (FLOPS), often overlooking latency constraints. To address this gap, we propose $\texttt{SPECS}$, a latency-aware test-time scaling method inspired by speculative decoding. $\texttt{SPECS}$ uses a smaller, faster model to generate candidate sequences efficiently, and evaluates these candidates using signals from both a larger target model and a dedicated reward model. We introduce new integration strategies, including reward-guided soft verification and a reward-based deferral mechanism. Empirical results on the MATH500, AMC23, and OlympiadBench datasets show that $\texttt{SPECS}$ matches or surpasses beam search accuracy while reducing latency by up to $\sim$19.1\%. Our theoretical analysis shows that our algorithm converges to the solution of a KL-regularized reinforcement learning objective with increasing beam width.
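The abstract describes a draft-verify-defer loop: a small draft model proposes candidates, which are scored with signals from a larger target model and a reward model, with a deferral fallback when the draft's best candidate scores poorly. As an illustration only, one such step might be sketched as below; `target_logprob`, `reward`, `beam_width`, and `defer_threshold` are hypothetical placeholders, not the paper's actual interfaces.

```python
def specs_step(prefix, draft_candidates, target_logprob, reward,
               beam_width=2, defer_threshold=0.0):
    """Toy sketch of one SPECS-style step (hypothetical structure).

    A small draft model has proposed `draft_candidates`; each candidate
    is scored by combining the target model's log-probability with a
    reward-model signal (a stand-in for reward-guided soft verification).
    If even the best combined score falls below `defer_threshold`, the
    step defers to the target model instead of accepting a draft.
    """
    scored = []
    for cand in draft_candidates:
        seq = prefix + cand
        # Combined signal: target-model likelihood plus reward score.
        score = target_logprob(seq) + reward(seq)
        scored.append((score, seq))
    # Keep the top `beam_width` candidates, best first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best_score, best_seq = scored[0]
    if best_score < defer_threshold:
        # Reward-based deferral: fall back to the large target model.
        return "DEFER_TO_TARGET", scored[:beam_width]
    return best_seq, scored[:beam_width]


# Usage with toy scoring functions standing in for real models:
best, beam = specs_step(
    "answer: ",
    ["42", "17"],
    target_logprob=lambda s: -0.1 * len(s),          # toy length penalty
    reward=lambda s: 1.0 if "42" in s else 0.0,      # toy reward signal
)
```

This is a minimal single-step sketch; the actual method operates over multi-token draft blocks and integrates the two models' signals in the specific ways developed in the paper.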


Authors (9)

Mert Cemri

Nived Rajaraman

Rishabh Tiwari

Xiaoxuan Liu

Kurt Keutzer

Ion Stoica

Kannan Ramchandran

Ahmad Beirami

Ziteng Sun

Citation Format

Cemri, M., Rajaraman, N., Tiwari, R., Liu, X., Keutzer, K., Stoica, I. et al. (2025). $\texttt{SPECS}$: Faster Test-Time Scaling through Speculative Drafts. https://arxiv.org/abs/2506.15733

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓