arXiv Open Access 2025

TPP-SD: Accelerating Transformer Point Process Sampling with Speculative Decoding

Shukai Gong Yiyang Fu Fengyuan Ran Quyu Kong Feng Zhou

Abstract

We propose TPP-SD, a novel approach that accelerates Transformer temporal point process (TPP) sampling by adapting speculative decoding (SD) techniques from language models. By identifying the structural similarities between thinning algorithms for TPPs and speculative decoding for language models, we develop an efficient sampling framework that leverages a smaller draft model to generate multiple candidate events, which are then verified by the larger target model in parallel. TPP-SD maintains the same output distribution as autoregressive sampling while achieving significant acceleration. Experiments on both synthetic and real datasets demonstrate that our approach produces samples from identical distributions as standard methods, but with 2-6× speedup. Our ablation studies analyze the impact of hyperparameters such as draft length and draft model size on sampling efficiency. TPP-SD bridges the gap between powerful Transformer TPP models and the practical need for rapid sequence sampling.
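The draft-then-verify scheme the abstract describes follows the standard speculative decoding accept/reject rule: a cheap draft model proposes candidates, the target scores them, and each candidate is accepted with probability min(1, p_target/p_draft), with a residual resample on rejection. The sketch below is a toy, discretized illustration of that rule (the paper's actual method operates on continuous event times via thinning, and all function and variable names here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def speculative_sample(target_probs, draft_probs, draft_len, rng):
    """Toy speculative-decoding accept/reject loop (discretized sketch).

    target_probs(seq) -> prob vector over the next symbol given history
    draft_probs(seq)  -> same, from a cheaper draft model
    Returns the accepted extension of an empty history.
    """
    history = []
    # 1. Draft model proposes `draft_len` symbols autoregressively.
    drafted, q_list, seq = [], [], list(history)
    for _ in range(draft_len):
        q = draft_probs(seq)
        x = int(rng.choice(len(q), p=q))
        drafted.append(x)
        q_list.append(q)
        seq.append(x)
    # 2. Target model scores every drafted prefix. In a Transformer this
    #    is a single batched forward pass; here it is a plain loop.
    p_list = [target_probs(history + drafted[:i]) for i in range(draft_len)]
    # 3. Accept each draft symbol with prob min(1, p(x)/q(x)); on the
    #    first rejection, resample from the residual max(p - q, 0).
    accepted = []
    for i, x in enumerate(drafted):
        p, q = p_list[i], q_list[i]
        if rng.random() < min(1.0, p[x] / q[x]):
            accepted.append(x)
        else:
            resid = np.maximum(p - q, 0.0)
            resid /= resid.sum()
            accepted.append(int(rng.choice(len(p), p=resid)))
            break
    return accepted
```

The accept/reject rule guarantees each emitted symbol is distributed exactly according to the target model, which is the property letting TPP-SD match autoregressive sampling while running the expensive model only in parallel verification passes.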


Authors (5)

Shukai Gong

Yiyang Fu

Fengyuan Ran

Quyu Kong

Feng Zhou

Citation Format

Gong, S., Fu, Y., Ran, F., Kong, Q., & Zhou, F. (2025). TPP-SD: Accelerating Transformer Point Process Sampling with Speculative Decoding. arXiv:2507.09252. https://arxiv.org/abs/2507.09252

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓