arXiv Open Access 2026

Why and When Visual Token Pruning Fails? A Study on Relevant Visual Information Shift in MLLMs Decoding

Jiwan Kim Kibum Kim Wonjoong Kim Byung-Kwan Lee Chanyoung Park

Abstract

Recently, visual token pruning has been studied to handle the vast number of visual tokens in Multimodal Large Language Models. However, we observe that while existing pruning methods perform reliably on simple visual understanding, they struggle to effectively generalize to complex visual reasoning tasks, a critical gap underexplored in previous studies. Through a systematic analysis, we identify Relevant Visual Information Shift (RVIS) during decoding as the primary failure driver. To address this, we propose Decoding-stage Shift-aware Token Pruning (DSTP), a training-free add-on framework that enables existing pruning methods to align visual tokens with shifting reasoning requirements during the decoding stage. Extensive experiments demonstrate that DSTP significantly mitigates performance degradation of pruning methods in complex reasoning tasks, while consistently yielding performance gains even across visual understanding benchmarks. Furthermore, DSTP demonstrates effectiveness across diverse state-of-the-art architectures, highlighting its generalizability and efficiency with minimal computational overhead.
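The abstract does not specify DSTP's mechanism, but the static, prefill-stage pruning it builds on is commonly realized as attention-score top-k selection over visual tokens. A minimal sketch of that baseline (function name, score source, and shapes are illustrative assumptions, not the paper's method):

```python
import numpy as np

def prune_visual_tokens(visual_tokens: np.ndarray,
                        attn_scores: np.ndarray,
                        keep_ratio: float = 0.25) -> np.ndarray:
    """Keep the top-`keep_ratio` fraction of visual tokens by attention score.

    visual_tokens: (N, D) array of visual token embeddings.
    attn_scores:   (N,) attention each visual token receives from the text query.
    """
    n_keep = max(1, int(len(visual_tokens) * keep_ratio))
    # Indices of the n_keep highest-scoring tokens, restored to original order
    # so positional information is preserved for the language model.
    keep_idx = np.sort(np.argsort(attn_scores)[-n_keep:])
    return visual_tokens[keep_idx]

# Example: 8 visual tokens of dimension 4; keep the top 25% (2 tokens).
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))
scores = np.array([0.1, 0.9, 0.2, 0.05, 0.8, 0.3, 0.15, 0.4])
pruned = prune_visual_tokens(tokens, scores, keep_ratio=0.25)
print(pruned.shape)  # (2, 4)
```

Because the score vector here is fixed once before generation, the kept set cannot track the Relevant Visual Information Shift the paper identifies: tokens that become relevant at a later decoding step may already have been discarded.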


Authors (5)

Jiwan Kim
Kibum Kim
Wonjoong Kim
Byung-Kwan Lee
Chanyoung Park

Citation Format

Kim, J., Kim, K., Kim, W., Lee, B.-K., & Park, C. (2026). Why and When Visual Token Pruning Fails? A Study on Relevant Visual Information Shift in MLLMs Decoding. https://arxiv.org/abs/2604.12358

Journal Information
Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access