arXiv Open Access 2025

ARS: Adaptive Reasoning Suppression for Efficient Large Reasoning Language Models

Dongqi Zheng

Abstract

Large Reasoning Language Models (LRLMs or LRMs) demonstrate remarkable capabilities in complex reasoning tasks, but suffer from significant computational inefficiency due to the overthinking phenomenon. Existing efficient reasoning methods struggle to balance reasoning quality against inference cost. We propose Adaptive Reasoning Suppression (ARS), a novel training-free approach that dynamically suppresses redundant reasoning steps while preserving accuracy through adaptive certainty monitoring. ARS introduces a multi-checkpoint certainty estimation mechanism with progressive suppression thresholds, achieving superior efficiency compared to static suppression methods. Extensive evaluation across mathematical reasoning benchmarks and multiple model architectures shows that ARS achieves token, latency, and energy reductions of up to 53%, 46.1%, and 57.9%, respectively, while maintaining or improving accuracy.
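The abstract describes the mechanism only at a high level. The Python sketch below illustrates one plausible reading of a multi-checkpoint certainty monitor with progressively relaxing suppression thresholds applied during decoding; the class and parameter names (AdaptiveReasoningSuppressor, base_threshold, checkpoint_interval) and the mean-token-probability certainty proxy are assumptions made for illustration, not the paper's actual implementation.

    # Minimal sketch of the adaptive suppression idea in the abstract.
    # All names, thresholds, and the certainty estimator are illustrative
    # assumptions, not the method as published.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class AdaptiveReasoningSuppressor:
        # Progressive thresholds: strict early, relaxing as the reasoning
        # trace grows, never dropping below a floor (assumed values).
        base_threshold: float = 0.90
        decay_per_checkpoint: float = 0.05
        min_threshold: float = 0.60
        checkpoint_interval: int = 64        # decoding steps between checkpoints
        certainties: List[float] = field(default_factory=list)

        def _threshold(self, checkpoint_index: int) -> float:
            return max(self.min_threshold,
                       self.base_threshold - self.decay_per_checkpoint * checkpoint_index)

        def should_suppress(self, step: int, recent_token_probs: List[float]) -> bool:
            """Return True if further reasoning should be suppressed at this checkpoint."""
            if step == 0 or step % self.checkpoint_interval != 0 or not recent_token_probs:
                return False
            # Certainty proxy: mean probability of recently generated tokens
            # (an assumption; any confidence estimate could be substituted).
            certainty = sum(recent_token_probs) / len(recent_token_probs)
            index = len(self.certainties)
            self.certainties.append(certainty)
            return certainty >= self._threshold(index)


    if __name__ == "__main__":
        suppressor = AdaptiveReasoningSuppressor()
        window: List[float] = []
        # Simulated decoding loop with synthetic, slowly rising token probabilities.
        for step in range(1, 513):
            window.append(min(0.99, 0.5 + step * 0.001))
            window = window[-64:]
            if suppressor.should_suppress(step, window):
                print(f"suppress reasoning at step {step}; emit final answer")
                break

In this sketch the threshold schedule, not the certainty estimate alone, controls the accuracy-efficiency trade-off: early checkpoints require high certainty before cutting reasoning short, while later checkpoints tolerate lower certainty so that long traces are terminated sooner.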


Author (1)

Dongqi Zheng

Citation Format

Zheng, D. (2025). ARS: Adaptive Reasoning Suppression for Efficient Large Reasoning Language Models. https://arxiv.org/abs/2510.00071

Journal Information
Year Published: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓