
Look Inward to Explore Outward: Learning Temperature Policy from LLM Internal States via Hierarchical RL

Yixiao Zhou, Yang Li, Dongzhou Cheng, Hehe Fan, Yu Cheng

Abstract

Reinforcement Learning from Verifiable Rewards (RLVR) trains large language models (LLMs) from sampled trajectories, making decoding strategy a core component of learning rather than a purely inference-time choice. Sampling temperature directly controls the exploration–exploitation trade-off by modulating policy entropy, yet existing methods rely on static values or heuristic adaptations that are decoupled from task-level rewards. We propose Introspective LLM, a hierarchical reinforcement learning framework that learns to control sampling temperature during generation. At each decoding step, the model selects a temperature based on its hidden state and samples the next token from the resulting distribution. Temperature and token policies are jointly optimized from downstream rewards using a coordinate ascent scheme. Experiments on mathematical reasoning benchmarks show that learned temperature policies outperform fixed and heuristic baselines, while exhibiting interpretable exploration behaviors aligned with reasoning uncertainty.
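Reading only the abstract, a minimal sketch of how the per-step temperature selection might be wired up in PyTorch. The class and function names, the discrete temperature grid, and the parameterization below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class TemperaturePolicyHead(nn.Module):
    # Maps the decoder's hidden state at the current step to a categorical
    # distribution over a small grid of candidate temperatures.
    def __init__(self, hidden_dim: int, temperatures=(0.3, 0.7, 1.0, 1.3)):
        super().__init__()
        # The discrete grid is an assumption for illustration; the paper may
        # parameterize the temperature policy differently.
        self.register_buffer("temps", torch.tensor(temperatures))
        self.proj = nn.Linear(hidden_dim, len(temperatures))

    def forward(self, hidden_state):
        # hidden_state: (batch, hidden_dim), e.g. the last-layer state
        return torch.distributions.Categorical(logits=self.proj(hidden_state))

def decode_step(lm_logits, hidden_state, temp_head):
    # High-level action: choose a temperature from the internal state.
    temp_dist = temp_head(hidden_state)
    temp_idx = temp_dist.sample()
    tau = temp_head.temps[temp_idx].unsqueeze(-1)        # (batch, 1)
    # Low-level action: sample the next token from the rescaled distribution.
    token_dist = torch.distributions.Categorical(logits=lm_logits / tau)
    next_token = token_dist.sample()
    # Both log-probs feed the reward-driven update; under the coordinate
    # ascent scheme the abstract describes, one policy would be held fixed
    # while the other is updated from downstream rewards, then vice versa.
    return next_token, temp_dist.log_prob(temp_idx), token_dist.log_prob(next_token)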

Topics & Keywords

Authors (5)

Yixiao Zhou

Yang Li

Dongzhou Cheng

Hehe Fan

Yu Cheng

Citation Format

Zhou, Y., Li, Y., Cheng, D., Fan, H., & Cheng, Y. (2026). Look Inward to Explore Outward: Learning Temperature Policy from LLM Internal States via Hierarchical RL. arXiv. https://arxiv.org/abs/2602.13035

Journal Information

Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓