
RePO: Bridging On-Policy Learning and Off-Policy Knowledge through Rephrasing Policy Optimization

Linxuan Xia, Xiaolong Yang, Yongyuan Chen, Enyue Zhao, Deng Cai, +2 others

Abstract

Aligning large language models (LLMs) on domain-specific data remains a fundamental challenge. Supervised fine-tuning (SFT) offers a straightforward way to inject domain knowledge but often degrades the model's generality. In contrast, on-policy reinforcement learning (RL) preserves generality but fails to effectively assimilate hard samples that exceed the model's current reasoning level. Recent off-policy RL approaches improve hard-sample utilization, yet they suffer from severe training instability due to the forced distribution shift toward off-policy knowledge. To reconcile effective off-policy knowledge absorption with the stability of on-policy RL, we propose Rephrasing Policy Optimization (RePO). In RePO, the policy model is prompted to first comprehend off-policy knowledge and then rephrase it into trajectories that conform to its own stylistic and parametric distribution. RePO dynamically replaces low-reward rollouts with these rephrased, high-quality trajectories. This strategy guides the model toward correct reasoning paths while strictly preserving on-policy training dynamics. Experiments on several benchmarks demonstrate that RePO improves hard-sample utilization and outperforms existing baselines, achieving state-of-the-art performance.
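
For intuition, below is a minimal Python sketch of the rollout-replacement step described in the abstract. It is not the authors' implementation: generate_rollouts, rephrase_reference, reward_fn, and reward_threshold are hypothetical placeholders that a caller would supply, and the sketch only illustrates the idea of swapping low-reward on-policy rollouts for reference solutions that the policy has rephrased into its own style.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trajectory:
    text: str
    reward: float

def repo_style_batch(
    prompt: str,
    reference_solution: str,  # off-policy knowledge, e.g. an expert solution
    generate_rollouts: Callable[[str, int], List[str]],   # hypothetical: policy sampling
    rephrase_reference: Callable[[str, str], str],        # hypothetical: policy rephrases the reference
    reward_fn: Callable[[str, str], float],               # hypothetical: scores a trajectory
    n_rollouts: int = 8,
    reward_threshold: float = 0.5,
) -> List[Trajectory]:
    """Build a training batch: keep high-reward on-policy rollouts,
    replace low-reward ones with policy-rephrased reference trajectories."""
    rollouts = [
        Trajectory(t, reward_fn(prompt, t))
        for t in generate_rollouts(prompt, n_rollouts)
    ]
    batch: List[Trajectory] = []
    for traj in rollouts:
        if traj.reward >= reward_threshold:
            # Keep the on-policy rollout unchanged.
            batch.append(traj)
        else:
            # The policy first comprehends the off-policy reference, then
            # rephrases it in its own style, so the replacement trajectory
            # stays close to the policy's own distribution.
            rephrased = rephrase_reference(prompt, reference_solution)
            batch.append(Trajectory(rephrased, reward_fn(prompt, rephrased)))
    return batch

Replacing only the below-threshold rollouts keeps most of the batch genuinely on-policy, which matches the abstract's claim that RePO injects off-policy knowledge while preserving on-policy training dynamics.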

Authors (7)

Linxuan Xia
Xiaolong Yang
Yongyuan Chen
Enyue Zhao
Deng Cai
Yasheng Wang
Boxi Wu

Citation Format

Xia, L., Yang, X., Chen, Y., Zhao, E., Cai, D., Wang, Y., & Wu, B. (2026). RePO: Bridging On-Policy Learning and Off-Policy Knowledge through Rephrasing Policy Optimization. arXiv. https://arxiv.org/abs/2602.10819

Journal Information

Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓