arXiv Open Access 2025

SEMDICE: Off-policy State Entropy Maximization via Stationary Distribution Correction Estimation

Jongmin Lee Meiqi Sun Pieter Abbeel

Abstract

In unsupervised pre-training for reinforcement learning, the agent aims to learn a prior policy for downstream tasks without relying on task-specific reward functions. We focus on state entropy maximization (SEM), where the goal is to learn a policy that maximizes the entropy of the state stationary distribution. In this paper, we introduce SEMDICE, a principled off-policy algorithm that computes a single, stationary Markov state-entropy-maximizing policy from an arbitrary off-policy dataset, optimizing the policy directly within the space of stationary distributions. Experimental results demonstrate that SEMDICE outperforms baseline algorithms in maximizing state entropy while achieving the best adaptation efficiency for downstream tasks among SEM-based unsupervised RL pre-training methods.
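To make the SEM objective concrete, here is a minimal illustrative sketch (not the SEMDICE algorithm itself; the function name and discretization scheme are my own): the entropy being maximized is the Shannon entropy of the state visitation distribution, which for a finite dataset can be estimated by discretizing the state space and computing the entropy of the empirical histogram. A policy with broad state coverage scores higher than one that concentrates visits in a small region.

```python
import numpy as np

def state_entropy(states, bins=10, low=0.0, high=1.0):
    """Shannon entropy (in nats) of a discretized empirical state distribution.

    A uniform visitation over all bins attains the maximum, log(bins);
    concentrated visitation yields lower entropy.
    """
    hist, _ = np.histogram(states, bins=bins, range=(low, high))
    p = hist / hist.sum()          # empirical state distribution
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
# Broad coverage of the state space [0, 1] vs. coverage of a narrow slice.
uniform_states = rng.uniform(0.0, 1.0, size=10_000)
narrow_states = rng.uniform(0.4, 0.6, size=10_000)
# Broad coverage gives entropy near log(10); narrow coverage gives much less.
```

An SEM pre-training method searches for the policy whose induced stationary distribution maximizes this quantity; SEMDICE does so off-policy, by optimizing over stationary distribution corrections rather than rolling out candidate policies.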


Citation Format

Lee, J., Sun, M., & Abbeel, P. (2025). SEMDICE: Off-policy State Entropy Maximization via Stationary Distribution Correction Estimation. https://arxiv.org/abs/2512.10042

Journal Information
Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓