arXiv Open Access 2026

SCALAR: Learning and Composing Skills through LLM Guided Symbolic Planning and Deep RL Grounding

Renos Zabounidis, Yue Wu, Simon Stepputtis, Woojun Kim, Yuanzhi Li, +2 more

Abstract

LM-based agents excel when given high-level action APIs but struggle to ground language into low-level control. Prior work has LLMs generate skills or reward functions for RL, but these one-shot approaches lack feedback to correct specification errors. We introduce SCALAR, a bidirectional framework coupling LLM planning with RL through a learned skill library. The LLM proposes skills with preconditions and effects; RL trains policies for each skill and feeds back execution results to iteratively refine specifications, improving robustness to initial errors. Pivotal Trajectory Analysis corrects LLM priors by analyzing RL trajectories; Frontier Checkpointing optionally saves environment states at skill boundaries to improve sample efficiency. On Craftax, SCALAR achieves 88.2% diamond collection, a 1.9x improvement over the best baseline, and reaches the Gnomish Mines 9.1% of the time where prior methods fail entirely.
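The bidirectional loop the abstract describes can be sketched in miniature: an LLM proposes a skill specification (preconditions and effects), RL execution is attempted, and failed trajectories are used to refine the specification. This is a hypothetical illustration only; all names (`SkillSpec`, `propose_skill`, `refine`) and the toy precondition logic are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class SkillSpec:
    """A skill as the LLM might specify it: symbolic preconditions and effects."""
    name: str
    preconditions: set
    effects: set

def propose_skill(goal):
    # Stand-in for the LLM proposer: the initial spec may be incomplete.
    return SkillSpec(name=f"achieve_{goal}",
                     preconditions={"has_pickaxe"},   # LLM prior, possibly wrong
                     effects={goal})

def execute_and_observe(spec, world_state):
    # Stand-in for an RL rollout: success requires the *true* preconditions,
    # which the LLM does not know a priori.
    true_preconditions = {"has_pickaxe", "near_ore"}
    success = true_preconditions <= world_state
    missing = set() if success else true_preconditions - spec.preconditions
    return success, missing

def refine(spec, observed_missing):
    # Stand-in for Pivotal Trajectory Analysis: preconditions implied by
    # failed trajectories are folded back into the specification.
    spec.preconditions |= observed_missing
    return spec

# Iterate propose -> ground -> refine until the spec matches reality.
spec = propose_skill("has_diamond")
world = {"has_pickaxe"}                 # "near_ore" is initially unsatisfied
for _ in range(3):
    ok, missing = execute_and_observe(spec, world)
    if ok:
        break
    spec = refine(spec, missing)
    world |= spec.preconditions         # planner satisfies refined preconditions
```

After one failed rollout the spec gains `near_ore`, the planner satisfies it, and the second attempt succeeds, mirroring how execution feedback corrects initial specification errors.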


Authors (7)

Renos Zabounidis
Yue Wu
Simon Stepputtis
Woojun Kim
Yuanzhi Li
Tom Mitchell
Katia Sycara

Citation Format

Zabounidis, R., Wu, Y., Stepputtis, S., Kim, W., Li, Y., Mitchell, T., & Sycara, K. (2026). SCALAR: Learning and Composing Skills through LLM Guided Symbolic Planning and Deep RL Grounding. https://arxiv.org/abs/2603.09036

Journal Information
Publication Year
2026
Language
en
Source Database
arXiv
Access
Open Access ✓