arXiv Open Access 2024

Pretraining Decision Transformers with Reward Prediction for In-Context Multi-task Structured Bandit Learning

Subhojyoti Mukherjee Josiah P. Hanna Qiaomin Xie Robert Nowak

Abstract

We study learning to learn for the multi-task structured bandit problem where the goal is to learn a near-optimal algorithm that minimizes cumulative regret. The tasks share a common structure and an algorithm should exploit the shared structure to minimize the cumulative regret for an unseen but related test task. We use a transformer as a decision-making algorithm to learn this shared structure from data collected by a demonstrator on a set of training task instances. Our objective is to devise a training procedure such that the transformer will learn to outperform the demonstrator's learning algorithm on unseen test task instances. Prior work on pretraining decision transformers either requires privileged information like access to optimal arms or cannot outperform the demonstrator. Going beyond these approaches, we introduce a pre-training approach that trains a transformer network to learn a near-optimal policy in-context. This approach leverages the shared structure across tasks, does not require access to optimal actions, and can outperform the demonstrator. We validate these claims over a wide variety of structured bandit problems to show that our proposed solution is general and can quickly identify expected rewards on unseen test tasks to support effective exploration.
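The core idea in the abstract can be illustrated with a toy sketch: predict the expected reward of every arm from the in-context history of (arm, reward) pairs, then act on those predictions, without ever being told the optimal arm. Below, a ridge regressor over shared arm features stands in for the pretrained transformer, and all names, dimensions, and the linear reward structure are illustrative assumptions rather than the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 10, 3                         # arms and shared feature dimension
features = rng.normal(size=(K, d))   # structure shared across tasks

def predict_rewards(history):
    """In-context reward prediction: estimate the hidden task parameter
    from observed (arm, reward) pairs and score every arm.

    This ridge solve is a stand-in for the transformer's in-context
    inference; the point is that predictions come only from logged
    interactions, never from privileged access to the optimal arm."""
    A = np.array([features[a] for a, _ in history])
    r = np.array([rew for _, rew in history])
    theta_hat = np.linalg.solve(A.T @ A + 0.1 * np.eye(d), A.T @ r)
    return features @ theta_hat

# Unseen test task: a hidden parameter the learner never observes directly.
theta = rng.normal(size=d)
history = []
for t in range(50):
    if t < K:
        a = t                                   # pull each arm once
    else:
        a = int(np.argmax(predict_rewards(history)))  # greedy on predictions
    reward = features[a] @ theta + 0.1 * rng.normal()
    history.append((a, reward))
```

Because the feature map is shared across tasks, a handful of in-context observations suffices to rank the arms of a fresh task, which is the sense in which the abstract's method "can quickly identify expected rewards on unseen test tasks."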

Citation Format

Mukherjee, S., Hanna, J.P., Xie, Q., Nowak, R. (2024). Pretraining Decision Transformers with Reward Prediction for In-Context Multi-task Structured Bandit Learning. https://arxiv.org/abs/2406.05064

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓