arXiv Open Access 2026

A Perturbation Approach to Unconstrained Linear Bandits

Andrew Jacobsen Dorian Baudry Shinji Ito Nicolò Cesa-Bianchi

Abstract

We revisit the standard perturbation-based approach of Abernethy et al. (2008) in the context of unconstrained Bandit Linear Optimization (uBLO). We show the surprising result that in the unconstrained setting, this approach effectively reduces Bandit Linear Optimization (BLO) to a standard Online Linear Optimization (OLO) problem. Our framework improves on prior work in several ways. First, we derive expected-regret guarantees when our perturbation scheme is combined with comparator-adaptive OLO algorithms, leading to new insights about the impact of different adversarial models on the resulting comparator-adaptive rates. We also extend our analysis to dynamic regret, obtaining the optimal $\sqrt{P_T}$ path-length dependencies without prior knowledge of $P_T$. We then develop the first high-probability guarantees for both static and dynamic regret in uBLO. Finally, we discuss lower bounds on the static regret, and prove the folklore $\Omega(\sqrt{dT})$ rate for adversarial linear bandits on the unit Euclidean ball, which is of independent interest.


Citation Format

Jacobsen, A., Baudry, D., Ito, S., Cesa-Bianchi, N. (2026). A Perturbation Approach to Unconstrained Linear Bandits. https://arxiv.org/abs/2603.28201

Journal Information
Publication Year
2026
Language
en
Source Database
arXiv
Access
Open Access ✓