arXiv Open Access 2023

Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees

James Queeney, Erhan Can Ozcan, Ioannis Ch. Paschalidis, Christos G. Cassandras

Abstract

Robustness and safety are critical for the trustworthy deployment of deep reinforcement learning. Real-world decision making applications require algorithms that can guarantee robust performance and safety in the presence of general environment disturbances, while making limited assumptions on the data collection process during training. In order to accomplish this goal, we introduce a safe reinforcement learning framework that incorporates robustness through the use of an optimal transport cost uncertainty set. We provide an efficient implementation based on applying Optimal Transport Perturbations to construct worst-case virtual state transitions, which does not impact data collection during training and does not require detailed simulator access. In experiments on continuous control tasks with safety constraints, our approach demonstrates robust performance while significantly improving safety at deployment time compared to standard safe reinforcement learning.
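The abstract describes constructing worst-case virtual state transitions by perturbing sampled next states within an optimal transport cost uncertainty set, without altering data collection. A minimal illustrative sketch of that idea follows, assuming a simple per-sample L2 transport budget and an analytically differentiable critic (the function names `worst_case_transition`, `value_grad`, and the quadratic toy value function are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def worst_case_transition(next_state, value_grad, eps=0.1, steps=10, lr=0.05):
    """Illustrative sketch: perturb a sampled next state within an L2 budget
    eps to minimize the critic's value, producing a worst-case *virtual*
    transition. The paper's OTP method uses a general optimal transport cost
    uncertainty set; the L2 ball here is a simplifying assumption."""
    delta = np.zeros_like(next_state)
    for _ in range(steps):
        # gradient step on V(s' + delta): move toward lower value
        delta -= lr * value_grad(next_state + delta)
        # project delta back onto the transport-cost ball of radius eps
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta *= eps / norm
    return next_state + delta

# Toy critic (assumed, for illustration): V(s) = -||s||^2, so grad V = -2s.
V = lambda s: -float(np.dot(s, s))
gV = lambda s: -2.0 * s

s_next = np.array([0.5, -0.3])
s_virtual = worst_case_transition(s_next, gV, eps=0.1)
```

The perturbed `s_virtual` stays within the budget of the observed `s_next` while attaining a lower value estimate, so training against these virtual transitions targets robustness without changing the environment interaction itself.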



Citation Format

Queeney, J., Ozcan, E. C., Paschalidis, I. Ch., & Cassandras, C. G. (2023). Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees. arXiv preprint arXiv:2301.13375. https://arxiv.org/abs/2301.13375

Journal Information
Publication Year: 2023
Language: en
Source Database: arXiv
Access: Open Access ✓