DOAJ Open Access 2024

Distributionally Robust Policy and Lyapunov-Certificate Learning

Kehan Long Jorge Cortes Nikolay Atanasov

Abstract

This article presents novel methods for synthesizing distributionally robust stabilizing neural controllers and certificates for control systems under model uncertainty. A key challenge in designing controllers with stability guarantees for uncertain systems is the accurate determination of and adaptation to shifts in model parametric uncertainty during online deployment. We tackle this with a novel distributionally robust formulation of the Lyapunov derivative chance constraint ensuring a monotonic decrease of the Lyapunov certificate. To avoid the computational complexity involved in dealing with the space of probability measures, we identify a sufficient condition in the form of deterministic convex constraints that ensures the Lyapunov derivative constraint is satisfied. We integrate this condition into a loss function for training a neural network-based controller and show that, for the resulting closed-loop system, the global asymptotic stability of its equilibrium can be certified with high confidence, even with Out-of-Distribution (OoD) model uncertainties. To demonstrate the efficacy and efficiency of the proposed methodology, we compare it with an uncertainty-agnostic baseline approach and several reinforcement learning approaches in two control problems in simulation. Open-source implementations of the examples are available at <uri>https://github.com/KehanLong/DR_Stabilizing_Policy</uri>.
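As a rough illustration of the kind of constraint described above, the chance constraint that the Lyapunov derivative decreases with high probability can be relaxed into a convex Conditional Value-at-Risk (CVaR) penalty over sampled model parameters. The sketch below is a generic CVaR surrogate, not the authors' exact formulation; the function names and the sampled-derivative input are illustrative assumptions.

```python
import numpy as np

def cvar(samples, alpha):
    """CVaR at level alpha: mean of the worst (1 - alpha) fraction of
    samples. It is a standard convex upper bound used to enforce a
    chance constraint of the form P[sample <= 0] >= alpha."""
    k = max(1, int(np.ceil((1 - alpha) * len(samples))))
    worst = np.sort(samples)[-k:]  # largest (most violating) values
    return worst.mean()

def lyapunov_cvar_loss(vdot_samples, alpha=0.95):
    """Penalize a positive CVaR of sampled Lyapunov derivatives
    (Vdot under sampled uncertain model parameters), encouraging
    Vdot <= 0 with probability at least alpha."""
    return max(0.0, cvar(vdot_samples, alpha))
```

In a training loop, `vdot_samples` would be computed by differentiating a neural Lyapunov candidate along closed-loop dynamics evaluated at parameter samples, and the penalty added to the controller's loss.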

Authors (3)

Kehan Long

Jorge Cortes

Nikolay Atanasov

Citation Format

Long, K., Cortes, J., Atanasov, N. (2024). Distributionally Robust Policy and Lyapunov-Certificate Learning. https://doi.org/10.1109/OJCSYS.2024.3440051

Journal Information
Publication Year
2024
Source Database
DOAJ
DOI
10.1109/OJCSYS.2024.3440051
Access
Open Access ✓