arXiv Open Access 2025

Learning-Based Stable Optimal Control for Infinite-Time Nonlinear Regulation Problems

Han Wang, Di Wu, Lin Cheng, Shengping Gong, Xu Huang

Abstract

Infinite-time nonlinear optimal regulation control is widely used in aerospace engineering as a systematic method for synthesizing stable controllers. However, conventional methods often rely on linearization assumptions, while recent learning-based approaches rarely provide stability guarantees. This paper proposes a learning-based framework for learning a stable optimal controller for nonlinear optimal regulation problems. First, leveraging the equivalence between the Pontryagin Maximum Principle (PMP) and the Hamilton-Jacobi-Bellman (HJB) equation, we extend the backward generation of optimal examples (BGOE) method to infinite-time optimal regulation problems. A state-transition-matrix-guided data generation method is then proposed to efficiently generate a complete dataset covering the desired state space. Finally, we incorporate the Lyapunov stability condition into the learning framework, ensuring stability of the learned optimal policy by jointly learning the optimal value function and control policy. Simulations on three nonlinear optimal regulation problems show that the learned policy achieves near-optimal regulation control. Code is provided at https://github.com/wong-han/PaperNORC
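The abstract's final step, incorporating the Lyapunov stability condition into the learning objective, can be illustrated with a minimal sketch. The toy scalar system (x' = x³ + u), the quadratic value candidate V(x) = p·x², and all function names below are illustrative assumptions, not the authors' implementation: the idea shown is only that a hinge penalty on states where V fails to decrease can serve as a stability term in a joint value/policy loss.

```python
# Minimal sketch (assumptions, not the paper's code): penalizing violations
# of the Lyapunov decrease condition V_dot(x) < 0 over sampled states.
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics x' = x^3 + u, assumed for illustration."""
    return x**3 + u

def V_dot(x, u, p=1.0):
    """Derivative of the candidate value V(x) = p*x^2 along trajectories:
    dV/dx * f(x, u) = 2*p*x*f(x, u)."""
    return 2.0 * p * x * f(x, u)

def lyapunov_penalty(policy, xs, margin=1e-3):
    """Hinge penalty mean(max(0, V_dot + margin*x^2)); zero exactly when
    V decreases with margin at every sampled state."""
    viol = np.maximum(0.0, V_dot(xs, policy(xs)) + margin * xs**2)
    return float(viol.mean())

xs = np.linspace(-2.0, 2.0, 101)
stabilizing = lambda x: -x**3 - x       # cancels x^3, adds linear damping
open_loop = lambda x: np.zeros_like(x)  # no control: x' = x^3 is unstable

print(lyapunov_penalty(stabilizing, xs))  # 0.0  (condition satisfied)
print(lyapunov_penalty(open_loop, xs))    # > 0  (condition violated)
```

In a full training loop, this penalty would be added to the regression losses on the value function and policy, so that minimizing the joint objective drives the learned policy toward satisfying the Lyapunov condition on the sampled state space.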


Authors (5)

Han Wang
Di Wu
Lin Cheng
Shengping Gong
Xu Huang

Citation Format

Wang, H., Wu, D., Cheng, L., Gong, S., & Huang, X. (2025). Learning-Based Stable Optimal Control for Infinite-Time Nonlinear Regulation Problems. https://arxiv.org/abs/2506.10291

Journal Information
Year Published: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓