Decentralized Q-Learning Supervisory Control for Coordinated Multi-Loop Tuning in Pump Stations
Abstract
This paper introduces a reinforcement learning-based supervisory control architecture that oversees multiple Recursive Least Squares (RLS) self-tuning pump controllers and determines when each loop is permitted to adapt its gains. The supervisor learns adaptation policies that minimize interaction between loops while preserving responsiveness to changing hydraulic conditions. A two-loop pump station simulation is used to evaluate performance under product changes and transient flow disturbances. The results show that the supervisory layer reduces the number of simultaneous adaptation events by over 70%, leading to 32% lower pressure-tracking error and 45% fewer gain-induced oscillations than conventional independent adaptive control. The reinforcement learning policy converges within 15 training episodes, yielding stable adaptation scheduling and seamless transitions. The key novelty of this work is its decentralized reinforcement-learning-based coordination for adaptive pump control, enabling supervisory decision-making that actively prevents interference between controllers during transients. The approach provides a scalable, lightweight solution for coordinating multi-loop pump stations, enhancing robustness and operational performance in real-world pipeline systems.
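The abstract's central mechanism — a learned policy that gates which loop may adapt at each step — can be illustrated with a toy tabular Q-learning supervisor. This is a hedged sketch under invented assumptions, not the authors' implementation: the binary error-flag state, the reward shaping, and the `train` and `greedy_action` routines below are hypothetical constructions for clarity only.

```python
import random

# Hypothetical sketch (not the paper's implementation): a tabular
# Q-learning supervisor deciding which of two RLS loops may adapt.
# State: (loop0_error_high, loop1_error_high). Actions grant permission:
# 0 = neither loop, 1 = loop 0 only, 2 = loop 1 only, 3 = both loops.

ACTIONS = [(False, False), (True, False), (False, True), (True, True)]
STATES = [(a, b) for a in (False, True) for b in (False, True)]

def reward(state, action):
    """Assumed reward: reward permitting an erroring loop to adapt,
    penalize simultaneous adaptation (loop-interaction risk)."""
    permit = ACTIONS[action]
    r = sum(1.0 for err, p in zip(state, permit) if err and p)
    if permit[0] and permit[1]:
        r -= 1.5  # discourage adapting both loops at once
    return r

def train(episodes=500, steps=20, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # maps (state, action) -> estimated value
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(steps):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda x: Q.get((s, x), 0.0))
            r = reward(s, a)
            s2 = rng.choice(STATES)  # stand-in for hydraulic dynamics
            best_next = max(Q.get((s2, x), 0.0) for x in range(len(ACTIONS)))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)  # Q-learning update
            s = s2
    return Q

def greedy_action(Q, state):
    """Action the learned policy would take in a given state."""
    return max(range(len(ACTIONS)), key=lambda x: Q.get((state, x), 0.0))
```

Under this toy reward, the learned greedy policy permits an erroring loop to adapt while avoiding scheduling both loops at once, mirroring the kind of coordinated adaptation the paper reports for the real supervisory layer.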
Authors (2)
David A. Brattley
Wayne W. Weaver
Quick Access
- Publication Year
- 2026
- Source Database
- DOAJ
- DOI
- 10.3390/machines14030299
- Access
- Open Access ✓