DOAJ Open Access 2026

Decentralized Q-Learning Supervisory Control for Coordinated Multi-Loop Tuning in Pump Stations

David A. Brattley Wayne W. Weaver

Abstract

This paper introduces a reinforcement learning-based supervisory control architecture that oversees multiple Recursive Least Squares (RLS)-based self-tuning pump controllers and determines when each loop is permitted to adapt its gains. The supervisor learns adaptation policies that minimize interaction between loops while preserving responsiveness to changing hydraulic conditions. A two-loop pump station simulation is used to evaluate performance under product changes and transient flow disturbances. The results show that the supervisory layer reduces the number of simultaneous adaptation events by over 70%, leading to a 32% lower pressure-tracking error and 45% fewer gain-induced oscillations compared to conventional independent adaptive control. The reinforcement learning policy converges within 15 training episodes, resulting in stable adaptation scheduling and seamless transitions. The key novelty of this work lies in introducing decentralized reinforcement-learning-based coordination for adaptive pump control, enabling supervisory decision-making that actively prevents interference between controllers during transients. This approach provides a scalable and lightweight solution for coordinating multi-loop pump stations, enhancing robustness and operational performance in real-world pipeline systems.
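The abstract describes a supervisor that learns when each loop may adapt its gains, discouraging simultaneous adaptation. As an illustration only, the following is a minimal sketch of a tabular Q-learning supervisor that, at each supervisory step, permits at most one loop to update its RLS gains. The state discretization, reward shaping, thresholds, and all names here are assumptions for illustration, not details taken from the paper.

```python
import random

# Illustrative sketch: tabular Q-learning supervisor for two adaptive loops.
# State: whether each loop's tracking error exceeds a threshold.
# Actions: 0 = freeze both loops, 1 = allow loop 1 to adapt, 2 = allow loop 2.
# Permitting at most one loop per step is what suppresses simultaneous
# adaptation events. Parameters and reward shaping are hypothetical.

ACTIONS = (0, 1, 2)

class QSupervisor:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}                      # (state, action) -> Q-value
        self.alpha = alpha               # learning rate
        self.gamma = gamma               # discount factor
        self.epsilon = epsilon           # exploration probability

    def state(self, err1, err2, threshold=0.05):
        # Discretize continuous tracking errors into a small state space.
        return (err1 > threshold, err2 > threshold)

    def choose(self, s):
        # Epsilon-greedy selection over the adaptation permissions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((s, a), 0.0))

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((s_next, b), 0.0) for b in ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)

def reward(err1, err2, action):
    # Penalize total tracking error, plus a small cost for adapting, which
    # discourages gain changes that could excite inter-loop interaction.
    return -(err1 + err2) - (0.01 if action != 0 else 0.0)
```

In a closed-loop simulation, the supervisor would observe the two loops' errors each supervisory period, pick an action, apply the resulting adapt/freeze permissions to the RLS tuners, and call `update` with the next observed state; the paper's actual state, action, and reward definitions may differ.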

Authors (2)

David A. Brattley

Wayne W. Weaver

Citation Format

Brattley, D.A., Weaver, W.W. (2026). Decentralized Q-Learning Supervisory Control for Coordinated Multi-Loop Tuning in Pump Stations. https://doi.org/10.3390/machines14030299

Quick Access

PDF not directly available; view at source: doi.org/10.3390/machines14030299
Journal Information
Publication Year
2026
Source Database
DOAJ
DOI
10.3390/machines14030299
Access
Open Access ✓