arXiv Open Access 2025

A New Error Temporal Difference Algorithm for Deep Reinforcement Learning in Microgrid Optimization

Fulong Yao, Wanqing Zhao, Matthew Forshaw

Abstract

Predictive control approaches based on deep reinforcement learning (DRL) have gained significant attention in microgrid energy optimization. However, existing research often overlooks the uncertainty stemming from imperfect prediction models, which can lead to suboptimal control strategies. This paper presents a new error temporal difference (ETD) algorithm for DRL to address the uncertainty in predictions, aiming to improve the performance of microgrid operations. First, a microgrid system integrated with renewable energy sources (RES) and energy storage systems (ESS), along with its Markov decision process (MDP), is modelled. Second, a predictive control approach based on a deep Q network (DQN) is presented, in which a weighted average algorithm and a new ETD algorithm are designed to quantify and address the prediction uncertainty, respectively. Finally, simulations on a real-world US dataset suggest that the developed ETD effectively improves the performance of DRL in optimizing microgrid operations.
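The abstract does not state the ETD update rule itself. For context, below is a minimal sketch of the standard temporal-difference (TD) target and error used in DQN, plus a hypothetical confidence weight `w` standing in for the kind of prediction-uncertainty weighting the abstract describes; the function names, the weight, and the values are illustrative assumptions, not the paper's method:

```python
# Standard DQN temporal-difference (TD) target for a transition (s, a, r, s'):
#     y = r + gamma * max_a' Q_target(s', a')
# The TD error delta = y - Q(s, a) drives the update of Q(s, a).

def td_target(reward: float, gamma: float, next_q_values: list[float]) -> float:
    """Standard DQN bootstrap target: reward plus discounted max next-state value."""
    return reward + gamma * max(next_q_values)

def td_error(q_sa: float, reward: float, gamma: float,
             next_q_values: list[float]) -> float:
    """TD error: difference between the bootstrap target and the current estimate."""
    return td_target(reward, gamma, next_q_values) - q_sa

# Hypothetical: scale the update by a confidence weight w in [0, 1], so that
# transitions whose rewards rest on uncertain forecasts (e.g. predicted RES
# output) move Q(s, a) less. This is an illustrative stand-in, NOT the ETD rule.
def weighted_update(q_sa: float, reward: float, gamma: float,
                    next_q_values: list[float], lr: float = 0.1,
                    w: float = 1.0) -> float:
    """One tabular-style update of Q(s, a), damped by confidence weight w."""
    return q_sa + lr * w * td_error(q_sa, reward, gamma, next_q_values)
```

With `w = 1.0` this reduces to the ordinary TD update; smaller `w` simply slows learning on uncertain transitions.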


Authors (3)

Fulong Yao, Wanqing Zhao, Matthew Forshaw

Citation Format

Yao, F., Zhao, W., & Forshaw, M. (2025). A New Error Temporal Difference Algorithm for Deep Reinforcement Learning in Microgrid Optimization. arXiv. https://arxiv.org/abs/2511.18093

Journal Information

Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓