Developing a Deep Reinforcement Learning Framework for Demand Side Response in Norway
Abstract
Transmission system operators maintain grid stability using reserve markets; aggregators help small participants contribute by pooling their flexibility. Reserve market prices and capacities remain uncertain to the aggregator until the bidding deadline, which calls for strategic bidding approaches. This paper introduces a deep reinforcement learning framework tailored to aggregators that coordinate exclusively small-scale loads participating in the Norwegian reserve markets. The proposed framework reflects a real-life bidding process, and multiple types of reinforcement learning models are used within it. Two datasets of hourly data, from June and October 2023, are used to evaluate how seasonal variation affects the models' performance. Each model is trained on the first three weeks of a dataset and then tested on its final week. Test performance is compared against baseline values to indicate whether the models are able to learn. Most models outperform the minimum baseline values, showing that they are able to learn and that the framework is feasible. Among the reinforcement learning models trained and tested within the framework, the Deep Q-Network model performs most consistently at a high level compared to the other models.
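The evaluation protocol described above (hourly data, three weeks for training, the final week for testing, comparison against a minimum baseline) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the synthetic price data are assumptions made for the example.

```python
import numpy as np

HOURS_PER_WEEK = 24 * 7  # 168 hourly observations per week

def split_train_test(hourly_series):
    """Split roughly one month of hourly data: the first three weeks
    for training, the fourth week for testing (as in the paper's setup)."""
    train = hourly_series[: 3 * HOURS_PER_WEEK]
    test = hourly_series[3 * HOURS_PER_WEEK : 4 * HOURS_PER_WEEK]
    return train, test

def beats_baseline(episode_rewards, min_baseline):
    """The paper's feasibility criterion, paraphrased: a model is
    considered able to learn if its test performance exceeds the
    minimum baseline value."""
    return float(np.mean(episode_rewards)) > min_baseline

# Hypothetical data: four weeks of synthetic hourly reserve prices.
rng = np.random.default_rng(0)
prices = rng.uniform(10.0, 50.0, size=4 * HOURS_PER_WEEK)
train, test = split_train_test(prices)
```

With this split, `train` holds 504 hourly samples and `test` holds 168, matching a three-week/one-week division of hourly data.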
Topics & Keywords
Authors (4)
Sander Meland
Mojtaba Yousefi
Ahmad Hemmati
Troels Arnfred Bojesen
Quick Access
- Publication Year
- 2025
- Source Database
- DOAJ
- DOI
- 10.1109/OAJPE.2025.3620107
- Access
- Open Access ✓