Data augmented offline deep reinforcement learning for stochastic dynamic power dispatch

Bibliographic Details
Main Authors: Wencong Xiao, Tao Yu, Zhiwei Chen, Zhenning Pan, Yufeng Wu, Qianjin Liu
Format: Article
Language: English
Published: Elsevier 2025-08-01
Series: International Journal of Electrical Power & Energy Systems
Online Access: http://www.sciencedirect.com/science/article/pii/S0142061525002984
Description
Summary: Operating a power system under uncertainty while ensuring both economic efficiency and system security can be formulated as a stochastic dynamic economic dispatch (DED) problem. Deep reinforcement learning (DRL) offers a promising solution by learning dispatch policies through extensive system interaction and trial-and-error. However, the effectiveness of DRL is constrained by two key limitations: the high cost of real-time system interactions and the limited diversity of historical scenarios. To address these challenges, this paper proposes an offline deep reinforcement learning (ODRL) framework tailored for power system dispatch. First, a conditional generative adversarial network (CGAN) is employed to augment historical scenarios, thereby improving data diversity. The resulting training dataset combines both real and synthetically generated scenarios. Second, a conservative offline soft actor-critic (COSAC) algorithm is developed to learn dispatch policies directly from this hybrid offline dataset, eliminating the need for online interaction. Experimental results demonstrate that the proposed approach significantly outperforms both conventional DRL and existing offline learning methods in terms of reliability and economic performance.
ISSN: 0142-0615
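
The abstract names two mechanisms but gives no implementation detail. The sketches below illustrate how each is commonly realized; they are assumptions in the spirit of the paper, not the authors' code. All class names, conditioning variables, and hyperparameters (Generator, cond_dim, cons_weight, and so on) are hypothetical.

First, a minimal conditional GAN for scenario augmentation, written here in PyTorch: a generator maps noise plus a condition vector (for example, a day-type encoding) to a synthetic 24-step load/renewable scenario, and a discriminator scores (scenario, condition) pairs.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps noise + a condition vector to a synthetic `horizon`-step scenario."""
    def __init__(self, noise_dim=32, cond_dim=8, horizon=24, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=-1))

class Discriminator(nn.Module):
    """Scores whether a (scenario, condition) pair looks real."""
    def __init__(self, cond_dim=8, horizon=24, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(horizon + cond_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=-1))

# Augmentation step: sample synthetic scenarios for a chosen condition and
# mix them with the historical dataset before offline policy training.
generator = Generator()
z = torch.randn(256, 32)
cond = torch.zeros(256, 8)
cond[:, 3] = 1.0                     # hypothetical one-hot day-type condition
synthetic = generator(z, cond)       # (256, 24) synthetic scenarios

Second, a conservative critic update in the spirit of COSAC. A common way to make soft actor-critic safe for purely offline data is to add a CQL-style penalty that pushes Q-values down on actions the current policy proposes and up on actions actually present in the dataset; the sketch below assumes that construction.

class GaussianPolicy(nn.Module):
    """Tanh-squashed Gaussian policy over continuous dispatch actions."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def sample(self, state):
        h = self.body(state)
        dist = torch.distributions.Normal(
            self.mu(h), self.log_std(h).clamp(-5, 2).exp())
        u = dist.rsample()
        a = torch.tanh(u)
        # tanh change-of-variables correction for the log-density
        logp = (dist.log_prob(u)
                - torch.log(1 - a.pow(2) + 1e-6)).sum(-1, keepdim=True)
        return a, logp

class QNetwork(nn.Module):
    """Q(s, a) critic over state-action pairs."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def conservative_critic_loss(q_net, target_q, policy, batch,
                             gamma=0.99, alpha=0.2, cons_weight=5.0):
    """SAC Bellman error plus a conservative penalty that discourages
    out-of-distribution dispatch actions when learning offline."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        a_next, logp_next = policy.sample(s_next)
        target = r + gamma * (1.0 - done) * (
            target_q(s_next, a_next) - alpha * logp_next)
        a_pi, _ = policy.sample(s)   # actions the current policy would take
    bellman = ((q_net(s, a) - target) ** 2).mean()
    penalty = q_net(s, a_pi).mean() - q_net(s, a).mean()
    return bellman + cons_weight * penalty

Minimizing this loss keeps the critic pessimistic about actions unsupported by the hybrid real-plus-synthetic dataset, which is what allows the dispatch policy to be trained without any online interaction with the grid.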