Resilient dispatching optimization of power system driven by deep reinforcement learning model


Bibliographic Details
Main Authors: Haifeng Zhang, Yifu Zhang, Jiajun Zhang, Xiangdong Meng, Jiazu Sun
Format: Article
Language: English
Published: Springer, 2025-07-01
Series: Discover Artificial Intelligence
Subjects:
Online Access: https://doi.org/10.1007/s44163-025-00451-1
Description
Summary: Power systems face many complex and severe challenges in today's power sector. Within the system, the stability and reliability of the power supply are affected by the rapid adjustment of the energy structure; outside it, increasingly frequent extreme weather events pose a considerable threat to stable operation. To address these challenges, this study applies a deep reinforcement learning model to resilient power system scheduling optimization. We propose an innovative power system scheduling strategy based on Deep Reinforcement Learning (DRL). The approach combines large-scale historical data with advanced predictive models in a well-designed decision-making framework that can simulate various complex operating conditions online and, through real-time monitoring and analysis of grid operating status, optimize unit commitment and load dispatch in real time across different scenarios and conditions. Through long-term simulation training, the model continuously learns and masters the optimal regulation policy. When the system encounters a sudden disturbance, the model responds quickly, accurately regulates power output, maintains the balance between power supply and demand, and keeps the system operating stably. In a series of rigorous experimental verifications, the deep reinforcement learning model shows significant advantages. Compared with conventional scheduling methods, the system's operating cost under normal operation is reduced by at least 10%, mainly because the optimized unit commitment and load dispatch avoid unnecessary energy waste and equipment wear.
Under disruptions, system recovery time is reduced to less than half an hour, allowing the power system to return quickly to normal operation and reducing the social and economic impact of outages. Notably, the power supply restoration success rate reaches 96% in the simulated severe natural disaster scenario, which demonstrates that the model significantly improves the system's resistance to disturbances and its power supply reliability, markedly enhances the efficiency and reliability of power system scheduling, and provides a strong guarantee for the stable operation of the power system.
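The learn-and-regulate loop the abstract describes can be illustrated, in a highly simplified form, by tabular Q-learning on a toy supply-demand balancing task. This is a minimal sketch under stated assumptions, not the paper's actual model: the `ToyGridEnv` environment, its state discretization, and all parameter values are hypothetical, and the paper presumably uses deep function approximation rather than a Q-table.

```python
import random

class ToyGridEnv:
    """Hypothetical toy environment: the agent nudges generator output
    up or down each step to match a fixed demand level."""
    def __init__(self, start_gen=55, demand=50):
        self.start_gen = start_gen
        self.demand = demand
    def reset(self):
        self.gen = self.start_gen
        return self._state()
    def _state(self):
        # state = supply-demand imbalance, clipped to [-5, 5]
        return max(-5, min(5, self.gen - self.demand))
    def step(self, action):
        # actions: 0 = decrease output, 1 = hold, 2 = increase output
        self.gen += action - 1
        reward = -abs(self.gen - self.demand)  # penalize imbalance
        return self._state(), reward

def train(episodes=500, steps=30, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning: repeatedly act, observe reward, update Q-values."""
    rng = random.Random(seed)
    env = ToyGridEnv()
    q = {}  # state -> list of 3 action values
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            qs = q.setdefault(s, [0.0, 0.0, 0.0])
            a = rng.randrange(3) if rng.random() < eps else qs.index(max(qs))
            s2, r = env.step(a)
            qs2 = q.setdefault(s2, [0.0, 0.0, 0.0])
            # standard Q-learning temporal-difference update
            qs[a] += alpha * (r + gamma * max(qs2) - qs[a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    env = ToyGridEnv()
    s = env.reset()
    for _ in range(10):  # greedy rollout with the learned policy
        a = max(range(3), key=lambda i: q.get(s, [0.0, 0.0, 0.0])[i])
        s, _ = env.step(a)
    print("final imbalance:", abs(env.gen - env.demand))
```

The greedy rollout drives generation toward demand and then holds, mirroring at toy scale the abstract's claim that the trained policy regulates output to maintain the supply-demand balance after a disturbance.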
ISSN:2731-0809