Resilient dispatching optimization of power system driven by deep reinforcement learning model


Bibliographic Details
Main Authors: Haifeng Zhang, Yifu Zhang, Jiajun Zhang, Xiangdong Meng, Jiazu Sun
Format: Article
Language: English
Published: Springer, 2025-07-01
Series: Discover Artificial Intelligence
Subjects: Deep reinforcement learning, Power mobilization, Scheduling resilience, System optimization
Online Access: https://doi.org/10.1007/s44163-025-00451-1
_version_ 1849764307507609600
author Haifeng Zhang
Yifu Zhang
Jiajun Zhang
Xiangdong Meng
Jiazu Sun
author_facet Haifeng Zhang
Yifu Zhang
Jiajun Zhang
Xiangdong Meng
Jiazu Sun
author_sort Haifeng Zhang
collection DOAJ
description Abstract Power systems face many complex and severe challenges in today's power sector. Within the system, the stability and reliability of the power supply are affected by the rapid restructuring of the energy mix. Outside the system, increasingly frequent extreme weather events pose a considerable threat to stable operation. To address these challenges, this study applies a deep reinforcement learning model to resilient power system scheduling optimization. We propose an innovative power system scheduling strategy based on Deep Reinforcement Learning (DRL). The approach combines large-scale historical data with advanced predictive models to build a decision-making framework that can simulate various complex operating conditions online and, through real-time monitoring and analysis of grid operating status, optimize unit commitment and load distribution in real time for different scenarios and conditions. Through long-term simulation training, the DRL model continuously learns and masters the optimal regulation policy. When the system encounters a sudden disturbance, the model responds quickly, accurately regulates power output, maintains the balance between supply and demand, and keeps the power system operating stably. A series of rigorous experimental verifications shows that the DRL model has significant advantages. Compared with conventional scheduling methods, the system's operating cost under normal operation is reduced by at least 10%, mainly because the model optimizes unit commitment and load distribution, avoiding unnecessary energy waste and equipment wear. Under disruptions, system recovery time is reduced to less than half an hour.
This allows the power system to quickly return to normal operation and reduces the social and economic impact of power outages. Notably, the power supply restoration success rate reaches 96% in the simulated natural disaster scenario, which demonstrates that the model significantly improves the power system's resistance to interference and its power supply reliability, markedly enhances the efficiency and reliability of power system scheduling, and provides a strong guarantee for stable operation.
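The abstract above describes a DRL agent that learns to match generator output to demand while minimizing operating cost. The paper's actual model, data, and grid are not available here, so the following is only a minimal illustrative sketch under invented assumptions: a toy tabular RL loop where the "state" is a discrete demand scenario, the "action" is a discrete dispatch level, and the reward penalizes fuel cost plus any unmet demand. All names, demand values, and cost parameters are hypothetical.

```python
import random

# Toy dispatch-learning sketch (illustrative only; NOT the paper's model).
# State: index of a discrete demand scenario. Action: one dispatch level.
# Reward: negative fuel cost minus a heavy penalty for unmet demand,
# loosely mimicking the supply-demand balance objective described above.

DEMANDS = [30, 50, 70]       # MW demand scenarios (hypothetical values)
OUTPUTS = [30, 50, 70, 90]   # MW dispatch choices (hypothetical values)
FUEL_COST = 2.0              # $/MW generated (hypothetical)
PENALTY = 1000.0             # $/MW of unmet demand (hypothetical)

def reward(demand, output):
    """Cost of a dispatch decision: fuel plus shortfall penalty, negated."""
    shortfall = max(0, demand - output)
    return -(FUEL_COST * output + PENALTY * shortfall)

def train(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    """One-step tabular value learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(DEMANDS)) for a in range(len(OUTPUTS))}
    for _ in range(episodes):
        s = rng.randrange(len(DEMANDS))          # a random demand scenario arrives
        if rng.random() < eps:                   # explore occasionally
            a = rng.randrange(len(OUTPUTS))
        else:                                    # otherwise act greedily
            a = max(range(len(OUTPUTS)), key=lambda a: q[(s, a)])
        r = reward(DEMANDS[s], OUTPUTS[a])
        q[(s, a)] += alpha * (r - q[(s, a)])     # move estimate toward observed reward
    return q

def dispatch(q, s):
    """Greedy dispatch for demand scenario s under the learned values."""
    return OUTPUTS[max(range(len(OUTPUTS)), key=lambda a: q[(s, a)])]

q = train()
# Greedy policy per demand scenario: cheapest output that still covers demand.
print([dispatch(q, s) for s in range(len(DEMANDS))])
```

In this toy, the learned greedy policy converges to the cheapest feasible output for each demand level, which is the scheduling behavior the abstract attributes to its (far richer) DRL framework: the real system would add grid constraints, unit commitment over many generators, and disturbance scenarios.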
format Article
id doaj-art-7abc6bd0274a4da38e5113d4bbab9211
institution DOAJ
issn 2731-0809
language English
publishDate 2025-07-01
publisher Springer
record_format Article
series Discover Artificial Intelligence
spelling doaj-art-7abc6bd0274a4da38e5113d4bbab92112025-08-20T03:05:10ZengSpringerDiscover Artificial Intelligence2731-08092025-07-015112710.1007/s44163-025-00451-1Resilient dispatching optimization of power system driven by deep reinforcement learning modelHaifeng Zhang0Yifu Zhang1Jiajun Zhang2Xiangdong Meng3Jiazu Sun4State Grid Jilin Electric Power Research InstituteState Grid Jilin Electric Power Research InstituteState Grid Jilin Electric Power Research InstituteState Grid Jilin Electric Power Research InstituteKey Laboratory of Smart Grid of Ministry of Education, Tianjin Universityhttps://doi.org/10.1007/s44163-025-00451-1Deep reinforcement learningPower mobilizationScheduling resilienceSystem optimization
spellingShingle Haifeng Zhang
Yifu Zhang
Jiajun Zhang
Xiangdong Meng
Jiazu Sun
Resilient dispatching optimization of power system driven by deep reinforcement learning model
Discover Artificial Intelligence
Deep reinforcement learning
Power mobilization
Scheduling resilience
System optimization
title Resilient dispatching optimization of power system driven by deep reinforcement learning model
title_full Resilient dispatching optimization of power system driven by deep reinforcement learning model
title_fullStr Resilient dispatching optimization of power system driven by deep reinforcement learning model
title_full_unstemmed Resilient dispatching optimization of power system driven by deep reinforcement learning model
title_short Resilient dispatching optimization of power system driven by deep reinforcement learning model
title_sort resilient dispatching optimization of power system driven by deep reinforcement learning model
topic Deep reinforcement learning
Power mobilization
Scheduling resilience
System optimization
url https://doi.org/10.1007/s44163-025-00451-1
work_keys_str_mv AT haifengzhang resilientdispatchingoptimizationofpowersystemdrivenbydeepreinforcementlearningmodel
AT yifuzhang resilientdispatchingoptimizationofpowersystemdrivenbydeepreinforcementlearningmodel
AT jiajunzhang resilientdispatchingoptimizationofpowersystemdrivenbydeepreinforcementlearningmodel
AT xiangdongmeng resilientdispatchingoptimizationofpowersystemdrivenbydeepreinforcementlearningmodel
AT jiazusun resilientdispatchingoptimizationofpowersystemdrivenbydeepreinforcementlearningmodel