Efficient optimal power flow learning: A deep reinforcement learning with physics-driven critic model

The transition to decarbonized energy systems presents significant operational challenges due to increased uncertainties and complex dynamics. Deep reinforcement learning (DRL) has emerged as a powerful tool for optimizing power system operations. However, most existing DRL approaches rely on approximated data-driven critic networks, requiring numerous risky interactions to explore the environment and often facing estimation errors. To address these limitations, this paper proposes an efficient DRL algorithm with a physics-driven critic model, namely a differentiable holomorphic embedding load flow model (D-HELM). This approach enables accurate policy gradient computation through a differentiable loss function based on system states of realized uncertainties, simplifying both the replay buffer and the learning process. By leveraging continuation power flow principles, D-HELM ensures operable, feasible solutions while accelerating gradient steps through simple matrix operations. Simulation results across various test systems demonstrate the computational superiority of the proposed approach, outperforming state-of-the-art DRL algorithms during training and model-based solvers in online operations. This work represents a potential breakthrough in real-time energy system operations, with extensions to security-constrained decision-making, voltage control, unit commitment, and multi-energy systems.

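The abstract describes replacing a learned critic network with a differentiable physics model, so that the policy gradient comes from backpropagating operating cost plus constraint penalties, evaluated at realized uncertainties, through the power flow equations. The sketch below illustrates only that structure, not the paper's method: a toy differentiable DC power flow stands in for the AC holomorphic-embedding model (D-HELM), and the 3-bus data, function names, and penalty weights are illustrative assumptions.

```python
# Minimal sketch of a physics-driven critic (NOT the paper's D-HELM):
# the policy gradient is obtained by backpropagating operating cost plus
# constraint penalties through a differentiable power flow model, instead
# of through a learned critic network. A toy 3-bus DC power flow stands in
# for the AC holomorphic-embedding model; all data and names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 3-bus, 3-line example: bus 0 = slack (expensive) gen, bus 1 = cheap gen, buses 1-2 carry load.
B_line = torch.tensor([10.0, 10.0, 10.0])                       # line susceptances (p.u.)
A_inc = torch.tensor([[1., -1., 0.],                            # line 0: bus 0 -> bus 1
                      [0., 1., -1.],                            # line 1: bus 1 -> bus 2
                      [1., 0., -1.]])                           # line 2: bus 0 -> bus 2

def dc_flows(p_inj):
    """Differentiable DC power flow: bus injections -> line flows (theta_0 = 0)."""
    B_bus = A_inc.T @ torch.diag(B_line) @ A_inc                # bus susceptance matrix
    theta_red = torch.linalg.solve(B_bus[1:, 1:], p_inj[:, 1:].T).T
    theta = torch.cat([torch.zeros(p_inj.shape[0], 1), theta_red], dim=1)
    return (theta @ A_inc.T) * B_line                           # flow on each line

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
c_cheap, c_slack, f_max = 20.0, 50.0, 0.8                       # $/MW costs, line limit (p.u.)

for step in range(500):
    load = 0.5 + 0.5 * torch.rand(64, 2)                        # realized demand at buses 1 and 2
    pg1 = policy(load)                                          # cheap-gen dispatch in [0, 1] p.u.
    slack = load.sum(dim=1, keepdim=True) - pg1                 # slack gen balances the system
    p_inj = torch.cat([slack, pg1 - load[:, :1], -load[:, 1:]], dim=1)
    flows = dc_flows(p_inj)
    penalty = torch.relu(flows.abs() - f_max).sum(dim=1)        # soft line-limit violations
    loss = (c_cheap * pg1 + c_slack * slack).squeeze(1).mean() + 100.0 * penalty.mean()
    opt.zero_grad()
    loss.backward()                                             # gradients flow through the physics model
    opt.step()
```

Because the stand-in flow model is a closed-form differentiable map, each policy update reduces to a matrix solve plus backpropagation, which mirrors the efficiency argument the abstract makes for D-HELM's "simple matrix operations".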

Bibliographic Details
Main Authors: Ahmed Sayed, Khaled Al Jaafari, Xian Zhang, Hatem Zeineldin, Ahmed Al-Durra, Guibin Wang, Ehab Elsaadany
Format: Article
Language: English
Published: Elsevier, 2025-06-01
Series: International Journal of Electrical Power & Energy Systems
ISSN: 0142-0615
DOI: 10.1016/j.ijepes.2025.110621
Volume: 167, Article 110621
Subjects: Deep reinforcement learning; Operable power flow; Real-time economic control; Holomorphic embedding; Physics-driven policy gradient
Online Access: http://www.sciencedirect.com/science/article/pii/S0142061525001723
Author affiliations:
Ahmed Sayed (corresponding author): Electrical and Computer Engineering, Khalifa University, Abu Dhabi, 127788, United Arab Emirates; Faculty of Engineering, Cairo University, Giza, 12613, Egypt
Khaled Al Jaafari: Electrical and Computer Engineering, Khalifa University, Abu Dhabi, 127788, United Arab Emirates
Xian Zhang: Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, 518055, China
Hatem Zeineldin: Electrical and Computer Engineering, Khalifa University, Abu Dhabi, 127788, United Arab Emirates; Faculty of Engineering, Cairo University, Giza, 12613, Egypt
Ahmed Al-Durra: Electrical and Computer Engineering, Khalifa University, Abu Dhabi, 127788, United Arab Emirates
Guibin Wang (corresponding author): Mechatronics and Control Engineering, Shenzhen University, Shenzhen, 518060, China
Ehab Elsaadany: Electrical and Computer Engineering, Khalifa University, Abu Dhabi, 127788, United Arab Emirates