Entropy-driven multi-agent deep reinforcement learning for resilient distribution networks: coordinating MESS and microgrids
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-09-01 |
| Series: | International Journal of Electrical Power & Energy Systems |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S0142061525005162 |
| Summary: | In extreme disasters where severe main-grid failures lead to widespread power outages in distribution networks, rapid critical load restoration (CLR) becomes crucial for enhancing power supply reliability. Aiming to improve distribution network resilience, this paper proposes an entropy-driven multi-agent deep reinforcement learning (MADRL) framework coordinating mobile energy storage systems (MESS) and microgrid reconfiguration. First, with critical load restoration as the objective function, a coordinated optimization model for MESS dispatch and network reconfiguration is constructed that comprehensively considers the security constraints of both distribution networks and microgrids. Subsequently, the coordinated optimization problem is formulated as a Markov Decision Process (MDP). Then, a multi-agent deep Q-learning (MADQL) algorithm is developed to search for optimal strategies, featuring a topology-aware entropy-driven exploration (TAEE) mechanism to discover high-value actions and accelerate training convergence. Additionally, an action-masking technique is introduced to enforce operational safety by dynamically filtering constraint-violating actions. Finally, extensive numerical results validate the effectiveness of our proposed method. |
| ISSN: | 0142-0615 |
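
The action-masking idea mentioned in the summary — dynamically filtering out constraint-violating actions before the agent selects one — can be sketched in a few lines. The sketch below is illustrative only and assumes nothing about the paper's actual implementation; the function names, the NumPy-based Q-value representation, and the epsilon-greedy policy are hypothetical stand-ins for whatever the authors' MADQL agents use.

```python
# Illustrative sketch of action masking in Q-learning (not the paper's code):
# constraint-violating actions are suppressed before selection, so the agent
# can never execute them. All names and shapes here are assumptions.
import numpy as np

def masked_greedy_action(q_values: np.ndarray, valid_mask: np.ndarray) -> int:
    """Pick the highest-Q action among those the mask marks as feasible."""
    # Setting infeasible actions to -inf guarantees argmax ignores them.
    masked_q = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

def masked_epsilon_greedy(q_values: np.ndarray, valid_mask: np.ndarray,
                          epsilon: float, rng: np.random.Generator) -> int:
    """Epsilon-greedy policy restricted to the feasible action set."""
    valid_actions = np.flatnonzero(valid_mask)
    if rng.random() < epsilon:
        # Exploration also stays inside the feasible set.
        return int(rng.choice(valid_actions))
    return masked_greedy_action(q_values, valid_mask)

# Example: action 1 has the highest Q-value but violates a constraint
# (mask is False), so the feasible runner-up, action 2, is selected.
q = np.array([0.2, 0.9, 0.5])
mask = np.array([True, False, True])
chosen = masked_greedy_action(q, mask)  # → 2
```

The design point is that masking operates at action-selection time rather than through reward penalties, so operational safety holds during both training and deployment.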