Two-sided Energy Storage Cooperative Scheduling Method for Transmission and Distribution Network Based on Multi-agent Attention-deep Reinforcement Learning
| Main Authors: | CHEN Shi, ZHU Yujie, LIU Yihong, XU Liuchao, TANG Guodeng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Editorial Department of Journal of Sichuan University (Engineering Science Edition), 2025-01-01 |
| Series: | 工程科学与技术 (Advanced Engineering Sciences) |
| Subjects: | deep reinforcement learning; new energy generation; shared energy storage; multi-agent; optimal scheduling |
| Online Access: | http://jsuese.scu.edu.cn/thesisDetails#10.12454/j.jsuese.202400703 |
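The abstract below mentions that the alliance's additional income is allocated with an improved Shapley value; the paper's specific modification is not given in this record, so the following is a minimal sketch of the *classic* Shapley value for a small hypothetical coalition of one power supplier (`S`) and two storage providers (`E1`, `E2`). The characteristic function values are invented for illustration only.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Classic Shapley value: each player's average marginal contribution
    over all orderings in which the grand coalition can form."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n_fact = factorial(len(players))
    return {p: s / n_fact for p, s in phi.items()}

# Hypothetical characteristic function: daily cost savings (arbitrary units)
# achieved by each coalition. Storage providers alone save nothing; pairing
# with the supplier creates value, and the grand coalition creates the most.
savings = {
    frozenset(): 0, frozenset({"S"}): 0,
    frozenset({"E1"}): 0, frozenset({"E2"}): 0,
    frozenset({"S", "E1"}): 900, frozenset({"S", "E2"}): 600,
    frozenset({"E1", "E2"}): 0,
    frozenset({"S", "E1", "E2"}): 1200,
}
v = lambda c: savings[frozenset(c)]

alloc = shapley_values(["S", "E1", "E2"], v)
print(alloc)  # → {'S': 650.0, 'E1': 350.0, 'E2': 200.0}
```

The allocation is efficient (the shares sum to the grand-coalition value of 1200), which is the property that makes Shapley-style splits attractive as a cooperative incentive.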
| _version_ | 1849734944949010432 |
|---|---|
| author | CHEN Shi ZHU Yujie LIU Yihong XU Liuchao TANG Guodeng |
| author_facet | CHEN Shi ZHU Yujie LIU Yihong XU Liuchao TANG Guodeng |
| author_sort | CHEN Shi |
| collection | DOAJ |
| description | Objective: In the new power system, energy storage devices are constrained by geographical limitations and single dispatch modes, leading to low utilization efficiency, which severely restricts the effective integration of renewable energy. To address this, we integrate energy storage resources from the transmission grid's supply side and the distribution grid's user side; through coordination between the two sides, we aim to explore more efficient ways of utilizing energy storage and to enhance renewable energy integration capability. Methods: This paper proposes a coordinated dispatch method for energy storage on both the transmission and distribution networks based on multi-agent attention deep reinforcement learning. First, considering the renewable-energy integration needs of power suppliers and the profit-seeking needs of energy storage providers, a cooperative alliance is established between the two parties, and an improved Shapley value is used to allocate the additional income, providing a cooperative incentive. A coordinated dispatch model for two-sided energy storage is then constructed, and the multi-agent attention noisy twin delayed deep deterministic policy gradient algorithm (MAAN-TD3) is employed to solve it. An attention mechanism is introduced into the evaluation (critic) network to capture interdependencies among agents, enabling intent recognition and perception of cooperative behavior, thereby improving convergence; noise is added to expand the exploration space, enhancing training stability. Results and Discussion: On a modified IEEE transmission-distribution joint test system, simulations show that the multi-agent attention mechanism strengthens the focus among collaborators and thereby balances the interests of both parties. Compared with traditional methods, convergence speed and optimization performance are significantly improved: energy storage idle time is reduced by 7 hours per day, and renewable energy integration increases by 6.81 MWh per day. Conclusions: The experimental results show that the proposed method effectively reduces the total alliance cost, increases the economic benefits of all alliance participants, improves the utilization rate of idle energy storage devices, and promotes the integration of wind power and other renewable energy sources. |
| format | Article |
| id | doaj-art-77fea72042434235ac89a4928f4f68d9 |
| institution | DOAJ |
| issn | 2096-3246 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | Editorial Department of Journal of Sichuan University (Engineering Science Edition) |
| record_format | Article |
| series | 工程科学与技术 |
| title | Two-sided Energy Storage Cooperative Scheduling Method for Transmission and Distribution Network Based on Multi-agent Attention-deep Reinforcement Learning |
| topic | deep reinforcement learning new energy generation shared energy storage multi-agent optimal scheduling |
| url | http://jsuese.scu.edu.cn/thesisDetails#10.12454/j.jsuese.202400703 |
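The abstract states that MAAN-TD3 introduces an attention mechanism into the evaluation (critic) network so each agent can weight the others by relevance, and adds noise to widen exploration. The record does not give the architecture, so the sketch below is a generic, hedged illustration: scaled dot-product attention over per-agent state-action encodings (all shapes and weight matrices are invented), plus TD3-style clipped Gaussian action noise.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def agent_attention(E, Wq, Wk, Wv):
    """Scaled dot-product attention over per-agent encodings E (n_agents, d).
    Row i of the returned weights says how strongly agent i attends to each
    collaborator -- the 'cooperative focus' the abstract describes."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = softmax(scores, axis=1)       # (n_agents, n_agents), rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n_agents, d = 3, 8
E = rng.normal(size=(n_agents, d))          # hypothetical state-action encodings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
context, w = agent_attention(E, Wq, Wk, Wv)
print(w.sum(axis=1))                        # each row sums to 1

def noisy_action(a, sigma=0.2, clip=0.5, lo=-1.0, hi=1.0):
    """TD3-style exploration: clipped Gaussian noise, then clip to action bounds."""
    eps = np.clip(rng.normal(0.0, sigma, size=a.shape), -clip, clip)
    return np.clip(a + eps, lo, hi)
```

In a full critic, `context` would be concatenated with the agent's own encoding before the Q-value head; this sketch only shows the attention step itself.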
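The "twin delayed" part of MAAN-TD3 comes from standard TD3, whose two defining ingredients can be stated in a few lines. This is the textbook form, not the paper's specific implementation: the Bellman target bootstraps from the *minimum* of two critics to curb overestimation, and the actor is updated only every few critic updates.

```python
def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Clipped double-Q target: bootstrap from the smaller of the two target
    critics' estimates, which counteracts Q-value overestimation."""
    bootstrap = 0.0 if done else gamma * min(q1_next, q2_next)
    return reward + bootstrap

print(td3_target(1.0, 5.0, 4.0))  # ≈ 4.96 (uses the smaller estimate, 4.0)

# Delayed policy updates: refresh the actor (and target networks) only every
# `policy_delay` critic steps, so the critics stabilize first.
policy_delay = 2
actor_updates = [step for step in range(1, 9) if step % policy_delay == 0]
print(actor_updates)  # → [2, 4, 6, 8]
```

Both tricks are orthogonal to the attention mechanism, which is why the paper can layer attention onto the critic without changing this update scheme.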