A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning

Household loads are becoming dominant in virtual power plants (VPPs), yet their dispatch potential remains largely untapped because detailed user-level power management has been lacking. To address this issue, this paper proposes a novel two-layer user energy management strategy based on hierarchical-game (HG) multi-agent reinforcement learning. First, a two-layer optimization framework is established: the upper layer coordinates scheduling and benefit allocation among the various stakeholders, while the lower layer performs intelligent decision-making for individual users. Second, the mathematical model of the framework is formulated, in which a detailed household power management model is introduced in the lower layer and the predicted power demands it generates replace the conventional aggregate load model in the upper layer. As a result, the energy consumption behavior of household users can be described precisely in the scheduling scheme. Furthermore, an HG-multi-agent reinforcement learning method is applied to accelerate the game-solving process. Case study results indicate that the proposed method reduces user costs and increases VPP profit.

Bibliographic Details
Main Authors: Sen Tian, Qian Xiao, Tianxiang Li, Zibo Wang, Ji Qiao, Hong Zhu, Wenlu Ji
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Applied Sciences
Subjects: hierarchical game; virtual power plant; multi-agent reinforcement learning; optimized scheduling; demand response; energy management strategy
Online Access: https://www.mdpi.com/2076-3417/15/12/6713
_version_ 1849433250874785792
author Sen Tian
Qian Xiao
Tianxiang Li
Zibo Wang
Ji Qiao
Hong Zhu
Wenlu Ji
author_facet Sen Tian
Qian Xiao
Tianxiang Li
Zibo Wang
Ji Qiao
Hong Zhu
Wenlu Ji
author_sort Sen Tian
collection DOAJ
description Household loads are becoming dominant in virtual power plants (VPPs), yet their dispatch potential remains largely untapped because detailed user-level power management has been lacking. To address this issue, this paper proposes a novel two-layer user energy management strategy based on hierarchical-game (HG) multi-agent reinforcement learning. First, a two-layer optimization framework is established: the upper layer coordinates scheduling and benefit allocation among the various stakeholders, while the lower layer performs intelligent decision-making for individual users. Second, the mathematical model of the framework is formulated, in which a detailed household power management model is introduced in the lower layer and the predicted power demands it generates replace the conventional aggregate load model in the upper layer. As a result, the energy consumption behavior of household users can be described precisely in the scheduling scheme. Furthermore, an HG-multi-agent reinforcement learning method is applied to accelerate the game-solving process. Case study results indicate that the proposed method reduces user costs and increases VPP profit.
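As an illustration of the two-layer interaction described in the abstract, the following is a minimal, hypothetical Python sketch, not the authors' implementation: an upper-layer VPP operator announces internal prices and aggregates the households' predicted demands, while lower-layer household agents adjust one shiftable load with a simple epsilon-greedy best response. All names (HouseholdAgent, VPPOperator, run_hierarchical_game), the price-update rule, and the tariff values are illustrative assumptions rather than the paper's HG-multi-agent reinforcement learning algorithm.

```python
"""Minimal, hypothetical sketch of a two-layer VPP/household coordination loop.

Illustrative only: upper layer = VPP operator setting internal prices,
lower layer = household agents adapting a shiftable load. All class and
function names, prices, and the update rules are assumptions.
"""

import random

HOURS = 24
# Assumed time-of-use grid tariff ($/kWh): higher during the evening peak.
GRID_PRICE = [0.30 if 17 <= h <= 21 else 0.12 for h in range(HOURS)]


class HouseholdAgent:
    """Lower layer: one household with a fixed base load and one shiftable task."""

    def __init__(self, base_kw, task_kw, task_hours, epsilon=0.2):
        self.base_kw = base_kw        # inflexible load per hour (kW)
        self.task_kw = task_kw        # power of the shiftable appliance (kW)
        self.task_hours = task_hours  # consecutive hours the task must run
        self.start = 0                # current scheduled start hour (the "policy")
        self.epsilon = epsilon        # exploration rate

    def demand(self, start):
        """Hourly demand profile if the shiftable task starts at `start`."""
        d = [self.base_kw] * HOURS
        for h in range(start, start + self.task_hours):
            d[h % HOURS] += self.task_kw
        return d

    def cost(self, start, price):
        return sum(p * q for p, q in zip(price, self.demand(start)))

    def respond(self, price):
        """Epsilon-greedy best response to the announced internal price."""
        if random.random() < self.epsilon:
            candidate = random.randrange(HOURS)
        else:
            candidate = min(range(HOURS), key=lambda s: self.cost(s, price))
        if self.cost(candidate, price) <= self.cost(self.start, price):
            self.start = candidate
        # The returned profile plays the role of the "predicted power demand"
        # passed up to the upper layer instead of an aggregate load model.
        return self.demand(self.start)


class VPPOperator:
    """Upper layer: announces internal prices and aggregates household demands."""

    def __init__(self, step=0.02):
        self.price = [0.20] * HOURS  # internal tariff, capped by the grid price
        self.step = step

    def update_price(self, total_demand):
        """Toy rule: nudge prices up in congested hours, down elsewhere."""
        avg = sum(total_demand) / HOURS
        for h in range(HOURS):
            delta = self.step if total_demand[h] > avg else -self.step
            self.price[h] = min(GRID_PRICE[h], max(0.05, self.price[h] + delta))


def run_hierarchical_game(agents, operator, rounds=50):
    """Iterate the leader-follower loop until the schedule settles."""
    for _ in range(rounds):
        demands = [a.respond(operator.price) for a in agents]          # lower layer
        total = [sum(d[h] for d in demands) for h in range(HOURS)]
        operator.update_price(total)                                   # upper layer
    return operator.price, total


if __name__ == "__main__":
    households = [HouseholdAgent(base_kw=0.5, task_kw=2.0, task_hours=2) for _ in range(5)]
    price, total = run_hierarchical_game(households, VPPOperator())
    print("final internal price:", [round(p, 2) for p in price])
    print("aggregate demand:   ", [round(q, 1) for q in total])
```

In the paper's setting the lower-layer decisions would be learned with multi-agent reinforcement learning rather than this enumerated best response; the loop structure above only mirrors the leader-follower (hierarchical game) coordination the abstract describes.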
format Article
id doaj-art-c889bce9a4c243b885ccb9d07c4899b8
institution Kabale University
issn 2076-3417
language English
publishDate 2025-06-01
publisher MDPI AG
record_format Article
series Applied Sciences
spelling doaj-art-c889bce9a4c243b885ccb9d07c4899b8 (2025-08-20T03:27:07Z)
Language: eng
Publisher: MDPI AG
Series: Applied Sciences, ISSN 2076-3417
Published: 2025-06-01, Volume 15, Issue 12, Article 6713
DOI: 10.3390/app15126713
Title: A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
Sen Tian: State Key Laboratory of Intelligent Power Distribution Equipment and System, Tianjin University, Tianjin 300072, China
Qian Xiao: State Key Laboratory of Intelligent Power Distribution Equipment and System, Tianjin University, Tianjin 300072, China
Tianxiang Li: State Key Laboratory of Intelligent Power Distribution Equipment and System, Tianjin University, Tianjin 300072, China
Zibo Wang: China Electric Power Research Institute, Beijing 100192, China
Ji Qiao: China Electric Power Research Institute, Beijing 100192, China
Hong Zhu: Nanjing Power Supply Company, State Grid Jiangsu Electric Power Co., Nanjing 210019, China
Wenlu Ji: Nanjing Power Supply Company, State Grid Jiangsu Electric Power Co., Nanjing 210019, China
Online Access: https://www.mdpi.com/2076-3417/15/12/6713
Keywords: hierarchical game; virtual power plant; multi-agent reinforcement learning; optimized scheduling; demand response; energy management strategy
spellingShingle Sen Tian
Qian Xiao
Tianxiang Li
Zibo Wang
Ji Qiao
Hong Zhu
Wenlu Ji
A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
Applied Sciences
hierarchical game
virtual power plant
multi-agent reinforcement learning
optimized scheduling
demand response
energy management strategy
title A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
title_full A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
title_fullStr A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
title_full_unstemmed A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
title_short A Two-Layer User Energy Management Strategy for Virtual Power Plants Based on HG-Multi-Agent Reinforcement Learning
title_sort two layer user energy management strategy for virtual power plants based on hg multi agent reinforcement learning
topic hierarchical game
virtual power plant
multi-agent reinforcement learning
optimized scheduling
demand response
energy management strategy
url https://www.mdpi.com/2076-3417/15/12/6713
work_keys_str_mv AT sentian atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT qianxiao atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT tianxiangli atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT zibowang atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT jiqiao atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT hongzhu atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT wenluji atwolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT sentian twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT qianxiao twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT tianxiangli twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT zibowang twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT jiqiao twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT hongzhu twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning
AT wenluji twolayeruserenergymanagementstrategyforvirtualpowerplantsbasedonhgmultiagentreinforcementlearning