Residential Energy Management Method Based on the Proposed A3C-FER
Deep reinforcement learning has been widely applied to residential energy management and shows considerable promise for improving energy efficiency and reducing energy consumption. However, some existing methods still exploit their data inadequately, which results in suboptimal policies. This study proposes a novel reinforcement learning method for residential energy management systems that fuses the asynchronous advantage actor-critic (A3C) architecture with a familiarity-based experience replay (FER) mechanism, aiming to markedly improve learning efficiency and control performance. Numerical comparisons were conducted to verify the method's effectiveness. Experimental results across diverse cases confirm that the proposed algorithm achieves optimal energy scheduling for the residential sector. The proposed method also reduces grid interaction costs by 27.03% and 16.38% relative to the other two scenarios considered. Compared with the Proximal Policy Optimization (PPO) and Deep Q-Network (DQN) algorithms, it improves the average post-convergence reward by 38.48% and 47.17%, respectively, and shortens training time by 81.19% in a multi-threaded computational environment.
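The record's only technical detail is the fusion of A3C with a familiarity-based experience replay (FER) mechanism aimed at better data exploitation. The paper's exact familiarity metric is not given here, so the sketch below is only one illustrative reading: it assumes familiarity is simply how many times a transition has been replayed, and it samples inversely to that count so under-exploited experience is preferred. The class name `FamiliarityReplayBuffer` and all parameters are hypothetical, not the paper's API.

```python
import random
from collections import deque


class FamiliarityReplayBuffer:
    """Illustrative replay buffer that down-weights 'familiar' transitions.

    Assumption (not from the paper): familiarity == replay count, and a
    transition is drawn with probability proportional to 1 / (1 + count),
    so rarely exploited data is revisited first.
    """

    def __init__(self, capacity: int = 10_000):
        # Each entry is a mutable [transition, replay_count] pair.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Fresh transitions start with zero familiarity.
        self.buffer.append([(state, action, reward, next_state, done), 0])

    def sample(self, batch_size: int):
        # Weight each entry inversely by how often it has been replayed.
        weights = [1.0 / (1.0 + count) for _, count in self.buffer]
        entries = random.choices(list(self.buffer), weights=weights, k=batch_size)
        for entry in entries:
            entry[1] += 1  # replaying a transition raises its familiarity
        return [transition for transition, _ in entries]


# Minimal usage: each asynchronous A3C worker would push transitions here
# and periodically sample a batch for an off-policy update.
buf = FamiliarityReplayBuffer(capacity=1000)
buf.push(state=[0.5], action=1, reward=-0.2, next_state=[0.6], done=False)
batch = buf.sample(batch_size=1)
```

Since vanilla A3C is on-policy and normally has no replay buffer at all, attaching one is the substance of the A3C-FER proposal; the inverse-count weighting above is just one plausible instantiation of "familiarity".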
Saved in: DOAJ
| Main Authors: | Jinjiang Zhang, Qiang Lin, Lu Wang, Orefo Victor Arinze, Zihan Hu, Yantai Huang |
|---|---|
| Author Affiliations: | Jinjiang Zhang, Qiang Lin, Lu Wang, Orefo Victor Arinze, and Yantai Huang: School of Automation and Electrical Engineering, Zhejiang University of Science and Technology, Hangzhou, China; Zihan Hu: Alibaba Group, Hangzhou, China |
| ORCID: | Zihan Hu: 0000-0003-3291-4010; Yantai Huang: 0000-0002-3451-4937 |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access, Vol. 13 (2025), pp. 12203–12214 |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2025.3529872 |
| Subjects: | Residential energy management system; deep reinforcement learning; asynchronous advantage actor-critic; experience replay; optimization control |
| Online Access: | https://ieeexplore.ieee.org/document/10843226/ |