Energy Management Strategy for Fuel Cell Vehicles Based on Deep Transfer Reinforcement Learning


Bibliographic Details
Main Authors: Ziye Wang, Ren He, Donghai Hu, Dagang Lu
Format: Article
Language: English
Published: MDPI AG 2025-04-01
Series: Energies
Online Access: https://www.mdpi.com/1996-1073/18/9/2192
Description
Summary: Deep reinforcement learning has been widely applied in energy management strategies (EMS) for fuel cell vehicles because of its excellent performance in complex environments. However, when driving conditions change, a deep reinforcement learning-based EMS must be retrained to adapt to the new data distribution, which is a time-consuming process. To address this limitation and enhance the generalization ability of the EMS, this paper proposes a deep transfer reinforcement learning framework. First, a DDPG algorithm combined with prioritized experience replay (PER) was designed as the research algorithm, and a PER–DDPG-based EMS (defined as the source domain) was trained on multiple driving cycles. Then, transfer learning was applied when training the EMS on a new driving cycle (defined as the target domain): the neural network parameters of the source-domain model were reused to initialize the target-domain model. The simulation results show that, compared with retraining the model from scratch without transfer learning, the EMS with transfer learning not only converges faster (a 59.09% improvement) but also adapts better when faced with new, more complex driving cycles.
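The parameter-reuse step described in the abstract can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation: the network shape, layer names, and the `transfer_init` helper are all assumptions, standing in for copying a trained source-domain actor's weights into a freshly created target-domain actor before fine-tuning on the new driving cycle.

```python
import numpy as np

def init_actor(rng, state_dim=4, hidden=8, action_dim=1):
    """Randomly initialize a tiny actor network's parameters
    (a stand-in for the PER-DDPG actor's state, shapes are illustrative)."""
    return {
        "W1": rng.standard_normal((state_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, action_dim)) * 0.1,
        "b2": np.zeros(action_dim),
    }

def transfer_init(source_params):
    """Initialize the target-domain model by copying the source-domain
    parameters, instead of starting from a random initialization."""
    return {name: w.copy() for name, w in source_params.items()}

rng = np.random.default_rng(0)
source = init_actor(rng)        # stands in for the trained source-domain EMS
target = transfer_init(source)  # target domain starts from source parameters

# The copied parameters match, but are independent arrays, so later
# fine-tuning on the new driving cycle does not alter the source model.
assert all(np.array_equal(source[k], target[k]) for k in source)
target["W1"] += 0.01
assert not np.array_equal(source["W1"], target["W1"])
```

Starting the target-domain training from these copied parameters, rather than a random restart, is what yields the faster convergence the abstract reports.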
ISSN: 1996-1073