A Custom Reinforcement Learning Environment for Hybrid Renewable Energy Systems: Design and Implementation

We present HybridEnergyEnv, an open-source, Gym-style simulation environment designed for reinforcement learning (RL) research in hybrid renewable energy systems (HRES) combining wind, solar, and battery storage. The environment incorporates realistic component models, including intermittent renewable generation profiles, a synthetic electricity price signal inversely correlated with renewable availability, and a detailed Battery Energy Storage System (BESS) model accounting for state-of-charge (SoC) dynamics, self-discharge, efficiency losses, thermal derating, and rainflow-based capacity degradation. To validate the framework, we evaluate three dispatch strategies implemented with algorithms available in the Stable-Baselines3 (SB3) library: Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Double Deep Q-Network (DDQN). Results show that DRL-based policies increase operational revenue by up to 10.05% and reduce curtailment by up to 84.60% compared to the no-storage baseline. Additionally, DDQN achieves the longest episode durations and highest rewards during training, indicating greater stability under strict curtailment constraints. We describe the environment architecture, component models, and API, demonstrating the potential of HybridEnergyEnv as a high-fidelity, extensible platform for the development of intelligent, degradation-aware dispatch strategies in modern power systems.
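For readers unfamiliar with the Gym-style interface the abstract refers to, the following is a minimal, illustrative sketch of an HRES dispatch environment with a simplified battery SoC model, a price signal inversely correlated with renewable availability, and a curtailment-based episode constraint. All class names, parameters, and synthetic profiles are assumptions for illustration only; this is not the published HybridEnergyEnv API, and it omits the paper's thermal derating and rainflow-based degradation models.

```python
# Illustrative sketch only: a toy Gym-style hybrid wind/solar/battery dispatch
# environment. Names and numbers are assumptions, not the paper's HybridEnergyEnv API.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SimpleHRESEnv(gym.Env):
    """Toy hybrid renewable dispatch environment with hourly time steps."""

    def __init__(self, episode_hours=24, capacity_mwh=10.0, grid_limit_mw=8.0):
        super().__init__()
        self.episode_hours = episode_hours
        self.capacity_mwh = capacity_mwh      # usable BESS energy capacity
        self.grid_limit_mw = grid_limit_mw    # export limit that drives curtailment
        self.curtail_limit_mw = 2.0           # toy stand-in for a strict curtailment constraint
        self.eta = 0.92                       # one-way charge/discharge efficiency
        self.self_discharge = 0.001           # fraction of SoC lost per hour
        # Observation: [hour fraction, renewable power (MW), price ($/MWh), SoC fraction]
        self.observation_space = spaces.Box(0.0, np.inf, shape=(4,), dtype=np.float32)
        # Discrete dispatch action: 0 = idle, 1 = charge from surplus, 2 = discharge to grid
        self.action_space = spaces.Discrete(3)

    def _profiles(self, rng):
        hours = np.arange(self.episode_hours)
        solar = 6.0 * np.clip(np.sin((hours - 6) / 12 * np.pi), 0.0, None)
        wind = np.clip(3.0 + 0.1 * rng.normal(0.0, 1.0, self.episode_hours).cumsum(), 0.0, None)
        renewable = solar + wind
        # Synthetic price inversely correlated with renewable availability
        price = np.clip(80.0 - 5.0 * renewable + rng.normal(0.0, 2.0, self.episode_hours), 5.0, None)
        return renewable, price

    def _obs(self):
        return np.array([self.t / self.episode_hours, self.renewable[self.t],
                         self.price[self.t], self.soc / self.capacity_mwh], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.renewable, self.price = self._profiles(self.np_random)
        self.t = 0
        self.soc = 0.5 * self.capacity_mwh
        return self._obs(), {}

    def step(self, action):
        gen, price = self.renewable[self.t], self.price[self.t]
        export = min(gen, self.grid_limit_mw)
        surplus = gen - export                      # generation the grid cannot absorb
        if action == 1:                             # charge the battery from surplus
            charged = min(surplus, self.capacity_mwh - self.soc)
            self.soc += charged * self.eta
            surplus -= charged
        elif action == 2:                           # discharge into remaining grid headroom
            discharged = min(self.soc * self.eta, self.grid_limit_mw - export)
            self.soc -= discharged / self.eta
            export += discharged
        self.soc *= 1.0 - self.self_discharge       # hourly self-discharge
        curtailed = surplus                         # energy neither exported nor stored
        reward = export * price - 10.0 * curtailed  # revenue minus a curtailment penalty
        self.t += 1
        # Episode ends at the horizon or when the curtailment constraint is violated.
        terminated = self.t >= self.episode_hours or curtailed > self.curtail_limit_mw
        obs = self._obs() if self.t < self.episode_hours else np.zeros(4, dtype=np.float32)
        return obs, float(reward), bool(terminated), False, {"curtailed_mwh": float(curtailed)}
```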

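The dispatch policies in the paper are trained with algorithms from Stable-Baselines3. A minimal training loop over the toy environment above might look like the following sketch; PPO is shown because it is one of the three evaluated algorithms, and the hyperparameters are placeholders rather than the authors' settings.

```python
# Minimal Stable-Baselines3 training sketch on the toy environment above.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

vec_env = make_vec_env(SimpleHRESEnv, n_envs=4)     # vectorize the toy environment
model = PPO("MlpPolicy", vec_env, learning_rate=3e-4, verbose=1)
model.learn(total_timesteps=100_000)

# Roll out the learned dispatch policy on a fresh environment instance.
env = SimpleHRESEnv()
obs, _ = env.reset(seed=0)
done, episode_return = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(int(action))
    episode_return += reward
    done = terminated or truncated
print(f"Episode return (revenue minus curtailment penalty): {episode_return:.1f}")
```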
Bibliographic Details
Main Authors: Dalton F. Guedes Filho, Marcelo A. Moret, Erick G. Sperandio Nascimento
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects: OpenAI gym environment; deep reinforcement learning; hybrid renewable energy; wind energy; solar energy; battery energy storage system
Online Access: https://ieeexplore.ieee.org/document/11097283/
Collection: DOAJ
Record ID: doaj-art-45fa2bb20efc498d83ce064a8b456f6e
Institution: Kabale University
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3593064
Citation: IEEE Access, vol. 13, pp. 133984-133993, 2025 (IEEE document 11097283)
Author Affiliation: Stricto Sensu Department, SENAI CIMATEC University, Salvador, Bahia, Brazil (all authors)
ORCID: Dalton F. Guedes Filho, https://orcid.org/0009-0007-1787-6606; Erick G. Sperandio Nascimento, https://orcid.org/0000-0003-2219-0290