Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning

Abstract Most utilities across the world already have demand response (DR) programs in place to incentivise consumers to reduce or shift their electricity consumption from peak periods to off‐peak hours usually in response to financial incentives. With the increasing electrification of vehicles, emerging technologies such as vehicle‐to‐grid (V2G) and vehicle‐to‐home (V2H) have the potential to offer a broad range of benefits and services to achieve more effective management of electricity demand. In this way, electric vehicles (EV) become distributed energy storage resources and can conceivably, in conjunction with other electricity storage solutions, contribute to DR and provide additional capacity to the grid when needed. Here, an effective DR approach for V2G and V2H energy management using Reinforcement Learning (RL) is proposed. Q‐learning, an RL strategy based on a reward mechanism, is used to make optimal decisions to charge or delay the charging of the EV battery pack and/or dispatch the stored electricity back to the grid without compromising the driving needs. Simulations are presented to demonstrate how the proposed DR strategy can effectively manage the charging/discharging schedule of the EV battery and how V2H and V2G can contribute to smooth the household load profile, minimise electricity bills and maximise revenue.
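
As a rough illustration of the Q-learning scheme the abstract describes, the sketch below trains a tabular Q-learning agent that decides each hour whether to charge, idle or discharge an EV battery against a time-of-use tariff. The hourly prices, the state discretisation (hour of day and battery state of charge), the reward weights and the driving-reserve penalty are all illustrative assumptions and do not come from the paper.

# Minimal tabular Q-learning sketch for EV charge/discharge scheduling (V2G/V2H).
# All prices, state buckets and reward weights below are illustrative assumptions,
# not the formulation used in the paper.
import random
from collections import defaultdict

ACTIONS = ["charge", "idle", "discharge"]                        # assumed decision set
PRICES = [0.10]*7 + [0.25]*4 + [0.15]*6 + [0.30]*4 + [0.10]*3    # hypothetical 24-hour tariff
SOC_LEVELS = 10                                                  # battery state of charge in 10 buckets
STEP_KWH = 4.0                                                   # energy moved per hourly action (assumed)

alpha, gamma, epsilon = 0.1, 0.95, 0.1                           # learning rate, discount, exploration
Q = defaultdict(float)                                           # Q[(hour, soc, action)]

def reward(hour, soc, action):
    """Hypothetical reward: pay the tariff to charge, earn it when discharging,
    and penalise running the pack near empty (driving needs at risk)."""
    price = PRICES[hour]
    if action == "charge":
        r = -price * STEP_KWH
    elif action == "discharge":
        r = +price * STEP_KWH
    else:
        r = 0.0
    if soc <= 1:
        r -= 5.0
    return r

def step(hour, soc, action):
    """Environment transition: advance one hour, move the SoC one bucket."""
    if action == "charge":
        soc = min(SOC_LEVELS - 1, soc + 1)
    elif action == "discharge":
        soc = max(0, soc - 1)
    return (hour + 1) % 24, soc

for episode in range(5000):                                      # one episode = one simulated day
    hour, soc = 0, SOC_LEVELS // 2
    for _ in range(24):
        if random.random() < epsilon:                            # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(hour, soc, a)])
        r = reward(hour, soc, action)
        next_hour, next_soc = step(hour, soc, action)
        best_next = max(Q[(next_hour, next_soc, a)] for a in ACTIONS)
        # Standard Q-learning update
        Q[(hour, soc, action)] += alpha * (r + gamma * best_next - Q[(hour, soc, action)])
        hour, soc = next_hour, next_soc

# Greedy policy after training: per-hour action at mid state of charge
for h in range(24):
    print(h, max(ACTIONS, key=lambda a: Q[(h, SOC_LEVELS // 2, a)]))

A tabular agent is enough here because the assumed state space is tiny; in this toy setting the greedy policy should broadly charge in the cheap bands and discharge in the expensive ones, mirroring the bill-minimisation and revenue behaviour the abstract describes, but the paper's actual state, action and reward definitions may differ.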

Bibliographic Details
Main Authors: Fayiz Alfaverh, Mouloud Denaï, Yichuang Sun
Format: Article
Language: English
Published: Wiley 2021-12-01
Series: IET Electrical Systems in Transportation
Subjects: energy storage, demand side management, power grids, secondary cells, energy management systems, power consumption
Online Access: https://doi.org/10.1049/els2.12030
_version_ 1832559661203587072
author Fayiz Alfaverh
Mouloud Denaï
Yichuang Sun
author_facet Fayiz Alfaverh
Mouloud Denaï
Yichuang Sun
author_sort Fayiz Alfaverh
collection DOAJ
description Abstract Most utilities across the world already have demand response (DR) programs in place to incentivise consumers to reduce or shift their electricity consumption from peak periods to off‐peak hours usually in response to financial incentives. With the increasing electrification of vehicles, emerging technologies such as vehicle‐to‐grid (V2G) and vehicle‐to‐home (V2H) have the potential to offer a broad range of benefits and services to achieve more effective management of electricity demand. In this way, electric vehicles (EV) become distributed energy storage resources and can conceivably, in conjunction with other electricity storage solutions, contribute to DR and provide additional capacity to the grid when needed. Here, an effective DR approach for V2G and V2H energy management using Reinforcement Learning (RL) is proposed. Q‐learning, an RL strategy based on a reward mechanism, is used to make optimal decisions to charge or delay the charging of the EV battery pack and/or dispatch the stored electricity back to the grid without compromising the driving needs. Simulations are presented to demonstrate how the proposed DR strategy can effectively manage the charging/discharging schedule of the EV battery and how V2H and V2G can contribute to smooth the household load profile, minimise electricity bills and maximise revenue.
format Article
id doaj-art-e0b00a492a6249038aab0e9e1bf73a3d
institution Kabale University
issn 2042-9738
2042-9746
language English
publishDate 2021-12-01
publisher Wiley
record_format Article
series IET Electrical Systems in Transportation
spelling doaj-art-e0b00a492a6249038aab0e9e1bf73a3d (2025-02-03T01:29:38Z)
eng; Wiley; IET Electrical Systems in Transportation; ISSN 2042-9738, 2042-9746; 2021-12-01; vol. 11, no. 4, pp. 348-361; doi:10.1049/els2.12030
Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
Fayiz Alfaverh, Mouloud Denaï, Yichuang Sun (all: School of Physics, Engineering and Computer Science, University of Hertfordshire, Hatfield, UK)
https://doi.org/10.1049/els2.12030
Keywords: energy storage; demand side management; power grids; secondary cells; energy management systems; power consumption
spellingShingle Fayiz Alfaverh
Mouloud Denaï
Yichuang Sun
Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
IET Electrical Systems in Transportation
energy storage
demand side management
power grids
secondary cells
energy management systems
power consumption
title Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
title_full Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
title_fullStr Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
title_full_unstemmed Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
title_short Electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
title_sort electrical vehicle grid integration for demand response in distribution networks using reinforcement learning
topic energy storage
demand side management
power grids
secondary cells
energy management systems
power consumption
url https://doi.org/10.1049/els2.12030
work_keys_str_mv AT fayizalfaverh electricalvehiclegridintegrationfordemandresponseindistributionnetworksusingreinforcementlearning
AT moulouddenai electricalvehiclegridintegrationfordemandresponseindistributionnetworksusingreinforcementlearning
AT yichuangsun electricalvehiclegridintegrationfordemandresponseindistributionnetworksusingreinforcementlearning