Voltage Control for Distribution Networks Based on Large Language Model-Assisted Deep Reinforcement Learning

Bibliographic Details
Main Authors: Limei Yan, Chongyang Cheng
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10979848/
Description
Summary: With the continuous integration of large-scale distributed energy resources into distribution networks, numerous challenges arise regarding security, stability, and economic performance, particularly voltage violations and increased network losses. Furthermore, existing deep reinforcement learning (DRL) methods often rely on extensive real-world operational data for agent training, yet the lack of diversity in the collected data can significantly limit the generalization ability of agents under varying operating conditions. To address these issues, this paper proposes a regional voltage optimization control strategy for distribution networks based on DRL assisted by a large language model (LLM). By integrating LLM technologies with DRL, the approach uses prompt engineering to guide the LLM in generating customized datasets for DRL agent training, enabling data augmentation that reduces dependence on real-world data while improving the generalizability of the agents. The proposed control strategy was validated on modified IEEE 33-bus and 123-bus distribution systems, and the experimental results show that it effectively mitigates voltage violations and reduces network losses while exhibiting strong robustness and generalization under various operating conditions.
ISSN: 2169-3536
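
The summary above describes the core idea: prompt engineering steers an LLM into producing synthetic operating scenarios that augment the training data of a DRL voltage-control agent. The record gives no implementation details, so the following Python sketch only illustrates what such a data-augmentation loop could look like; the prompt wording, the scenario schema, and the llm_generate() stub are assumptions for illustration, not the authors' code.

```python
"""Illustrative sketch (not the authors' method): LLM-assisted data
augmentation for training a DRL voltage-control agent."""

import json
import random


def build_prompt(n_scenarios: int, n_buses: int) -> str:
    """Prompt-engineering step: ask the LLM for diverse operating scenarios."""
    return (
        f"Generate {n_scenarios} operating scenarios for a {n_buses}-bus "
        "distribution network as a JSON list. Each scenario must contain "
        "'load_factor' (0.3-1.3 per-unit system load) and 'pv_factor' "
        "(0.0-1.0 per-unit PV output). Cover light-load/high-PV and "
        "heavy-load/low-PV extremes as well as typical midday conditions."
    )


def llm_generate(prompt: str) -> str:
    """Stand-in for a real LLM API call; it fabricates plausible JSON so the
    sketch runs end to end without any external service."""
    scenarios = [
        {"load_factor": round(random.uniform(0.3, 1.3), 2),
         "pv_factor": round(random.uniform(0.0, 1.0), 2)}
        for _ in range(8)
    ]
    return json.dumps(scenarios)


def augment_training_set(n_buses: int = 33) -> list:
    """Parse and sanity-check the LLM output before it reaches the DRL agent."""
    raw = llm_generate(build_prompt(n_scenarios=8, n_buses=n_buses))
    scenarios = json.loads(raw)
    return [s for s in scenarios
            if 0.3 <= s["load_factor"] <= 1.3 and 0.0 <= s["pv_factor"] <= 1.0]


if __name__ == "__main__":
    # Each accepted scenario would parameterise one training episode of the
    # voltage-control environment (e.g. a modified IEEE 33-bus feeder).
    for scenario in augment_training_set():
        print(scenario)
```

In a full pipeline, each generated scenario would set the load and PV injections of the simulated feeder for one training episode, so the agent sees a wider range of operating conditions than the collected field data alone would provide.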