Enhancing Power Allocation in DAS: A Hybrid Machine Learning and Reinforcement Learning Model


Bibliographic Details
Main Authors: S. Gnanasekar, K. C. Sriharipriya
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10926833/
Description
Summary: This paper presents a novel hybrid approach to optimize downlink power allocation in a Distributed Antenna System (DAS) with many Remote Access Units (RAUs) and User Equipment (UEs) randomly distributed in a single cell. The proposed method combines Machine Learning (ML) for predictive modeling with Multi-Agent Reinforcement Learning (MARL) for real-time coordination among RAUs. ML models utilize historical data to forecast optimal power levels, while MARL agents dynamically adjust power allocation based on real-time conditions, aiming to enhance spectral efficiency, minimize interference, and optimize energy efficiency. The hybrid approach achieves a mean Spectral Efficiency (SE) of 0.855 bits/s/Hz and a mean Energy Efficiency (EE) of 1.210 bits/Joule, significantly outperforming traditional optimization (mean SE: 0.700, mean EE: 1.00) and the k-NN algorithm (mean SE: 0.725, mean EE: 1.105). Unlike existing approaches, our method offers continuous learning and hierarchical control, adapting effectively to varying network dynamics. Simulation results demonstrate the hybrid approach's superiority in diverse scenarios, underlining its potential for practical implementation. The study concludes with insights into the synergistic benefits of integrating ML and MARL in DAS environments and suggests directions for future research.
ISSN:2169-3536
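
The summary describes ML-forecast baseline power levels that per-RAU agents then refine online. The minimal sketch below illustrates that two-stage idea only; the channel model, the uniform "ML" baseline, and the greedy per-RAU adjustment are illustrative assumptions standing in for the paper's actual ML and MARL components, and all parameter values are arbitrary:

```python
import math
import random

# Illustrative DAS downlink sketch (not the authors' implementation):
# R RAUs serve U UEs in one cell. A baseline power forecast (here, a
# uniform placeholder for the ML stage) is refined by one greedy
# adjustment per RAU, standing in for the MARL coordination step.

random.seed(0)
R, U = 4, 6        # number of RAUs and UEs (assumed values)
NOISE = 1e-3       # noise power, linear scale (assumed value)

# Placeholder path gains between each RAU r and UE u
gain = [[random.uniform(0.05, 1.0) for _ in range(U)] for _ in range(R)]
# Each UE is served by its strongest RAU
serving = [max(range(R), key=lambda r: gain[r][u]) for u in range(U)]

def mean_se(power):
    """Mean spectral efficiency (bits/s/Hz) over all UEs via log2(1+SINR)."""
    total = 0.0
    for u in range(U):
        s = serving[u]
        signal = power[s] * gain[s][u]
        interference = sum(power[r] * gain[r][u] for r in range(R) if r != s)
        total += math.log2(1.0 + signal / (interference + NOISE))
    return total / U

def energy_eff(power):
    """Sum rate per unit transmit power (bits/Joule, up to a bandwidth factor)."""
    return mean_se(power) * U / sum(power)

# Stage 1 ("ML" forecast stand-in): uniform baseline power per RAU
power = [1.0] * R
base_se = mean_se(power)

# Stage 2 (MARL stand-in): each RAU agent tries a few power scalings
# and keeps the one that improves the network-wide mean SE.
for r in range(R):
    best = power[r]
    for scale in (0.5, 0.8, 1.25, 2.0):
        trial = power[:r] + [power[r] * scale] + power[r + 1:]
        if mean_se(trial) > mean_se(power[:r] + [best] + power[r + 1:]):
            best = trial[r]
    power[r] = best

print(f"mean SE baseline: {base_se:.3f} bits/s/Hz")
print(f"mean SE adjusted: {mean_se(power):.3f} bits/s/Hz")
print(f"EE adjusted: {energy_eff(power):.3f}")
```

Because each agent only accepts a change that raises the network-wide mean SE, the adjusted allocation can never score below the baseline, mirroring (in a much cruder form) the improvement over static allocation that the paper reports.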