Deep Reinforcement Learning Method for Wireless Video Transmission Based on Large Deviations

Bibliographic Details
Main Authors: Yongxiao Xie, Shian Song
Format: Article
Language: English
Published: MDPI AG 2025-07-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/13/15/2434
Description
Summary: In scalable video transmission research, the video transmission process is commonly modeled as a Markov decision process, and deep reinforcement learning (DRL) methods are employed to optimize the wireless transmission of scalable video. An adaptive DRL algorithm can further address the energy shortage caused by the uncertainty of energy harvesting and accumulated storage, thereby reducing video interruptions and enhancing the user experience. To further optimize resources in wireless energy transmission and to balance exploration and exploitation in the DRL algorithm, this paper develops an adaptive DRL algorithm that extends classical DRL frameworks by integrating dropout during both training and prediction. Moreover, to address runs of consecutive negative rewards, which are often caused by incomplete training in the wireless video transmission DRL algorithm, the paper applies the Cramér large deviation principle to discriminate such runs: it identifies the optimal negative-reward frequency boundary and minimizes the probability of misjudging consecutive negative rewards. Finally, experimental validation is performed in a 2048-game environment that simulates wireless scalable video transmission conditions. The results demonstrate that the proposed adaptive DRL algorithm achieves faster convergence and higher cumulative rewards than classical DRL approaches.
ISSN: 2227-7390
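
The abstract describes integrating dropout into both training and prediction as the exploration mechanism of the adaptive DRL algorithm. The authors' implementation is not part of this record; the Python sketch below only illustrates the general idea (Monte Carlo dropout kept active at prediction time, so greedy action selection becomes stochastic). The class QNetwork, the function select_action, and all hyperparameters are illustrative assumptions, not the paper's code.

# Illustrative sketch (not the authors' code): Monte Carlo dropout as an
# exploration mechanism in a DQN-style agent. Keeping dropout active at
# prediction time makes the Q-value estimates stochastic, so a greedy
# choice over one sampled forward pass behaves like uncertainty-driven
# exploration, with no explicit epsilon schedule.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Dropout(p=p_drop),  # dropout used in training AND prediction
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Dropout(p=p_drop),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_action(q_net: QNetwork, state: torch.Tensor) -> int:
    """Greedy action over one stochastic forward pass."""
    q_net.train()  # deliberately NOT .eval(): dropout stays on at prediction
    with torch.no_grad():
        q_values = q_net(state.unsqueeze(0)).squeeze(0)
    return int(q_values.argmax().item())


# Example usage with a toy 8-dimensional state and 4 actions:
q_net = QNetwork(state_dim=8, num_actions=4)
action = select_action(q_net, torch.randn(8))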
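
The abstract also invokes the Cramér large deviation principle to set a negative-reward frequency boundary. The paper's exact formulation is not reproduced in this record; a minimal sketch, assuming the per-step event "reward is negative" is modeled as an i.i.d. Bernoulli variable with mean p, is:

% Sketch under stated assumptions (not the paper's derivation): let
% X_i = 1{r_i < 0} indicate a negative reward at step i, i.i.d. with mean p.
% Cramér's theorem bounds the probability that the observed negative-reward
% frequency over n steps exceeds a boundary a:
\[
  \Pr\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i \ge a\right) \le e^{-n I(a)},
  \qquad p < a < 1,
\]
\[
  I(a) = a \ln\frac{a}{p} + (1-a)\ln\frac{1-a}{1-p}.
\]
% I(a) is the Cramér rate function: the Legendre transform of the Bernoulli
% log-moment generating function, equal to the KL divergence between
% Bernoulli(a) and Bernoulli(p). Picking the smallest boundary a whose bound
% e^{-n I(a)} stays below a target error level keeps the probability of
% misjudging an ordinary training phase as a run of negative rewards
% exponentially small in the window length n.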