Deep Reinforcement Learning-Based Deployment Method for Emergency Communication Network


Bibliographic Details
Main Authors: Bo Huang, Yiwei Lu, Hao Ma, Changsheng Yin, Ruopeng Yang, Yongqi Shi, Yu Tao, Yongqi Wen, Yihao Zhong
Format: Article
Language: English
Published: MDPI AG, 2025-07-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/14/7961
Description
Summary: Emergency communication networks play a crucial role in disaster relief operations. Current automated deployment strategies based on rule-driven or heuristic algorithms struggle to adapt to the dynamic and heterogeneous network environments in disaster scenarios, while manual command deployment is constrained by personnel expertise and response time requirements, leading to suboptimal trade-offs between deployment efficiency and reliability. To address these challenges, this study proposes a novel deep reinforcement learning framework with a fully convolutional value network architecture, which achieves breakthroughs in multi-dimensional spatial decision-making through end-to-end feature extraction. This design effectively mitigates the "curse of dimensionality" inherent in traditional reinforcement learning methods for topology planning. Experimental results demonstrate that the proposed method effectively accomplishes the planning tasks of emergency communication hub elements, significantly improving deployment efficiency while maintaining robustness in complex environments.
ISSN:2076-3417
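
The record's summary does not include implementation details, but the core idea of a fully convolutional value network for spatial deployment can be illustrated in miniature: instead of enumerating every placement as a discrete action (which grows with the grid size, the "curse of dimensionality" the abstract mentions), convolutional layers map the terrain grid directly to one value estimate per cell, and the deployment site is chosen by argmax. The sketch below is a NumPy mock-up under assumed parameters (8x8 grid, 3x3 kernels, three layers); it is not the authors' architecture.

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2-D convolution of a single-channel map with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero-pad so output keeps x's shape
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def value_map(state, kernels):
    """Fully convolutional value head: hidden conv+ReLU layers, linear output.

    Because every layer is convolutional, the output is one value per grid
    cell, so the action space stays tied to the spatial layout rather than
    being flattened into a huge discrete set.
    """
    x = state.astype(float)
    for k in kernels[:-1]:
        x = np.maximum(conv2d(x, k), 0.0)  # ReLU
    return conv2d(x, kernels[-1])          # linear value per cell

rng = np.random.default_rng(0)
# Hypothetical 8x8 disaster-area grid: 1 = candidate site, 0 = blocked terrain.
grid = (rng.random((8, 8)) > 0.5).astype(float)
# Untrained random 3x3 kernels stand in for learned weights.
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(3)]

values = value_map(grid, kernels)
# Greedy deployment decision: place the communication hub at the top-value cell.
action = np.unravel_index(np.argmax(values), values.shape)
print(values.shape, action)
```

In a real training loop the kernels would be updated from reward signals (e.g. achieved coverage or link reliability), but the key structural point survives even in this toy: the network's output resolution follows the input grid, so larger deployment areas change the tensor shape, not the action-space design.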