Deep Reinforcement Learning-Based Joint Routing and Capacity Optimization in an Aerial and Terrestrial Hybrid Wireless Network
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10600704/ |
| Summary: | As airspace hosts an increasing number of low-altitude aircraft, spectrum sharing between aerial and terrestrial users emerges as a compelling way to improve spectrum utilization efficiency. In this paper, we consider a new Aerial and Terrestrial Hybrid Network (ATHN) comprising aerial vehicles (AVs), ground base stations (BSs), and terrestrial users (TUs). In this ATHN, AVs and BSs collaboratively form a multi-hop ad-hoc network with the objective of minimizing the average end-to-end (E2E) packet transmission delay, while the BSs and TUs form a terrestrial network aimed at maximizing the uplink and downlink sum capacity. Given the spectrum sharing between aerial and terrestrial users in ATHN, we formulate a joint routing and capacity optimization (JRCO) problem, a multi-stage combinatorial problem subject to the curse of dimensionality. To address it, we propose a Deep Reinforcement Learning (DRL) based algorithm. Specifically, a Dueling Double Deep Q-Network (D3QN) is constructed to learn an optimal policy through trial and error. Extensive simulation results demonstrate the efficacy of our proposed solution. |
|---|---|
| ISSN: | 2169-3536 |
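For context on the D3QN structure named in the summary: it combines a dueling network head, which splits Q-values into a state value and per-action advantages, with double Q-learning, where the online network selects the next action and the target network evaluates it. A minimal sketch of these two mechanisms (function names and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    # Subtracting the mean advantage makes V and A identifiable.
    return value + advantages - advantages.mean()

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN target: online net picks the action, target net scores it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))   # action selection (online net)
    return reward + gamma * q_target_next[a_star]  # evaluation (target net)
```

In a full D3QN agent these pieces sit inside the training loop: each gradient step regresses the online network's `dueling_q` output toward `double_dqn_target`, which mitigates the overestimation bias of vanilla DQN while the dueling head speeds up value learning across the large state space of the JRCO problem.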