A Centralized–Distributed Joint Routing Algorithm for LEO Satellite Constellations Based on Multi-Agent Reinforcement Learning
Designing routing algorithms for Low Earth Orbit (LEO) satellite networks poses a significant challenge due to their high dynamics, frequent link failures, and unevenly distributed traffic. Existing studies predominantly focus on shortest-path solutions, which compute minimum-delay paths using global topology information but often neglect the impact of traffic load on routing performance and struggle to adapt to rapid link-state variations. In this regard, we propose a Multi-Agent Reinforcement Learning-Based Joint Routing (MARL-JR) algorithm, which integrates centralized and distributed routing algorithms. MARL-JR combines the accuracy of centralized methods with the responsiveness of distributed approaches in handling dynamic disruptions. In MARL-JR, ground stations initialize Q-tables and upload them to satellites, reducing onboard computational overhead while enhancing routing performance. Compared to traditional centralized algorithms, MARL-JR achieves faster link-state awareness and adaptation; compared to distributed algorithms, it delivers superior initial performance due to optimized pre-training. Experimental results demonstrate that MARL-JR outperforms both Q-Routing (QR) and DR-BM algorithms in average delay, packet loss rate, and load-balancing efficiency.
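The abstract describes the mechanism only at a high level: Q-tables are pre-trained on the ground from global topology information, uploaded to the satellites, and then refined onboard through distributed, Q-Routing-style updates. The minimal sketch below illustrates that two-stage idea under stated assumptions; the function names, the use of networkx for the topology snapshot, and the classic Q-Routing update rule (in the style of Boyan and Littman) are illustrative choices, not the paper's actual MARL-JR formulation.

```python
# Illustrative sketch of the centralized-initialization + distributed-update idea
# from the abstract. Classic Q-Routing-style estimates (per-node tables of expected
# delivery delay); all names and the networkx dependency are assumptions, not the
# paper's actual MARL-JR implementation.
import networkx as nx

ALPHA = 0.5  # learning rate for the onboard (distributed) update


def centralized_init(topology: nx.Graph, delay: str = "delay"):
    """Ground-station step: seed each satellite's Q-table with shortest-path
    delay estimates computed from a global topology snapshot."""
    # q[node][dest][neighbor] -> estimated delay to deliver via that neighbor
    dist = dict(nx.all_pairs_dijkstra_path_length(topology, weight=delay))
    q = {}
    for node in topology.nodes:
        q[node] = {}
        for dest in topology.nodes:
            if dest == node:
                continue
            q[node][dest] = {
                nbr: topology[node][nbr][delay] + dist[nbr][dest]
                for nbr in topology.neighbors(node)
            }
    return q


def distributed_update(q, node, dest, nbr, link_delay, queue_delay):
    """Onboard step: after forwarding a packet bound for `dest` to neighbor `nbr`,
    refine the estimate using the neighbor's reported best remaining estimate."""
    best_from_nbr = 0.0 if nbr == dest else min(q[nbr][dest].values())
    target = link_delay + queue_delay + best_from_nbr
    q[node][dest][nbr] += ALPHA * (target - q[node][dest][nbr])


def next_hop(q, node, dest):
    """Greedy forwarding: choose the neighbor with the lowest estimated delay."""
    return min(q[node][dest], key=q[node][dest].get)
```

In this sketch, a ground station would run centralized_init on a connected topology snapshot whose edges carry a delay attribute and upload each q[node] sub-table to the corresponding satellite; distributed_update then runs onboard for every forwarded packet, so estimates can track link-state and queueing changes without waiting for a new upload.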
| Main Authors: | Licheng Xia, Baojun Lin, Shuai Zhao, Yanchun Zhao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Applied Sciences |
| Subjects: | satellite network; low earth orbit; reinforcement learning; distributed routing |
| Online Access: | https://www.mdpi.com/2076-3417/15/9/4664 |
| author | Licheng Xia; Baojun Lin; Shuai Zhao; Yanchun Zhao |
|---|---|
| author affiliations | Licheng Xia: School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China. Baojun Lin, Shuai Zhao, Yanchun Zhao: Innovation Academy for Microsatellites of CAS, Shanghai 201304, China |
| collection | DOAJ |
| institution | OA Journals |
| id | doaj-art-1f731b258a194c07b311b0cd9433f643 |
| format | Article |
| language | English |
| issn | 2076-3417 |
| publisher | MDPI AG |
| publishDate | 2025-04-01 |
| series | Applied Sciences |
| volume/issue/article | 15 (9), 4664 |
| doi | 10.3390/app15094664 |
| title | A Centralized–Distributed Joint Routing Algorithm for LEO Satellite Constellations Based on Multi-Agent Reinforcement Learning |
| topic | satellite network; low earth orbit; reinforcement learning; distributed routing |
| url | https://www.mdpi.com/2076-3417/15/9/4664 |