An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning
Abstract Coverage optimization stands as a foundational challenge in Wireless Sensor Networks (WSNs), exerting a critical influence on monitoring fidelity and holistic network efficacy. Constrained by the limited energy budgets of sensor nodes, the imperative to maximize network longevity while sust...
| Main Authors: | Peng Zhou, Mingqi Kan, Wei Chen, Yingchao Wang, Bingyu Cao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Subjects: | Coverage optimization; Wireless sensor networks; Deep reinforcement learning; High-density deployment |
| Online Access: | https://doi.org/10.1038/s41598-025-16031-3 |
| _version_ | 1849226440415903744 |
|---|---|
| author | Peng Zhou; Mingqi Kan; Wei Chen; Yingchao Wang; Bingyu Cao |
| author_facet | Peng Zhou; Mingqi Kan; Wei Chen; Yingchao Wang; Bingyu Cao |
| author_sort | Peng Zhou |
| collection | DOAJ |
| description | Abstract Coverage optimization stands as a foundational challenge in Wireless Sensor Networks (WSNs), exerting a critical influence on monitoring fidelity and holistic network efficacy. Constrained by the limited energy budgets of sensor nodes, the imperative to maximize network longevity while sustaining sufficient coverage has ascended to the forefront of research priorities. Traditional deployment methodologies frequently falter in complex topographies and dynamic operational environments, encountering difficulties in striking an optimal equilibrium between coverage quality and energy efficiency. To mitigate these inherent limitations, this paper introduces ACDRL (Adaptive Coverage-Aware Deployment based on Deep Reinforcement Learning)—a novel strategy that enables intelligent, self-optimizing node placement in WSNs through deep reinforcement learning paradigms. Our proposed framework establishes a sophisticated deep reinforcement learning architecture integrating a multi-objective reward mechanism and hierarchical state representation, which innovatively resolves the dual predicaments of coverage optimization and energy balancing in intricate scenarios. Extensive simulation results validate that ACDRL consistently outperforms state-of-the-art approaches by maintaining superior coverage ratios, significantly extending network operational lifespan, and demonstrating enhanced adaptability in high-density deployment scenarios. |
| format | Article |
| id | doaj-art-a0f3f00722704308a64a1015ef72ff3a |
| institution | Kabale University |
| issn | 2045-2322 |
| language | English |
| publishDate | 2025-08-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| spelling | doaj-art-a0f3f00722704308a64a1015ef72ff3a2025-08-24T11:20:37ZengNature PortfolioScientific Reports2045-23222025-08-0115111410.1038/s41598-025-16031-3An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learningPeng Zhou0Mingqi Kan1Wei Chen2Yingchao Wang3Bingyu Cao4School of Information Science and Engineering, Xinjiang College of Science & TechnologySchool of Information Science and Engineering, Xinjiang College of Science & TechnologySchool of Information Science and Engineering, Xinjiang College of Science & TechnologySchool of Information Science and Engineering, Xinjiang College of Science & TechnologySchool of Information Science and Engineering, Xinjiang College of Science & TechnologyAbstract Coverage optimization stands as a foundational challenge in Wireless Sensor Networks (WSNs), exerting a critical influence on monitoring fidelity and holistic network efficacy. Constrained by the limited energy budgets of sensor nodes, the imperative to maximize network longevity while sustaining sufficient coverage has ascended to the forefront of research priorities. Traditional deployment methodologies frequently falter in complex topographies and dynamic operational environments, encountering difficulties in striking an optimal equilibrium between coverage quality and energy efficiency. To mitigate these inherent limitations, this paper introduces ACDRL (Adaptive Coverage-Aware Deployment based on Deep Reinforcement Learning)—a novel strategy that enables intelligent, self-optimizing node placement in WSNs through deep reinforcement learning paradigms. Our proposed framework establishes a sophisticated deep reinforcement learning architecture integrating a multi-objective reward mechanism and hierarchical state representation, which innovatively resolves the dual predicaments of coverage optimization and energy balancing in intricate scenarios. 
Extensive simulation results validate that ACDRL consistently outperforms state-of-the-art approaches by maintaining superior coverage ratios, significantly extending network operational lifespan, and demonstrating enhanced adaptability in high-density deployment scenarios.https://doi.org/10.1038/s41598-025-16031-3Coverage optimizationWireless sensor networksDeep reinforcement learningHigh-density deployment |
| spellingShingle | Peng Zhou Mingqi Kan Wei Chen Yingchao Wang Bingyu Cao An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning Scientific Reports Coverage optimization Wireless sensor networks Deep reinforcement learning High-density deployment |
| title | An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning |
| title_full | An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning |
| title_fullStr | An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning |
| title_full_unstemmed | An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning |
| title_short | An adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning |
| title_sort | adaptive coverage method for dynamic wireless sensor network deployment using deep reinforcement learning |
| topic | Coverage optimization Wireless sensor networks Deep reinforcement learning High-density deployment |
| url | https://doi.org/10.1038/s41598-025-16031-3 |
| work_keys_str_mv | AT pengzhou anadaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT mingqikan anadaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT weichen anadaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT yingchaowang anadaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT bingyucao anadaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT pengzhou adaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT mingqikan adaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT weichen adaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT yingchaowang adaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning AT bingyucao adaptivecoveragemethodfordynamicwirelesssensornetworkdeploymentusingdeepreinforcementlearning |
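The abstract describes a multi-objective reward mechanism that jointly scores coverage quality and energy balance. The paper's actual reward formulation is not given in this record, so the following is only a minimal illustrative sketch of one common weighted-sum design: a binary-disk coverage ratio combined with an energy-balance term based on the spread of residual node energies. All function names, weights, and the sensing model here are assumptions, not the authors' ACDRL definitions.

```python
import numpy as np

def coverage_ratio(nodes, grid_points, sensing_radius):
    """Fraction of grid points within sensing_radius of at least one node
    (binary disk sensing model; an assumption, not necessarily ACDRL's model)."""
    # Pairwise distances: (num_points, num_nodes) matrix via broadcasting.
    d = np.linalg.norm(grid_points[:, None, :] - nodes[None, :, :], axis=-1)
    covered = (d <= sensing_radius).any(axis=1)
    return covered.mean()

def multi_objective_reward(nodes, grid_points, energies,
                           sensing_radius=1.0, w_cov=0.7, w_energy=0.3):
    """Illustrative weighted-sum reward: coverage ratio plus an
    energy-balance term (1 minus the coefficient of variation of
    residual node energies). Weights are arbitrary placeholders."""
    cov = coverage_ratio(nodes, grid_points, sensing_radius)
    balance = 1.0 - np.std(energies) / (np.mean(energies) + 1e-9)
    return w_cov * cov + w_energy * balance
```

In a deep-RL deployment loop, a scalar reward of this shape would be returned after each repositioning action; the relative weights control the trade-off between coverage quality and network lifetime that the abstract highlights.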