Improved double DQN with deep reinforcement learning for UAV indoor autonomous obstacle avoidance

Bibliographic Details
Main Authors: Ruiqi Yu, Qingdang Li, Jiewei Ji, Tingting Wu, Jian Mao, Shun Liu, Zhen Sun
Format: Article
Language: English
Published: Nature Portfolio 2025-08-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-02356-6
Description
Summary: To address the insufficient autonomous obstacle avoidance performance of UAVs in complex indoor environments, an improved Double DQN algorithm based on deep reinforcement learning is proposed. The algorithm enhances perception and learning capability by optimizing the network model and employs a dynamic exploration strategy that encourages exploration early in training and reduces it later, accelerating convergence and improving efficiency. Simulation experiments in two scenarios of differing complexity, conducted in an indoor simulation environment built with AirSim and UE4 (Unreal Engine 4), show that in the simpler scenario the average cumulative reward increased by 22.88%, the maximum reward by 101.56%, the average safe flight distance by 23.17%, and the maximum safe flight distance by 105.62%. In the more complex scenario, the average cumulative reward increased by 2.66%, the maximum reward by 88.77%, the average safe flight distance by 2.05%, and the maximum safe flight distance by 84.68%.
ISSN: 2045-2322
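
The summary above names two technical ingredients: the Double DQN update, which decouples greedy action selection from action evaluation to reduce overestimation, and a dynamic exploration schedule that explores heavily early in training and decays later. The following is a minimal, generic PyTorch sketch of those two ideas only; the function names, hyperparameter values, and decay shape are illustrative assumptions and do not reproduce the paper's actual network model or improved strategy.

    import torch

    def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
        """Double DQN target: the online network selects the greedy next action,
        the target network evaluates it."""
        with torch.no_grad():
            next_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # selection
            next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # evaluation
            return rewards + gamma * (1.0 - dones) * next_q

    def epsilon_by_step(step, eps_start=1.0, eps_end=0.05, decay_steps=50_000):
        """Dynamic exploration schedule (assumed linear here): high epsilon early
        to encourage exploration, decaying toward eps_end to speed convergence."""
        frac = min(step / decay_steps, 1.0)
        return eps_start + frac * (eps_end - eps_start)

In a training loop, an action would be taken greedily with probability 1 - epsilon_by_step(step) and at random otherwise, while double_dqn_targets supplies the regression targets for the temporal-difference loss.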