Agile DQN: adaptive deep recurrent attention reinforcement learning for autonomous UAV obstacle avoidance

Abstract: Unmanned Aerial Vehicle (UAV) obstacle avoidance in 3D environments demands sophisticated handling of high-dimensional inputs and effective state representations. Current Deep Reinforcement Learning (DRL) algorithms struggle to prioritize salient aspects of state representations and to manage extensive state and action spaces, particularly in partially observable environments. To address these challenges, this paper proposes Agile DQN (AG-DQN), a novel algorithm that dynamically focuses on key visual features and provides robust Q-value estimation to enhance learning. The AG-DQN architecture combines several components (a Glimpse Network, an LSTM Recurrent Network, an Emission Network, and a Q-Network) to dynamically and selectively process crucial visual features, optimizing decision-making without processing the entire state. AG-DQN's adaptive temporal attention strategy also adjusts to environmental changes, maintaining a balance between recent and past observations. Experimental results demonstrate AG-DQN's improved performance over existing DRL methods, highlighting its potential for advancing autonomous UAV navigation and robotics.
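
The abstract describes a recurrent attention pipeline: a Glimpse Network encodes a small image patch, an LSTM integrates glimpses over time, and an Emission Network and Q-Network read the recurrent state to pick the next glimpse location and action values. Below is a minimal PyTorch sketch of how such components might be wired together; the layer sizes, the multiplicative what/where fusion, and all module and parameter names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a recurrent attention Q-network in the spirit of the
# AG-DQN description above. All sizes and design details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlimpseNetwork(nn.Module):
    """Encodes a small image patch ('glimpse') together with its location."""
    def __init__(self, glimpse_size=16, hidden=128):
        super().__init__()
        self.patch_fc = nn.Linear(glimpse_size * glimpse_size, hidden)  # 'what'
        self.loc_fc = nn.Linear(2, hidden)  # 'where': (x, y) in [-1, 1]

    def forward(self, patch, loc):
        g = F.relu(self.patch_fc(patch.flatten(1)))
        l = F.relu(self.loc_fc(loc))
        return F.relu(g * l)  # multiplicative what/where fusion (an assumption)

class AGDQNSketch(nn.Module):
    """Glimpse -> LSTM state -> next glimpse location + Q-values."""
    def __init__(self, n_actions, hidden=128):
        super().__init__()
        self.glimpse = GlimpseNetwork(hidden=hidden)
        self.rnn = nn.LSTMCell(hidden, hidden)      # integrates glimpses over time
        self.emission = nn.Linear(hidden, 2)        # where to look next
        self.q_head = nn.Linear(hidden, n_actions)  # Q-value per action

    def forward(self, patch, loc, state):
        h, c = self.rnn(self.glimpse(patch, loc), state)
        next_loc = torch.tanh(self.emission(h))     # keep location in [-1, 1]
        return self.q_head(h), next_loc, (h, c)

# One attention step on dummy data: batch of 4 grayscale 16x16 glimpses.
model = AGDQNSketch(n_actions=6)
state = (torch.zeros(4, 128), torch.zeros(4, 128))
q, next_loc, state = model(torch.rand(4, 16, 16), torch.zeros(4, 2), state)

In this sketch the Emission Network chooses the next glimpse location while the Q-Network scores actions from the same recurrent state, so the agent can act without ever processing the full frame, matching the selective-processing idea in the abstract.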

Bibliographic Details
Main Authors: Fadi AlMahamid, Katarina Grolinger (Department of Electrical and Computer Engineering, Western University)
Format: Article
Language: English
Published: Nature Portfolio, 2025-05-01
Series: Scientific Reports
ISSN: 2045-2322
Subjects: Autonomous unmanned aerial vehicles; Deep reinforcement learning; Autonomous visual navigation; Attention models; Deep learning; Deep Q-networks
Online Access: https://doi.org/10.1038/s41598-025-03287-y