RGB-D camera and graph neural network-based SLAM for dynamic and low-texture environments

Bibliographic Details
Main Authors: Xiaoran Liu, Xiaowei Zhang, Xiaochen Yan, Peng Liu, Haksrun Lao
Format: Article
Language: English
Published: Nature Portfolio, 2025-08-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-12978-5
Description
Summary: Unmanned systems, such as UAVs, play critical roles in various sectors, including agriculture, logistics, and military applications. The operational effectiveness of unmanned systems in GPS-denied environments, particularly in complex terrains such as subterranean structures and narrow canyons, faces significant technical constraints due to positioning inaccuracies. To address these challenges, this study proposes an enhanced SLAM framework integrating RGB-D sensing with a novel Depth-aware Point-Line Attentional Graph Neural Network (DPLAGNN). The core contributions include: (1) a feature fusion mechanism combining geometric primitives with attention-weighted descriptors; (2) dynamic scene adaptation through spatiotemporal feature consistency verification; and (3) multi-modal optimization for low-texture environment mapping. Building upon the ORB-SLAM3 architecture, our implementation demonstrates improved robustness in feature matching and localization consistency across three benchmark datasets. Comparative evaluations reveal measurable enhancements in both positioning accuracy and mapping reliability under challenging environmental conditions.
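
The abstract gives no implementation details for the first contribution, but attention-weighted fusion of point and line descriptors can be illustrated with a minimal sketch. The PyTorch module below is a hypothetical reading of that idea, not the authors' code: the class name, feature dimensions, and cross-attention layout are all assumptions.

    import torch
    import torch.nn as nn

    class PointLineAttentionFusion(nn.Module):
        """Hypothetical sketch: point descriptors attend to line descriptors."""
        def __init__(self, dim=256, heads=4):
            super().__init__()
            # Cross-attention lets each point feature gather context from
            # line segments (geometric primitives) in the same frame.
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, point_desc, line_desc):
            # point_desc: (B, Np, dim) projected point descriptors
            # line_desc:  (B, Nl, dim) projected line-segment descriptors
            fused, _ = self.attn(point_desc, line_desc, line_desc)
            # Residual connection keeps the original point signal; the
            # attention output adds line context as a weighted correction.
            return self.norm(point_desc + fused)

    # Usage with placeholder features:
    fusion = PointLineAttentionFusion()
    pts = torch.randn(1, 500, 256)   # e.g., 500 point features per frame
    lines = torch.randn(1, 80, 256)  # e.g., 80 line features per frame
    out = fusion(pts, lines)         # (1, 500, 256) attention-weighted output

How the paper actually weights and fuses the two feature types within the graph neural network is specified only in the full text; this sketch shows just one plausible cross-attention arrangement.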
ISSN: 2045-2322