Improved Visual SLAM Algorithm Based on Dynamic Scenes

This work presents a novel RGB-D dynamic simultaneous localization and mapping (SLAM) method that improves the accuracy, stability, and efficiency of localization by relying on deep learning in dynamic environments, in contrast to traditional static scene-based visual SLAM methods. Based on the classic framework of traditional visual SLAM, we propose a method that replaces the traditional feature extraction method with a convolutional neural network approach, aiming to enhance the accuracy of feature extraction and localization, as well as to improve the algorithm's ability to capture and represent the characteristics of the entire scene. Subsequently, the semantic segmentation thread was utilized in a target detection network combined with geometric methods to identify potential dynamic areas in the image and generate masks for dynamic objects. Finally, the standard deviation of the depth information of potential dynamic points was calculated to identify true dynamic feature points, guaranteeing that only static feature points were used for pose estimation. We performed experiments on public datasets to validate the feasibility of the proposed algorithm. The experimental results indicate that the improved SLAM algorithm reduces absolute trajectory error (ATE) by approximately 97% compared to traditional static visual SLAM and by about 20% compared to traditional dynamic visual SLAM, while also cutting computation time by 68% compared to well-known dynamic visual SLAM, giving it clear advantages in both positioning accuracy and operational efficiency.
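
The core filtering step described in the abstract, computing the standard deviation of the depth readings of each potential dynamic point and keeping only depth-stable points for pose estimation, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, data layout, and threshold value below are illustrative assumptions.

# Hedged sketch (not the paper's code): classify potential dynamic feature
# points by the spread of their depth measurements, as the abstract describes.
# The threshold and data layout are assumptions, to be tuned per dataset.
import numpy as np

def split_dynamic_points(points_uv, depth_history, std_threshold=0.05):
    """Separate candidate feature points into static and dynamic sets.

    points_uv     : (N, 2) pixel coordinates of potential dynamic points,
                    e.g. features falling inside a detected object mask.
    depth_history : (N, T) depth samples (metres) for each point over the
                    last T frames, read from the RGB-D depth images.
    std_threshold : depth standard deviation above which a point is treated
                    as truly dynamic (assumed value).
    """
    depth_std = np.std(depth_history, axis=1)      # per-point depth spread
    dynamic_mask = depth_std > std_threshold       # large spread -> moving
    static_points = points_uv[~dynamic_mask]       # kept for pose estimation
    dynamic_points = points_uv[dynamic_mask]       # discarded from tracking
    return static_points, dynamic_points

# Example: three candidate points tracked over four frames.
pts = np.array([[120, 80], [300, 150], [50, 200]])
depths = np.array([[1.50, 1.51, 1.49, 1.50],      # stable depth  -> static
                   [2.10, 2.45, 1.90, 2.60],      # fluctuating   -> dynamic
                   [3.00, 3.01, 3.00, 2.99]])     # stable depth  -> static
static_pts, dynamic_pts = split_dynamic_points(pts, depths)
print("static:", static_pts.tolist(), "dynamic:", dynamic_pts.tolist())

In a full pipeline, the candidate points would come from the segmentation/detection thread's masks, and only the returned static points would be passed on to pose estimation.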

Bibliographic Details
Main Authors: Jinxing Niu, Ziqi Chen, Tao Zhang, Shiyu Zheng
Affiliation: School of Mechanical Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450011, China
Format: Article
Language: English
Published: MDPI AG, 2024-11-01
Series: Applied Sciences, Vol. 14, Iss. 22, Article 10727
DOI: 10.3390/app142210727
ISSN: 2076-3417
Collection: DOAJ
Subjects: visual SLAM; dynamic environment; feature extraction; object detection
Online Access: https://www.mdpi.com/2076-3417/14/22/10727