A Robust Approach to Vision-Based Terrain-Aided Localization

Terrain-aided navigation, which combines radar altitude with a digital terrain map (DTM), was developed before the era of the Global Positioning System to prevent error growth resulting from inertial navigation. Recently, cameras and substantial computational power have become ubiquitous in flying platforms, prompting interest in studying whether the radar altimeter can be replaced by a visual sensor. This paper presents a novel approach to vision-based terrain-aided localization by revisiting the correspondence and DTM (C-DTM) problem. We demonstrate that we can simplify the C-DTM problem by dividing it into a structure-from-motion (SFM) problem and then anchoring the solution to the terrain. The SFM problem can be solved using existing techniques such as feature detection, matching, and triangulation wrapped with a bundle adjustment algorithm. Anchoring is achieved by matching the point cloud to the terrain using ray-tracing and a variation of the iterative closest point method. One of the advantages of this two-step approach is that an innovative outlier filtering scheme can be included between the two stages to enhance overall robustness. The resulting algorithm consistently demonstrated high precision and statistical independence in the presence of initial errors across various simulations. The impact of different filtering methods was also studied, showing an improvement of 50% compared with the unfiltered case. The new algorithm has the potential to improve localization in real-world scenarios, making it a suitable candidate for pairing with an inertial navigation system and a Kalman filter to construct a comprehensive navigation system.
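
As an illustration of the anchoring idea described in the abstract, the following minimal Python sketch aligns a synthetic SFM point cloud to a terrain height model by estimating a three-dimensional translation offset with Gauss-Newton iterations. The terrain function, the point cloud, and the helper names (dtm_height, align_to_terrain) are illustrative assumptions; the paper's actual anchoring stage uses ray-tracing and a variation of the iterative closest point method, which is not reproduced here.

import numpy as np

def dtm_height(x, y):
    # Synthetic smooth terrain standing in for a gridded DTM lookup (hypothetical).
    return 80.0 * np.sin(0.003 * x) + 60.0 * np.cos(0.004 * y)

def align_to_terrain(points, t0=np.zeros(3), iters=30, eps=1.0):
    # Estimate a translation t so that the shifted SFM points lie on the terrain.
    # Residual per point: r_i = (z_i + tz) - h(x_i + tx, y_i + ty),
    # minimized by Gauss-Newton with finite-difference terrain slopes.
    t = np.asarray(t0, dtype=float).copy()
    for _ in range(iters):
        x = points[:, 0] + t[0]
        y = points[:, 1] + t[1]
        r = (points[:, 2] + t[2]) - dtm_height(x, y)
        hx = (dtm_height(x + eps, y) - dtm_height(x - eps, y)) / (2.0 * eps)
        hy = (dtm_height(x, y + eps) - dtm_height(x, y - eps)) / (2.0 * eps)
        J = np.column_stack([-hx, -hy, np.ones_like(r)])  # Jacobian of r w.r.t. t
        dt, *_ = np.linalg.lstsq(J, -r, rcond=None)
        t += dt
        if np.linalg.norm(dt) < 1e-6:
            break
    return t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 1000.0, size=(500, 2))
    true_t = np.array([40.0, -25.0, 10.0])  # injected localization error
    z = dtm_height(xy[:, 0] + true_t[0], xy[:, 1] + true_t[1]) - true_t[2]
    cloud = np.column_stack([xy, z + rng.normal(0.0, 0.5, size=500)])
    print("recovered offset:", align_to_terrain(cloud))

With the synthetic terrain used here, the loop recovers the injected offset approximately to within the noise level; a real implementation would interpolate a gridded DTM and include the outlier filtering stage described above.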

Bibliographic Details
Main Authors: Dan Navon, Ehud Rivlin, Hector Rotstein
Format: Article
Language: English
Published: Institute of Navigation 2025-02-01
Series: Navigation
Online Access:https://navi.ion.org/content/72/1/navi.683
_version_ 1850130292715552768
author Dan Navon
Ehud Rivlin
Hector Rotstein
author_facet Dan Navon
Ehud Rivlin
Hector Rotstein
author_sort Dan Navon
collection DOAJ
description Terrain-aided navigation, which combines radar altitude with a digital terrain map (DTM), was developed before the era of the Global Positioning System to prevent error growth resulting from inertial navigation. Recently, cameras and substantial computational power have become ubiquitous in flying platforms, prompting interest in studying whether the radar altimeter can be replaced by a visual sensor. This paper presents a novel approach to vision-based terrain-aided localization by revisiting the correspondence and DTM (C-DTM) problem. We demonstrate that we can simplify the C-DTM problem by dividing it into a structure-from-motion (SFM) problem and then anchoring the solution to the terrain. The SFM problem can be solved using existing techniques such as feature detection, matching, and triangulation wrapped with a bundle adjustment algorithm. Anchoring is achieved by matching the point cloud to the terrain using ray-tracing and a variation of the iterative closest point method. One of the advantages of this two-step approach is that an innovative outlier filtering scheme can be included between the two stages to enhance overall robustness. The resulting algorithm consistently demonstrated high precision and statistical independence in the presence of initial errors across various simulations. The impact of different filtering methods was also studied, showing an improvement of 50% compared with the unfiltered case. The new algorithm has the potential to improve localization in real-world scenarios, making it a suitable candidate for pairing with an inertial navigation system and a Kalman filter to construct a comprehensive navigation system.
format Article
id doaj-art-45bb855947de45e1b7bce8d9de9c7907
institution OA Journals
issn 2161-4296
language English
publishDate 2025-02-01
publisher Institute of Navigation
record_format Article
series Navigation
spelling doaj-art-45bb855947de45e1b7bce8d9de9c79072025-08-20T02:32:44ZengInstitute of NavigationNavigation2161-42962025-02-0172110.33012/navi.683navi.683A Robust Approach to Vision-Based Terrain-Aided LocalizationDan NavonEhud RivlinHector RotsteinTerrain-aided navigation, which combines radar altitude with a digital terrain map (DTM), was developed before the era of the Global Positioning System to prevent error growth resulting from inertial navigation. Recently, cameras and substantial computational power have become ubiquitous in flying platforms, prompting interest in studying whether the radar altimeter can be replaced by a visual sensor. This paper presents a novel approach to vision-based terrain-aided localization by revisiting the correspondence and DTM (C-DTM) problem. We demonstrate that we can simplify the C-DTM problem by dividing it into a structure-from-motion (SFM) problem and then anchoring the solution to the terrain. The SFM problem can be solved using existing techniques such as feature detection, matching, and triangulation wrapped with a bundle adjustment algorithm. Anchoring is achieved by matching the point cloud to the terrain using ray-tracing and a variation of the iterative closest point method. One of the advantages of this two-step approach is that an innovative outlier filtering scheme can be included between the two stages to enhance overall robustness. The resulting algorithm consistently demonstrated high precision and statistical independence in the presence of initial errors across various simulations. The impact of different filtering methods was also studied, showing an improvement of 50% compared with the unfiltered case. The new algorithm has the potential to improve localization in real-world scenarios, making it a suitable candidate for pairing with an inertial navigation system and a Kalman filter to construct a comprehensive navigation system.https://navi.ion.org/content/72/1/navi.683
spellingShingle Dan Navon
Ehud Rivlin
Hector Rotstein
A Robust Approach to Vision-Based Terrain-Aided Localization
Navigation
title A Robust Approach to Vision-Based Terrain-Aided Localization
title_full A Robust Approach to Vision-Based Terrain-Aided Localization
title_fullStr A Robust Approach to Vision-Based Terrain-Aided Localization
title_full_unstemmed A Robust Approach to Vision-Based Terrain-Aided Localization
title_short A Robust Approach to Vision-Based Terrain-Aided Localization
title_sort robust approach to vision based terrain aided localization
url https://navi.ion.org/content/72/1/navi.683
work_keys_str_mv AT dannavon arobustapproachtovisionbasedterrainaidedlocalization
AT ehudrivlin arobustapproachtovisionbasedterrainaidedlocalization
AT hectorrotstein arobustapproachtovisionbasedterrainaidedlocalization
AT dannavon robustapproachtovisionbasedterrainaidedlocalization
AT ehudrivlin robustapproachtovisionbasedterrainaidedlocalization
AT hectorrotstein robustapproachtovisionbasedterrainaidedlocalization