PTFNet: Robotic-Relevant, Single-View Obstacle Footprint Estimation From Sparse and Incomplete Point Clouds

Bibliographic Details
Main Authors: Konrad P. Cop, Tomasz P. Trzcinski
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10971182/
Description
Summary: For robots to navigate successfully, they need to correctly estimate the traversability of their surroundings. To do so, an orthogonal projection of the spatial obstacles perceived by the robot’s sensors is usually used. Because the sensors only see the surfaces closest to them, traversability is difficult to judge without guessing the obstacle’s shape. In this work, we introduce a novel approach that estimates an obstacle’s 2D footprint directly from incomplete 3D data in the form of a point cloud. Unlike existing point cloud completion methods, we formulate the problem so that our approach does not require reconstructing the entire 3D object and projecting it afterwards. Instead, it renders a 2D representation directly from segmented sensor scans, even when the available points are very sparse. At its core, we propose a lightweight, multi-modal autoencoder that takes a voxelized, incomplete point cloud as input and outputs an estimated footprint that is directly applicable to the occupancy grid. In the absence of comparable methods, we validate the approach on a real dataset named UR, collected specifically for this publication, and demonstrate the method’s applicability in real-life scenarios. The system achieves good performance even on point clouds with as few as 30 points. Additionally, we use an open-source synthetic dataset to compare our method indirectly with available point cloud completion algorithms by projecting their outputs onto a 2D plane. Compared to other pipelines, our method consumes fewer computational resources in complete robotic pipeline tests and achieves satisfactory accuracy on various test objects. We provide the collected UR data to the community in the Repository.
ISSN: 2169-3536
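
The abstract describes a voxelized partial point cloud being mapped by a lightweight autoencoder directly to a 2D footprint for an occupancy grid. Below is a minimal PyTorch sketch of that input/output formulation, not the authors’ PTFNet: the 32³ voxel resolution, layer sizes, latent dimension, and class name are all assumptions made for illustration.

```python
# Minimal sketch: 3D occupancy voxel grid -> 2D footprint logits.
# This is an illustrative stand-in, NOT the published PTFNet architecture.
import torch
import torch.nn as nn

class FootprintAutoencoder(nn.Module):
    def __init__(self, voxel_res: int = 32, latent_dim: int = 128):
        super().__init__()
        # Encoder: (B, 1, R, R, R) occupancy grid -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # R/2
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # R/4
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # R/8
            nn.Flatten(),
            nn.Linear(64 * (voxel_res // 8) ** 3, latent_dim),
        )
        # Decoder: latent vector -> (B, 1, R, R) footprint logits.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * (voxel_res // 8) ** 2),
            nn.Unflatten(1, (64, voxel_res // 8, voxel_res // 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # R/4
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # R/2
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # R
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: voxelized, incomplete single-view scan of one obstacle.
        # Returns per-cell footprint logits; apply sigmoid and a threshold
        # before writing the footprint into an occupancy grid.
        return self.decoder(self.encoder(voxels))

# Usage: a toy batch of very sparse voxel grids -> footprint probabilities.
grid = (torch.rand(2, 1, 32, 32, 32) > 0.99).float()
probs = torch.sigmoid(FootprintAutoencoder()(grid))  # shape (2, 1, 32, 32)
```

The point of the sketch is the formulation the abstract emphasizes: the network decodes a top-down footprint directly, rather than completing the full 3D shape and projecting it in a separate step.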