Point rotation invariant features and attention fusion network for point cloud registration of 3D shapes

Bibliographic Details
Main Authors: Zeyang Liu, Zhiguo Lu, Yancong Shan
Format: Article
Language: English
Published: Nature Portfolio 2025-04-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-99240-0
Description
Summary: Abstract Point cloud registration of 3D shapes remains a formidable challenge in computer vision and autonomous driving. This paper introduces a novel learning-based registration method, named the Point Rotation Invariant Feature and Attention Fusion Network (PRIF), specifically tailored for point cloud registration tasks. A rapid and straightforward approach for extracting rotation-invariant information is put forward. Leveraging the strengths of the PointNet++ structure and the attention mechanism, a new feature extraction module for point clouds is devised, ensuring efficient feature extraction and matching. Furthermore, a novel feature fusion module is proposed for point cloud registration, facilitating the acquisition of high-quality point-pair correspondences. The network directly ingests raw point clouds and exhibits robust and precise registration capabilities for 3D shapes. The model is trained on the ModelNet40 dataset (Wu et al. in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1912–1920, 2015) and evaluated on both ModelNet40 and ShapeNet (Chang et al. in ShapeNet: an information-rich 3D model repository, 2015. arXiv:1512.03012), demonstrating its generalization capability. Experimental results show that the method achieves high registration accuracy, and visualization experiments further illustrate the strong performance of the network on point cloud registration tasks.
ISSN: 2045-2322
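
Note: the abstract mentions extracting rotation-invariant information from raw point coordinates but does not describe how this is done. The sketch below (plain NumPy, written purely for illustration; the function name rotation_invariant_features, the neighborhood size k, and the particular feature choices are assumptions, not PRIF's actual design) shows one common way to obtain such features, using only distances and angles, which are unchanged by any rigid rotation.

    # Illustrative sketch only: per-point rotation-invariant descriptors built
    # from distances and angles. This is NOT the paper's PRIF construction.
    import numpy as np

    def rotation_invariant_features(points: np.ndarray, k: int = 16) -> np.ndarray:
        """points: (N, 3) array. Returns (N, 3) features per point:
        [distance to global centroid, mean kNN distance,
         mean angle between centroid direction and neighbor directions]."""
        n = points.shape[0]
        centroid = points.mean(axis=0)                      # global centroid
        to_centroid = centroid - points                     # (N, 3)
        d_centroid = np.linalg.norm(to_centroid, axis=1)    # distance is rotation-invariant

        # Brute-force kNN via pairwise distances (fine for small N).
        diff = points[:, None, :] - points[None, :, :]      # (N, N, 3)
        dist = np.linalg.norm(diff, axis=-1)                # (N, N)
        knn_idx = np.argsort(dist, axis=1)[:, 1:k + 1]      # skip self at index 0

        feats = np.zeros((n, 3))
        for i in range(n):
            nbr_vec = points[knn_idx[i]] - points[i]        # vectors to neighbors
            nbr_d = np.linalg.norm(nbr_vec, axis=1)
            # Angle between the "toward centroid" and "toward neighbor" directions.
            cosang = (nbr_vec @ to_centroid[i]) / (nbr_d * d_centroid[i] + 1e-9)
            feats[i] = [d_centroid[i], nbr_d.mean(),
                        np.arccos(np.clip(cosang, -1.0, 1.0)).mean()]
        return feats

    # Quick check: the features are identical (up to float error) after a random rotation.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(128, 3))
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))            # random orthogonal matrix
    r = q * np.sign(np.linalg.det(q))                       # force a proper rotation (det = +1)
    assert np.allclose(rotation_invariant_features(pts),
                       rotation_invariant_features(pts @ r.T), atol=1e-5)

Because every quantity used (distances to the centroid, distances to neighbors, angles between directions) is preserved by rigid rotation, the resulting descriptors stay the same however the input cloud is oriented, which is the property the abstract refers to as rotation-invariant information.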