Point rotation invariant features and attention fusion network for point cloud registration of 3D shapes

Bibliographic Details
Main Authors: Zeyang Liu, Zhiguo Lu, Yancong Shan
Format: Article
Language: English
Published: Nature Portfolio, 2025-04-01
Series: Scientific Reports
Subjects: Neural network; Point cloud registration; Feature extraction
Online Access: https://doi.org/10.1038/s41598-025-99240-0
collection DOAJ
description Abstract Point cloud registration of 3D shapes remains a formidable challenge in computer vision and autonomous driving. This paper introduces a novel learning-based registration method, termed Point Rotation Invariant Feature and Attention Fusion Network (PRIF), tailored for point cloud registration tasks. A fast and straightforward approach for extracting rotation-invariant information is proposed. Leveraging the strengths of the PointNet++ structure and the attention mechanism, a new feature extraction module for point clouds is devised, ensuring efficient feature extraction and matching. Furthermore, a novel feature fusion module is proposed for point cloud registration, facilitating the acquisition of high-quality point-pair matching relationships. The network directly ingests raw point clouds and exhibits robust and precise registration of 3D shapes. The model is trained on the ModelNet40 dataset (Wu et al., in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1912–1920, 2015) and evaluated on both ModelNet40 and ShapeNet (Chang et al., ShapeNet: an information-rich 3D model repository, arXiv:1512.03012, 2015), demonstrating its generalization capability. Experimental results show that the method achieves strong registration accuracy, and visualization experiments further illustrate its performance on point cloud registration tasks.
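
As a companion to the abstract above, the sketch below illustrates two generic building blocks it alludes to, not the PRIF network itself: a simple per-point descriptor built from distances and an angle (quantities that are unchanged by any rigid rotation), and the standard SVD-based (Kabsch) solution that recovers a rigid transform from weighted point correspondences. The function names rotation_invariant_features and kabsch, the choice of descriptor terms, and the parameter k are illustrative assumptions; the paper's actual modules (the PointNet++-and-attention feature extractor and the feature fusion module) are not reproduced here.

# Minimal illustrative sketch (NumPy only); not the PRIF implementation.
import numpy as np

def rotation_invariant_features(points, k=16):
    """Per-point descriptor from distances and one angle.

    Every term depends only on relative geometry (distance to the centroid,
    k nearest-neighbour distances, and the angle between the centroid offset
    and the nearest-neighbour offset), so the descriptor is unchanged when
    the whole cloud is rotated. Hypothetical design, for illustration only.
    """
    centroid = points.mean(axis=0)
    d2c = np.linalg.norm(points - centroid, axis=1, keepdims=True)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    order = np.argsort(dists, axis=1)
    nn_d = np.take_along_axis(dists, order[:, 1:k + 1], axis=1)  # k-NN distances (skip self)
    v1 = points - centroid                        # offset to centroid
    v2 = points[order[:, 1]] - points             # offset to nearest neighbour
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-9)
    return np.concatenate([d2c, nn_d, cos[:, None]], axis=1)

def kabsch(src, dst, weights=None):
    """Least-squares rigid transform (R, t) such that R @ src_i + t ~ dst_i."""
    w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = ((src - mu_s) * w[:, None]).T @ (dst - mu_d)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Quick check: the descriptor is unchanged by a random orthogonal transform,
# and kabsch recovers a known rotation from exact correspondences.
rng = np.random.default_rng(0)
pts = rng.random((128, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(rotation_invariant_features(pts),
                   rotation_invariant_features(pts @ Q.T), atol=1e-6)
R_true = Q * np.sign(np.linalg.det(Q))   # force a proper rotation
R_est, t_est = kabsch(pts, pts @ R_true.T + np.array([0.1, -0.2, 0.3]))
assert np.allclose(R_est, R_true, atol=1e-6)

In a learning-based pipeline of the kind the abstract describes, descriptors along these lines would feed a network that predicts soft point-pair correspondences, and a weighted Kabsch step (or an equivalent differentiable solver) would turn those correspondences into the final rotation and translation.
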
id doaj-art-5906de2d6f4f43d191f0697459c115a6
institution Kabale University
issn 2045-2322
affiliation Zeyang Liu, Zhiguo Lu, Yancong Shan: Department of Mechanical Engineering and Automation, Northeastern University