SLPOD: superclass learning on point cloud object detection

Bibliographic Details
Main Authors: Xiaokang Yang, Kai Zhang, Yangyue Feng, Beibei Su, Yiming Cai, Kaibo Zhang, Zhiheng Zhang
Format: Article
Language: English
Published: Springer 2025-03-01
Series: Complex & Intelligent Systems
Subjects:
Online Access: https://doi.org/10.1007/s40747-025-01781-4
Description
Summary: In the realm of point cloud object detection, classification tasks emphasize extracting common features to enhance generalization, often at the expense of individual-specific features. This limitation becomes particularly evident when handling intricate datasets like KITTI. Traditional models struggle to adequately capture individual-specific features, resulting in a scattered distribution of samples within the feature space and compromising the precision of object bounding boxes. To tackle this challenge, we introduce SLPOD, a superclass-based point cloud object detection algorithm. Employing a Siamese network structure, SLPOD conducts unsupervised clustering of samples within the same category to enhance the extraction of individual-specific features, thereby improving detection accuracy on complex datasets. Additionally, our approach integrates strategies such as voxel and point cloud feature fusion, global feature acquisition, and dynamic adjustment of sampling rates based on point sparsity, further enhancing the network's capability to extract features. Experimental results demonstrate that SLPOD outperforms baseline algorithms in mean Average Precision on both the KITTI and Waymo datasets, exhibiting robustness across diverse scenarios.
ISSN: 2199-4536, 2198-6053
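
The summary above describes SLPOD's central idea of unsupervised clustering of samples within one category so that individual-specific features are not averaged away. The sketch below is a rough illustration of that kind of within-category clustering only, not the authors' implementation: the function name, the use of scikit-learn's KMeans, and the number of sub-clusters are all assumptions made for the example.

```python
# Illustrative sketch only: grouping same-category detection embeddings into
# sub-clusters, loosely following the "superclass" idea described in the summary.
# The helper name, the fixed number of clusters, and the use of scikit-learn
# are assumptions, not the paper's actual method.
import numpy as np
from sklearn.cluster import KMeans


def cluster_within_category(embeddings: np.ndarray, n_subclusters: int = 4) -> np.ndarray:
    """Assign each sample of one category (e.g. 'Car') to an unsupervised sub-cluster.

    embeddings: (N, D) feature vectors produced by a detection backbone.
    Returns an (N,) array of sub-cluster ids that could serve as auxiliary
    targets to keep individual-specific features from being averaged away.
    """
    kmeans = KMeans(n_clusters=n_subclusters, n_init=10, random_state=0)
    return kmeans.fit_predict(embeddings)


if __name__ == "__main__":
    # Fake backbone features for 100 'Car' samples, 64-D each.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 64)).astype(np.float32)
    sub_ids = cluster_within_category(feats, n_subclusters=4)
    print("sub-cluster sizes:", np.bincount(sub_ids))
```

In a detection pipeline, such sub-cluster ids could serve as extra supervision alongside the usual category labels; how SLPOD actually integrates them with its Siamese structure is detailed in the article itself.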