A LiDAR-camera fusion detection method based on weight allocation


Bibliographic Details
Main Authors: Kang Haotian, Wang Tianshu
Format: Article
Language: English
Published: EDP Sciences 2025-01-01
Series: ITM Web of Conferences
Online Access: https://www.itm-conferences.org/articles/itmconf/pdf/2025/01/itmconf_dai2024_01009.pdf
Description
Summary: In the object detection problem for autonomous driving, neural networks are applied to two sensing modalities, vision and LiDAR, each of which has relatively mature neural-network-based models. Combining the two so that they complement each other has become a hot topic. At present, most autonomous driving sensor fusion methods focus on the fusion strategy and feature alignment, and few studies address the weight ratio of the two sensors after fusion in different environments. In this paper, a fused camera-LiDAR object detection model is proposed based on a weighted weight allocation method. A weighted fusion scheme is adopted: image feature points are extracted by Fast RCNN, the LiDAR point cloud data is then fused into the model by the weighted method, environment variables are introduced, and different weight allocations are output for different environments through fully connected layer preprocessing. Results on the nuScenes dataset show that, compared with the results without weight assignment, the model can effectively achieve targeted weight assignment in different situations, and its performance is superior to single-sensor methods.
ISSN:2271-2097
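
Note: The summary above describes environment-conditioned weighted fusion, in which a fully connected layer preprocesses environment variables and outputs the weights used to combine camera and LiDAR features. The following is a minimal illustrative sketch in PyTorch, not the authors' code; the module name WeightedFusion, the feature dimensions, and the choice of environment variables are all assumptions made for illustration.

    # Sketch: environment variables -> fully connected layers -> two sensor weights,
    # which scale camera and LiDAR feature vectors before combination.
    import torch
    import torch.nn as nn

    class WeightedFusion(nn.Module):
        def __init__(self, env_dim: int, hidden_dim: int = 32):
            super().__init__()
            # Fully connected preprocessing of environment variables into two weights
            self.weight_net = nn.Sequential(
                nn.Linear(env_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 2),
            )

        def forward(self, cam_feat, lidar_feat, env):
            # Softmax keeps the two weights positive and summing to one
            w = torch.softmax(self.weight_net(env), dim=-1)   # shape (B, 2)
            w_cam, w_lidar = w[:, :1], w[:, 1:]               # shape (B, 1) each
            # Weighted combination of the per-sensor feature vectors
            return w_cam * cam_feat + w_lidar * lidar_feat

    # Example usage with random stand-in features (batch of 4, 256-dim features,
    # 3 hypothetical environment variables such as lighting or weather descriptors):
    fusion = WeightedFusion(env_dim=3)
    cam = torch.randn(4, 256)     # e.g. pooled Fast RCNN image features
    lidar = torch.randn(4, 256)   # e.g. encoded point-cloud features
    env = torch.randn(4, 3)
    fused = fusion(cam, lidar, env)
    print(fused.shape)            # torch.Size([4, 256])

In this sketch the softmax keeps the two sensor weights positive and summing to one, so conditions that degrade one sensor (for example, night scenes for the camera) can shift weight toward the other.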