C2L3-Fusion: An Integrated 3D Object Detection Method for Autonomous Vehicles
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Sensors |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1424-8220/25/9/2688 |
| Summary: | Accurate 3D object detection is crucial for autonomous vehicles (AVs) to navigate safely in complex environments. This paper introduces a novel fusion framework that integrates camera image-based <b>2D object detection using YOLOv8</b> with LiDAR-based <b>3D object detection using PointPillars, hence the name C2L3-Fusion</b>. Unlike conventional fusion approaches, which often struggle with feature misalignment, <b>C2L3-Fusion</b> enhances spatial consistency and multi-level feature aggregation, significantly improving detection accuracy. The method outperforms state-of-the-art approaches such as the YoPi-CLOCs fusion network, standalone YOLOv8, and standalone PointPillars, achieving mean Average Precision (mAP) scores of <b>89.91% (easy), 79.26% (moderate), and 78.01% (hard)</b> on the KITTI dataset. Implemented on the NVIDIA Jetson AGX Xavier embedded platform, <b>C2L3-Fusion</b> maintains real-time performance while improving robustness, making it well suited for self-driving vehicles. The paper details the methodology, mathematical formulations, algorithmic advancements, and real-world testing of C2L3-Fusion, offering a comprehensive solution for 3D object detection in autonomous navigation. |
| ISSN: | 1424-8220 |
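The abstract describes fusing camera-based 2D detections (YOLOv8) with LiDAR-based 3D detections (PointPillars). The paper's actual fusion mechanism is not reproduced in this record, but the general camera-LiDAR late-fusion idea it builds on can be sketched as: project each LiDAR 3D box into the image plane via the camera projection matrix, then keep 3D detections whose projection is confirmed by an overlapping 2D detection. All function names, the projection matrix, and the IoU threshold below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def project_box_to_image(corners_3d, P):
    """Project 8 3D box corners (camera frame, metres) through the 3x4
    projection matrix P and return the enclosing 2D box [x1, y1, x2, y2]."""
    pts = np.hstack([corners_3d, np.ones((8, 1))])  # homogeneous coords
    uvw = pts @ P.T                                  # (8, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]                    # perspective divide
    return np.array([uv[:, 0].min(), uv[:, 1].min(),
                     uv[:, 0].max(), uv[:, 1].max()])

def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(boxes_2d, boxes_3d_corners, P, iou_thresh=0.5):
    """Keep each LiDAR 3D detection only if some camera 2D box overlaps
    its image projection -- a simple consistency filter, not the paper's
    C2L3-Fusion method. Returns indices of the surviving 3D detections."""
    kept = []
    for i, corners in enumerate(boxes_3d_corners):
        proj = project_box_to_image(corners, P)
        if any(iou_2d(proj, b2d) >= iou_thresh for b2d in boxes_2d):
            kept.append(i)
    return kept
```

In KITTI terms, `P` would come from the per-frame calibration file (e.g. `P2` for the left colour camera); the matching step here is a greedy any-overlap check, whereas learned fusion networks score each 2D-3D pair instead.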