Combining Camera–LiDAR Fusion and Motion Planning Using Bird’s-Eye View Representation for End-to-End Autonomous Driving
End-to-end autonomous driving has become a key research focus in autonomous vehicles. However, existing methods struggle with effectively fusing heterogeneous sensor inputs and converting dense perceptual features into sparse motion representations. To address these challenges, we propose BevDrive,...
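The abstract describes fusing camera and LiDAR features in a shared bird's-eye-view (BEV) grid. As a hedged illustration only (not the paper's actual BevDrive architecture, whose details the truncated abstract does not give), a common way to fuse two BEV feature maps is channel concatenation followed by a 1x1 projection; all shapes and the random weights below are assumptions for the sketch:

```python
import numpy as np

# Hypothetical sketch: fuse camera and LiDAR BEV feature maps by
# channel concatenation, then a per-cell linear projection (a 1x1
# convolution). Random arrays stand in for real network features
# and trained weights; shapes are illustrative assumptions.
rng = np.random.default_rng(0)

H, W = 200, 200                  # assumed BEV grid resolution
C_cam, C_lidar, C_out = 64, 64, 128

cam_bev = rng.standard_normal((C_cam, H, W)).astype(np.float32)
lidar_bev = rng.standard_normal((C_lidar, H, W)).astype(np.float32)

# Concatenate along the channel axis: (C_cam + C_lidar, H, W)
fused = np.concatenate([cam_bev, lidar_bev], axis=0)

# A 1x1 convolution is equivalent to a linear map over channels,
# applied independently at every BEV cell.
w = rng.standard_normal((C_out, C_cam + C_lidar)).astype(np.float32)
bev_features = np.einsum('oc,chw->ohw', w, fused)

print(bev_features.shape)  # (128, 200, 200)
```

Downstream, such a dense fused grid would be queried or pooled into sparse motion tokens for planning; the concatenate-then-project step above is only the generic fusion pattern.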
| Main Authors: | Ze Yu, Jun Li, Yuzhen Wei, Yuandong Lyu, Xiaojun Tan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Drones |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2504-446X/9/4/281 |
Similar Items
- Polynomial and Differential Networks for End-to-End Autonomous Driving
  by: Youngseong Cho, et al.
  Published: (2025-01-01)
- End-to-End Online Vectorized Map Construction With Confidence Estimates
  by: Bo Huang, et al.
  Published: (2025-01-01)
- LiGenCam: Reconstruction of Color Camera Images from Multimodal LiDAR Data for Autonomous Driving
  by: Minghao Xu, et al.
  Published: (2025-07-01)
- CL-fusionBEV: 3D object detection method with camera-LiDAR fusion in Bird's Eye View
  by: Peicheng Shi, et al.
  Published: (2024-07-01)
- Attention-Based LiDAR–Camera Fusion for 3D Object Detection in Autonomous Driving
  by: Zhibo Wang, et al.
  Published: (2025-05-01)