OmniOcc: Cylindrical Voxel-Based Semantic Occupancy Prediction for Omnidirectional Vision Systems


Bibliographic Details
Main Authors: Chaofan Wu, Jiaheng Li, Jinghao Cao, Ming Li, Sidan Du, Yang Li
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11113275/
Description
Summary: Accurate 3D perception is essential for autonomous driving. Traditional methods often struggle with geometric ambiguity due to the lack of geometric priors. To address this, we introduce geometric priors through omnidirectional depth estimation. Based on the depth information, we propose a Sketch-Coloring framework, OmniOcc. Additionally, our approach introduces a cylindrical voxel representation based on polar coordinates to better align with the radial nature of panoramic camera views. To address the lack of fisheye camera datasets for autonomous driving tasks, we also build a virtual scene dataset with six fisheye cameras. Experimental results demonstrate that our Sketch-Coloring network significantly enhances 3D perception performance.
ISSN: 2169-3536
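
For illustration, the cylindrical voxel representation described in the summary can be thought of as a binning step in polar coordinates: each 3D point is assigned to a (radius, azimuth, height) cell rather than a Cartesian (x, y, z) cell, which matches the radial geometry of panoramic views. The function below is a minimal sketch, not the authors' implementation; the grid resolution and range limits are assumed values chosen only for the example.

import numpy as np

def cartesian_to_cylindrical_voxels(points,
                                    r_range=(0.0, 50.0),
                                    z_range=(-3.0, 5.0),
                                    grid=(128, 360, 32)):
    # Map (N, 3) Cartesian points to (radius, azimuth, height) voxel indices.
    # Bin counts and ranges are assumptions; the paper's settings may differ.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2)          # radial distance from the sensor
    theta = np.arctan2(y, x)          # azimuth angle in (-pi, pi]
    n_r, n_theta, n_z = grid
    r_idx = ((r - r_range[0]) / (r_range[1] - r_range[0]) * n_r).astype(int)
    t_idx = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int)
    z_idx = ((z - z_range[0]) / (z_range[1] - z_range[0]) * n_z).astype(int)
    # Clip so boundary points stay inside the grid.
    return np.stack([np.clip(r_idx, 0, n_r - 1),
                     np.clip(t_idx, 0, n_theta - 1),
                     np.clip(z_idx, 0, n_z - 1)], axis=1)

# Example: points farther from the sensor fall into larger radial bins,
# while the azimuth bins cover the full 360-degree panoramic field of view.
pts = np.array([[1.0, 0.0, 0.0], [10.0, 10.0, 1.0], [-30.0, 5.0, 2.0]])
print(cartesian_to_cylindrical_voxels(pts))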