Vision-Based Branch Road Detection for Intersection Navigation in Unstructured Environment Using Multi-Task Network

Bibliographic Details
Main Authors: Joonwoo Ahn, Yangwoo Lee, Minsoo Kim, Jaeheung Park
Format: Article
Language: English
Published: Wiley 2022-01-01
Series: Journal of Advanced Transportation
Online Access:http://dx.doi.org/10.1155/2022/9328398
Description
Summary: Autonomous vehicles need a driving method that is less dependent on localization data to navigate intersections in unstructured environments, because such data may not be accurate there. Vision-based methods that distinguish the branch roads at an intersection and apply them to intersection navigation have been studied. Model-based detection methods recognize patterns of the branch roads but are sensitive to sensor noise and difficult to apply to varied, complex situations. Therefore, this study proposes a method for detecting branch roads at an intersection using deep learning. The proposed multi-task deep neural network represents each branch road as a rotated bounding box and also recognizes the drivable area, so that obstacles inside the box can be considered. From the network output, an occupancy grid map containing a single branch road at the intersection is obtained, which can be used as an input to existing motion-planning algorithms that do not consider intersections. Experiments in real environments show that the proposed method detected branch roads more accurately than a model-based detection method and that the vehicle drove safely at the intersection.
ISSN: 2042-3195
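
The abstract describes combining the network's rotated branch-road bounding box with its drivable-area output into an occupancy grid map for a downstream motion planner. The Python sketch below only illustrates that idea under stated assumptions: a grid-aligned bird's-eye-view representation, NumPy and OpenCV for rasterization, and an OpenCV-style box convention ((cx, cy), (w, h), angle). The function name, grid values, and example numbers are hypothetical and are not the authors' implementation.

    # Hypothetical sketch: fuse a rotated branch-road box with a drivable-area
    # mask into a binary occupancy grid (0 = free, 1 = occupied).
    import numpy as np
    import cv2

    def branch_road_occupancy(drivable_mask, box_center, box_size, box_angle_deg):
        # drivable_mask: H x W uint8 grid, 1 = drivable, 0 = not drivable.
        # box_center and box_size are in grid cells; box_angle_deg is the box rotation.
        h, w = drivable_mask.shape
        # Rasterize the rotated bounding box of the selected branch road.
        corners = cv2.boxPoints((box_center, box_size, box_angle_deg)).astype(np.int32)
        inside_box = np.zeros((h, w), dtype=np.uint8)
        cv2.fillPoly(inside_box, [corners], 1)
        # A cell is free only if it lies inside the branch-road box and is drivable,
        # so obstacles detected inside the box remain marked as occupied.
        free = (inside_box == 1) & (drivable_mask == 1)
        return np.where(free, 0, 1).astype(np.uint8)

    # Example: 200 x 200 grid, drivable everywhere except a small obstacle.
    mask = np.ones((200, 200), dtype=np.uint8)
    mask[90:110, 120:140] = 0
    grid = branch_road_occupancy(mask, box_center=(100.0, 100.0),
                                 box_size=(80.0, 160.0), box_angle_deg=30.0)

A grid built this way could then be handed to a standard grid-based motion planner, which is the role the abstract attributes to the occupancy map.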