MHFS-FORMER: Multiple-Scale Hybrid Features Transformer for Lane Detection

Bibliographic Details
Main Authors: Dongqi Yan, Tao Zhang
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/9/2876
Description
Summary: Although deep learning has exhibited remarkable performance in lane detection, the task remains challenging in complex scenarios, including those with damaged lane markings, obstructions, and insufficient lighting. Furthermore, a significant drawback of most existing lane-detection algorithms lies in their reliance on complex post-processing and strong prior knowledge. Inspired by the DETR architecture, we propose an end-to-end Transformer-based model, MHFS-FORMER, to resolve these issues. To handle interference in complex scenarios, we design MHFNet, which fuses multi-scale features with the Transformer Encoder to obtain enhanced multi-scale features; these are then fed into the Transformer Decoder. A novel multi-reference deformable attention module disperses attention around the objects, enhancing the model's representation ability during training and better capturing the elongated structure of lanes and the global environment. We also design ShuffleLaneNet, which exploits the channel and spatial information of multi-scale lane features and significantly improves the accuracy of target recognition. Our method achieves an accuracy of 96.88% and a real-time speed of 87 fps on the TuSimple dataset, and an F1 score of 77.38% on the CULane dataset, demonstrating excellent performance compared with both CNN-based and Transformer-based methods.
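
The abstract does not include implementation details. As a rough illustration of the multi-reference deformable attention idea it describes (each lane query sampling features at learned offsets around several reference points, rather than one), the following is a minimal PyTorch sketch; the class name, tensor shapes, and the use of a single fused feature map are assumptions, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiRefDeformableAttention(nn.Module):
    """Sketch of deformable attention with several reference points per query.

    Each query predicts sampling offsets and weights around n_refs reference
    points, then aggregates bilinearly sampled features. Names and shapes
    are illustrative only, not taken from MHFS-FORMER.
    """
    def __init__(self, dim=256, n_heads=8, n_refs=4, n_points=4):
        super().__init__()
        assert dim % n_heads == 0
        self.n_heads, self.n_refs, self.n_points = n_heads, n_refs, n_points
        self.head_dim = dim // n_heads
        self.offset_proj = nn.Linear(dim, n_heads * n_refs * n_points * 2)
        self.weight_proj = nn.Linear(dim, n_heads * n_refs * n_points)
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, queries, ref_points, feat):
        # queries: (B, Q, C); ref_points: (B, Q, n_refs, 2), normalized to [0, 1]
        # feat: (B, C, H, W), a single fused feature map for simplicity
        B, Q, C = queries.shape
        H, W = feat.shape[-2:]
        value = self.value_proj(feat.flatten(2).transpose(1, 2))   # (B, HW, C)
        value = value.transpose(1, 2).reshape(B * self.n_heads, self.head_dim, H, W)

        off = self.offset_proj(queries).reshape(
            B, Q, self.n_heads, self.n_refs, self.n_points, 2)
        w = self.weight_proj(queries).reshape(
            B, Q, self.n_heads, self.n_refs * self.n_points).softmax(-1)

        # sampling locations = reference points + normalized learned offsets
        scale = torch.tensor([W, H], dtype=feat.dtype, device=feat.device)
        loc = ref_points[:, :, None, :, None, :] + off / scale
        grid = (2 * loc - 1).permute(0, 2, 1, 3, 4, 5)             # map to [-1, 1]
        grid = grid.reshape(B * self.n_heads, Q, self.n_refs * self.n_points, 2)

        sampled = F.grid_sample(value, grid, align_corners=False)  # (B*h, hd, Q, R*P)
        w = w.permute(0, 2, 1, 3).reshape(B * self.n_heads, 1, Q, -1)
        out = (sampled * w).sum(-1)                                # (B*h, hd, Q)
        out = out.reshape(B, C, Q).transpose(1, 2)                 # (B, Q, C)
        return self.out_proj(out)
```

Spreading each query's samples over multiple reference points is what lets attention follow an elongated lane rather than concentrating on a single location.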
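ShuffleLaneNet's name suggests a channel-shuffle style exchange between grouped channel and spatial branches; the paper's actual design is not given here, but the generic ShuffleNet channel-shuffle operation it presumably builds on looks like this (a standard sketch, not MHFS-FORMER code):

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Interleave channels across groups so that grouped branches
    # exchange information: (B, C, H, W) -> (B, C, H, W).
    b, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).reshape(b, c, h, w)
    return x

# usage: mix features from two grouped branches before further attention
feats = torch.randn(1, 64, 20, 50)
mixed = channel_shuffle(feats, groups=2)
```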
ISSN: 1424-8220