LEAD-YOLO: A Lightweight and Accurate Network for Small Object Detection in Autonomous Driving


Bibliographic Details
Main Authors: Yunchuan Yang, Shubin Yang, Qiqing Chan
Format: Article
Language: English
Published: MDPI AG 2025-08-01
Series: Sensors
Subjects: autonomous driving; object detection; YOLOv11n; small object; lightweight
Online Access: https://www.mdpi.com/1424-8220/25/15/4800
author Yunchuan Yang
Shubin Yang
Qiqing Chan
collection DOAJ
description The accurate detection of small objects remains a critical challenge in autonomous driving systems, where improving detection performance typically comes at the cost of increased model complexity, conflicting with the lightweight requirements of edge deployment. To address this dilemma, this paper proposes LEAD-YOLO (Lightweight Efficient Autonomous Driving YOLO), an enhanced network architecture based on YOLOv11n that achieves superior small object detection while maintaining computational efficiency. The proposed framework incorporates three innovative components: First, the Backbone integrates a lightweight Convolutional Gated Transformer (CGF) module, which employs normalized gating mechanisms with residual connections, and a Dilated Feature Fusion (DFF) structure that enables progressive multi-scale context modeling through dilated convolutions. These components synergistically enhance small object perception and environmental context understanding without compromising network efficiency. Second, the neck features a hierarchical feature fusion module (HFFM) that establishes guided feature aggregation paths through hierarchical structuring, facilitating collaborative modeling between local structural information and global semantics for robust multi-scale object detection in complex traffic scenarios. Third, the head implements a shared feature detection head (SFDH) structure, incorporating shared convolution modules for efficient cross-scale feature sharing and detail enhancement branches for improved texture and edge modeling. Extensive experiments validate the effectiveness of LEAD-YOLO: on the nuImages dataset, the method achieves 3.8% and 5.4% improvements in mAP@0.5 and mAP@[0.5:0.95], respectively, while reducing parameters by 24.1%. On the VisDrone2019 dataset, performance gains reach 7.9% and 6.4% for corresponding metrics. These findings demonstrate that LEAD-YOLO achieves an excellent balance between detection accuracy and model efficiency, thereby showcasing substantial potential for applications in autonomous driving.
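The abstract describes two backbone ideas in enough detail to illustrate: a convolutional block whose features are modulated by a normalized gate and wrapped in a residual connection (the CGF module), and a dilated-convolution fusion structure that aggregates progressively larger receptive fields (the DFF). The paper's actual layer layout is not reproduced in this record, so the PyTorch sketch below is only an illustration of such blocks; all module names, channel sizes, dilation rates, and activations are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a gated convolutional block with a
# normalized (sigmoid) gate and residual connection, and a dilated multi-branch
# fusion block, roughly in the spirit of the CGF and DFF modules described in
# the abstract. Channel sizes and dilation rates are illustrative assumptions.
import torch
import torch.nn as nn


class GatedConvBlock(nn.Module):
    """Conv features modulated by a normalized gate, with a residual shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        # Gate branch: 1x1 conv -> BN -> sigmoid gives per-pixel, per-channel
        # weights in (0, 1) that scale the feature branch.
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection around the gated features.
        return x + self.feat(x) * self.gate(x)


class DilatedFusionBlock(nn.Module):
    """Parallel 3x3 convs with growing dilation, fused by a 1x1 conv."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.SiLU(),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the multi-dilation branches, project back, add residual.
        return x + self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)            # e.g. a P3-level feature map
    y = DilatedFusionBlock(64)(GatedConvBlock(64)(x))
    print(y.shape)                            # torch.Size([1, 64, 80, 80])
```

The gate keeps the block cheap (a single 1x1 convolution) while letting the network suppress background responses around small objects; the increasing dilation rates widen context without extra downsampling, which is consistent with the "progressive multi-scale context modeling" the abstract attributes to DFF.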
format Article
id doaj-art-399e9ec8b3464e148cd98d2288c2d744
institution Kabale University
issn 1424-8220
language English
publishDate 2025-08-01
publisher MDPI AG
record_format Article
series Sensors
spelling doaj-art-399e9ec8b3464e148cd98d2288c2d744 (indexed 2025-08-20T03:36:23Z)
doi 10.3390/s25154800
citation Sensors, Vol. 25, No. 15, Article 4800, MDPI AG, published 2025-08-01
affiliation Yunchuan Yang: School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
affiliation Shubin Yang: School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
affiliation Qiqing Chan: School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
title LEAD-YOLO: A Lightweight and Accurate Network for Small Object Detection in Autonomous Driving
topic autonomous driving
object detection
YOLOv11n
small object
lightweight
url https://www.mdpi.com/1424-8220/25/15/4800