An efficient and lightweight detection method for stranded elastic needle defects in complex industrial environments using VEE-YOLO

Bibliographic Details
Main Authors: Qiaoqiao Xiong, Qipeng Chen, Saihong Tang, Yiting Li
Format: Article
Language: English
Published: Nature Portfolio 2025-01-01
Series: Scientific Reports
Online Access:https://doi.org/10.1038/s41598-025-85721-9
Description
Summary: Abstract Deep learning has achieved significant success in the field of defect detection; however, challenges remain in detecting small, densely packed parts under complex working conditions, including occlusion and unstable lighting. This paper adopts YOLOv8-n as the core network and proposes VEE-YOLO, a robust and high-performance defect detection model. Firstly, GSConv was introduced to enhance feature extraction in depthwise separable convolution, and the VOVGSCSP module was built on it, emphasizing feature reuse for more effective feature engineering. Secondly, the quality of feature extraction was improved by encoding inter-channel information with efficient multi-scale attention to account for channel importance; the precise integration of spatial structure and channel information further enhanced the model's overall feature extraction capability. Finally, EIoU Loss replaced CIoU Loss to address bounding box aspect-ratio variability and sample imbalance, significantly improving overall detection performance. The algorithm was evaluated on a stranded elastic needle defect detection dataset. The experimental results show that the enhanced VEE-YOLO reduces the model size from 6.096 M to 5.486 M, increases detection speed from 179 FPS to 244 FPS, and achieves a mAP of 0.926. These gains across multiple metrics make it well suited for deploying the detection model in complex industrial environments.
ISSN: 2045-2322
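
The abstract's replacement of CIoU with EIoU Loss matches the EIoU formulation commonly used in the detection literature, which penalizes the center distance together with separate width and height gaps measured against the smallest enclosing box. The PyTorch sketch below illustrates that general formulation only; the function name, the (x1, y1, x2, y2) box format, and the mean reduction are assumptions for illustration, not details taken from the paper.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """Minimal EIoU loss sketch for axis-aligned boxes in (x1, y1, x2, y2) format.

    L_EIoU = 1 - IoU + rho^2(b, b_gt)/c^2 + (w - w_gt)^2/c_w^2 + (h - h_gt)^2/c_h^2,
    where c, c_w, c_h are the diagonal, width, and height of the smallest
    enclosing box. Shapes: pred and target are (N, 4) tensors of paired boxes.
    """
    # Intersection area
    inter_x1 = torch.max(pred[:, 0], target[:, 0])
    inter_y1 = torch.max(pred[:, 1], target[:, 1])
    inter_x2 = torch.min(pred[:, 2], target[:, 2])
    inter_y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (inter_x2 - inter_x1).clamp(0) * (inter_y2 - inter_y1).clamp(0)

    # Union and IoU
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Smallest enclosing box and its squared diagonal
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared distance between box centers
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2

    # EIoU replaces CIoU's aspect-ratio term with separate width and height penalties
    loss = (1 - iou
            + rho2 / c2
            + (w1 - w2) ** 2 / (cw ** 2 + eps)
            + (h1 - h2) ** 2 / (ch ** 2 + eps))
    return loss.mean()
```

Because the width and height gaps are penalized independently rather than through a single aspect-ratio term, this form avoids the degenerate gradients CIoU exhibits when predicted and ground-truth boxes share an aspect ratio but differ in size, which is the bounding-box variability issue the abstract points to.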