A Robust YOLOv5 Model with SE Attention and BIFPN for Jishan Jujube Detection in Complex Agricultural Environments

Bibliographic Details
Main Authors: Hao Chen, Lijun Su, Yiren Tian, Yixin Chai, Gang Hu, Weiyi Mu
Format: Article
Language:English
Published: MDPI AG 2025-03-01
Series:Agriculture
Online Access:https://www.mdpi.com/2077-0472/15/6/665
Description
Summary:This study presents an improved detection model based on the YOLOv5 (You Only Look Once version 5) framework to enhance the accuracy of Jishan jujube detection in complex natural environments, particularly with varying degrees of occlusion and dense foliage. To improve detection performance, we integrate an SE (squeeze-and-excitation) attention module into the backbone network to enhance the model’s ability to focus on target objects while suppressing background noise. Additionally, the original neck network is replaced with a BIFPN (bi-directional feature pyramid network) structure, enabling efficient multiscale feature fusion and improving the extraction of critical features, especially for small and occluded fruits. The experimental results demonstrate that the improved YOLOv5 model achieves a mean average precision (mAP) of 96.5%, outperforming the YOLOv3, YOLOv4, YOLOv5, and SSD (Single-Shot Multibox Detector) models by 7.4%, 9.9%, 2.5%, and 0.8%, respectively. Furthermore, the proposed model improves precision (95.8%) and F1 score (92.4%), reducing false positives and achieving a better balance between precision and recall. These results highlight the model’s effectiveness in addressing missed detections of small and occluded fruits while maintaining higher confidence in predictions.
ISSN:2077-0472