URT-YOLOv11: A Large Receptive Field Algorithm for Detecting Tomato Ripening Under Different Field Conditions
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Agriculture |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2077-0472/15/10/1060 |
| Summary: | This study proposes an improved YOLOv11 model to address the limitations of traditional tomato recognition algorithms in complex agricultural environments, such as lighting changes, occlusion, scale variations, and complex backgrounds. These factors often hinder accurate feature extraction, leading to recognition errors and reduced computational efficiency. To overcome these challenges, the model integrates several architectural enhancements. First, the UniRepLKNet block replaces the C3k2 module in the standard network, improving computational efficiency, expanding the receptive field, and enhancing multi-scale target recognition. Second, the RFCBAMConv module in the neck integrates channel and spatial attention mechanisms, boosting small-object detection and robustness under varying lighting conditions. Finally, the TADDH module optimizes the detection head by balancing classification and regression tasks through task alignment strategies, further improving detection accuracy across different target scales. Ablation experiments confirm the contribution of each module to overall performance improvement. Experimental results demonstrate that the proposed model exhibits enhanced stability under special conditions, such as similar backgrounds, lighting variations, and object occlusion, while significantly improving both accuracy and computational efficiency. The model achieves an accuracy of 85.4%, recall of 80.3%, and mAP@50 of 87.3%. Compared to the baseline YOLOv11, the improved model increases mAP@50 by 2.2% while reducing parameters to 2.16 M, making it well-suited for real-time applications in resource-constrained environments. This study provides an efficient and practical solution for intelligent agriculture, enhancing real-time tomato detection and laying a solid foundation for future crop monitoring systems. |
| ISSN: | 2077-0472 |
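The summary states that the RFCBAMConv module in the neck combines channel and spatial attention. As a rough illustration of that general idea only, the PyTorch sketch below applies CBAM-style channel and spatial attention after a convolution. It is not the paper's RFCBAMConv implementation (which the summary indicates also targets receptive-field behavior); the class name, layer choices, and shapes are assumptions made for this example.

```python
import torch
import torch.nn as nn


class ChannelSpatialAttentionConv(nn.Module):
    """Convolution followed by CBAM-style channel and spatial attention.

    Illustrative sketch only; not the RFCBAMConv module from the paper.
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, reduction: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(out_ch, out_ch // reduction, 1, bias=False),
            nn.SiLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.channel_mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from per-pixel mean and max over channels.
        stats = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * torch.sigmoid(self.spatial(stats))


if __name__ == "__main__":
    block = ChannelSpatialAttentionConv(64, 128)
    out = block(torch.randn(1, 64, 80, 80))
    print(out.shape)  # torch.Size([1, 128, 80, 80])
```

Attention of this kind re-weights feature maps cheaply, which is consistent with the summary's emphasis on small-object detection and robustness to lighting changes in a lightweight (2.16 M parameter) detector.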