Comparative analysis of attentional mechanisms in rice pest identification

Bibliographic Details
Main Authors: Yongjun Xiao, Xiangruo Zhang, Ziao Chen, Jingxuan Tan, Linyu Zhou, Chunxian Jiang, Lijia Xu, Zhiyong Li
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-08869-4
Description
Summary: Accurate detection of rice pests helps farmers take timely control measures. This study compares different attention mechanisms for rice pest detection in complex backgrounds and demonstrates that a human vision-inspired Bionic Attention (BA) mechanism outperforms most traditional attention mechanisms on this task and is applicable to mainstream and novel models. Bionic Attention assists the main branch in recognition by additionally labeling important features of each rice pest category and feeding these extra category labels into the network as bionic information at the input stage. The study applies Bionic Attention to classical and novel mainstream networks, including YOLOv5s, YOLOv8n, SSD, Faster R-CNN, YOLOv9-e, and YOLOv10-X, and compares it with classical attention mechanisms such as CBAM, SE, and SimAM to verify its feasibility. In addition, more detailed evaluation metrics are introduced to assess Bionic Attention, including Classification Error, Localization Error, Cls and Loc Error, Duplicate Detection Error, Background Error, and Missed GT Error. Experimental results show that Bionic Attention improves detection performance by indirectly enhancing the loss function, allowing the model to acquire more fine-grained information during the feature extraction stage and thereby improving detection accuracy.
ISSN: 2045-2322
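
The abstract describes Bionic Attention as injecting additional per-category labels as "bionic information" at the input stage. The following is a minimal illustrative sketch of that idea only, not the authors' implementation: it assumes each image-level pest category label is broadcast into a one-hot spatial map, concatenated with the RGB image, and fused back to three channels by a 1x1 convolution so that an unmodified detector backbone (e.g. YOLOv5s) can consume the result. The class count, module name, and fusion scheme are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class BionicAttentionInput(nn.Module):
    """Hypothetical input-stage injection of per-category 'bionic' label maps.

    Illustrative sketch, not the paper's code: each image-level pest category
    label is broadcast into a one-hot spatial map, concatenated with the RGB
    image, and projected back to 3 channels so an unmodified detector backbone
    can take the fused tensor as its normal 3-channel input.
    """

    def __init__(self, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        # 1x1 conv fuses (RGB + num_classes label maps) back into 3 channels.
        self.fuse = nn.Conv2d(3 + num_classes, 3, kernel_size=1)

    def forward(self, image: torch.Tensor, category_ids: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) float tensor; category_ids: (B,) int64 pest labels
        b, _, h, w = image.shape
        label_maps = torch.zeros(b, self.num_classes, h, w, device=image.device)
        label_maps[torch.arange(b), category_ids] = 1.0  # one-hot spatial maps
        return self.fuse(torch.cat([image, label_maps], dim=1))


if __name__ == "__main__":
    ba_input = BionicAttentionInput(num_classes=8)  # 8 pest categories (assumed)
    images = torch.rand(2, 3, 640, 640)
    labels = torch.tensor([1, 5])
    print(ba_input(images, labels).shape)  # torch.Size([2, 3, 640, 640])
```

Under these assumptions the extra category cues reach the network before feature extraction, which is consistent with the abstract's claim that the gain comes from richer fine-grained information during feature extraction rather than from an added attention module inside the backbone.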