Mitigating Catastrophic Forgetting in Pest Detection Through Adaptive Response Distillation

Bibliographic Details
Main Authors: Hongjun Zhang, Zhendong Yin, Dasen Li, Yanlong Zhao
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Agriculture
Subjects:
Online Access: https://www.mdpi.com/2077-0472/15/9/1006
Description
Summary: Pest detection in agriculture faces the challenge of adapting to new pest species while preserving the ability to recognize previously learned ones. Traditional model fine-tuning approaches often result in catastrophic forgetting, where the acquisition of new classes significantly impairs the recognition performance of existing ones. Although knowledge distillation has been shown to effectively mitigate catastrophic forgetting, current research predominantly focuses on feature imitation, neglecting the extraction of potentially valuable information from responses. To address this issue, we introduce a response-based distillation method, called adaptive response distillation (ARD). ARD incorporates an adaptive response filtering strategy that dynamically adjusts the weights of classification and regression responses based on the significance of the information. This approach selectively filters and transfers valuable response data, ensuring efficient propagation of category and localization information. Our method effectively reduces catastrophic forgetting during incremental learning, enabling the student detector to maintain memory of old classes while assimilating new pest categories. Experimental evaluations on the large-scale IP102 pest dataset demonstrate that the proposed ARD method consistently outperforms existing state-of-the-art algorithms across various class-incremental learning scenarios, significantly narrowing the performance gap compared to fully trained models.
ISSN:2077-0472
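
The summary above describes adaptive response distillation only at a conceptual level: teacher responses (class logits and box regressions) are filtered and re-weighted by how informative they are before being distilled into the student. The sketch below illustrates what such a significance-weighted response distillation loss could look like in PyTorch. The function name, the confidence-based significance score, the threshold, and the temperature are illustrative assumptions for the sketch, not the paper's exact formulation.

# Minimal sketch of a response-based distillation loss with adaptive
# filtering, assuming a PyTorch detector that exposes per-anchor class
# logits and box regressions. The confidence-based weighting heuristic
# and all constants are illustrative assumptions, not the ARD paper's
# exact formulation.
import torch
import torch.nn.functional as F


def adaptive_response_distillation_loss(
    student_cls,   # (N, C) class logits from the student for N anchors
    student_reg,   # (N, 4) box regression outputs from the student
    teacher_cls,   # (N, C) class logits from the frozen old-class teacher
    teacher_reg,   # (N, 4) box regression outputs from the teacher
    conf_thresh=0.05,
    temperature=2.0,
):
    # Estimate how informative each teacher response is; here the
    # teacher's maximum class probability serves as a significance score.
    teacher_prob = teacher_cls.softmax(dim=-1)
    significance, _ = teacher_prob.max(dim=-1)           # (N,)

    # Filter out near-uninformative responses (mostly background anchors).
    keep = significance > conf_thresh
    if keep.sum() == 0:
        return student_cls.new_zeros(())

    weights = significance[keep]                          # adaptive per-response weights

    # Classification response transfer: soft-label KL divergence,
    # weighted by the significance of each retained response.
    kl = F.kl_div(
        F.log_softmax(student_cls[keep] / temperature, dim=-1),
        F.softmax(teacher_cls[keep] / temperature, dim=-1),
        reduction="none",
    ).sum(dim=-1)
    cls_loss = (weights * kl).sum() / weights.sum() * temperature ** 2

    # Localization response transfer: smooth-L1 between box regressions,
    # weighted the same way so confident teacher boxes contribute more.
    reg_err = F.smooth_l1_loss(
        student_reg[keep], teacher_reg[keep], reduction="none"
    ).sum(dim=-1)
    reg_loss = (weights * reg_err).sum() / weights.sum()

    return cls_loss + reg_loss

In this reading, the adaptive filtering is what distinguishes the approach from uniform response distillation: low-significance responses are dropped rather than averaged in, so the student's capacity is spent on the category and localization information the teacher is actually confident about.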