BDEKD: mitigating backdoor attacks in NLP models via ensemble knowledge distillation

Bibliographic Details
Main Authors: Zijie Zhang, Xinyuan Miao, Chenyu Zhou, Chenming Shang, Xi Chen, Xianglong Kong, Wei Huang, Yi Cao
Format: Article
Language: English
Published: Springer, 2025-07-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-025-02006-4
Description
Summary: Backdoor attacks present significant risks to the security of deep neural networks (DNNs) in the NLP domain, as attackers can covertly manipulate a model's output behavior either by poisoning the training data or by tampering with the training process. This paper introduces a novel backdoor defense strategy, Backdoor Defense via Ensemble Knowledge Distillation (BDEKD), to mitigate various types of backdoors in compromised DNNs. To the best of our knowledge, this is the first application of ensemble methods to backdoor mitigation. The BDEKD framework requires only a minimal subset of clean data to sanitize the compromised model, producing several relatively heterogeneous, backdoor-cleaned teacher models. The training data are then enriched through augmentation, and an ensemble distillation technique specifically designed to remove the backdoor is applied to the model. Our empirical analysis demonstrates that BDEKD lowers the success rate of six sophisticated backdoor attacks to approximately 17% while requiring only 20% of the training data. Crucially, it preserves the model's accuracy on clean data at around 85%, ensuring minimal impact on intended functionality. Our code is available at https://github.com/quanzhuangdefujinan/BDEKD-Research/tree/BDEKD.
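
To make the distillation step concrete, the sketch below shows generic soft-label ensemble knowledge distillation in PyTorch. It is an illustrative reconstruction, not the authors' released code: the function name ensemble_distill_step, the averaging of teacher logits, the temperature value, and the loss weighting alpha are all assumptions for illustration, and BDEKD's teacher-generation and data-augmentation stages are omitted.

    import torch
    import torch.nn.functional as F

    def ensemble_distill_step(student, teachers, inputs, labels, optimizer,
                              temperature=2.0, alpha=0.7):
        """One distillation step: the student matches the averaged soft
        predictions of several (assumed frozen, backdoor-cleaned) teacher
        models, plus a cross-entropy term on the small clean subset."""
        student.train()
        with torch.no_grad():
            # Average the teachers' logits. Heterogeneous teachers are
            # assumed to disagree on backdoored behavior, diluting it
            # in the averaged target distribution.
            teacher_logits = torch.stack([t(inputs) for t in teachers]).mean(dim=0)
        student_logits = student(inputs)

        # Soft-label distillation loss (Hinton-style), scaled by T^2
        # as is conventional when using a softmax temperature.
        kd_loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)

        # Hard-label loss on the clean labels.
        ce_loss = F.cross_entropy(student_logits, labels)

        loss = alpha * kd_loss + (1.0 - alpha) * ce_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Averaging soft predictions from several heterogeneous teachers is the standard mechanism by which an ensemble can dilute behavior that only some members retain, which matches the intuition the abstract attributes to BDEKD's backdoor-cleaned teacher ensemble.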
ISSN: 2199-4536, 2198-6053