Backdoor Attack Against Dataset Distillation in Natural Language Processing
Dataset distillation has become an important technique for improving data efficiency when training machine learning models. It finds extensive applications across various fields, including computer vision (CV) and natural language processing (NLP). However, it essentially consists of a deep n...
| Main Authors: | Yuhao Chen, Weida Xu, Sicong Zhang, Yang Xu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG (2024-12-01) |
| Series: | Applied Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3417/14/23/11425 |
Similar Items
- BDEKD: mitigating backdoor attacks in NLP models via ensemble knowledge distillation
  by: Zijie Zhang, et al.
  Published: (2025-07-01)
- A survey of backdoor attacks and defences: From deep neural networks to large language models
  by: Ling-Xin Jin, et al.
  Published: (2025-09-01)
- Multi-Targeted Textual Backdoor Attack: Model-Specific Misrecognition via Trigger Position and Word Choice
  by: Taehwa Lee, et al.
  Published: (2025-01-01)
- Clean-label backdoor attack on link prediction task
  by: Junming Mo, et al.
  Published: (2025-08-01)
- Text Select-Backdoor: Selective Backdoor Attack for Text Recognition Systems
  by: Hyun Kwon, et al.
  Published: (2024-01-01)