Effective defense against physically embedded backdoor attacks via clustering-based filtering
Abstract: Backdoor attacks pose a severe threat to the integrity of machine learning models, especially in real-world image classification tasks. In such attacks, adversaries embed malicious behaviors triggered by specific patterns in the training data, causing models to misclassify whenever the trig...
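The title names clustering-based filtering as the defense, though the truncated abstract does not detail the method. As a hedged illustration only (not the paper's actual algorithm), the sketch below shows one common form of such a defense: clustering per-class feature activations with k-means and discarding the minority cluster, on the assumption that poisoned samples form a small, distinct group. The function name `filter_suspicious` and the synthetic data are hypothetical and assume scikit-learn is available.

```python
import numpy as np
from sklearn.cluster import KMeans


def filter_suspicious(features, random_state=0):
    """Cluster one class's feature activations into two groups and
    flag the smaller cluster as potentially poisoned.

    This is a generic clustering-based-filtering heuristic, not the
    specific procedure from the article.
    """
    km = KMeans(n_clusters=2, n_init=10, random_state=random_state)
    labels = km.fit_predict(features)
    counts = np.bincount(labels, minlength=2)
    suspicious = int(np.argmin(counts))  # minority cluster
    return labels != suspicious          # True = keep the sample


# Synthetic stand-in data: 90 "clean" activations near one center and
# 10 "poisoned" activations near a distant center, mimicking the
# separation a trigger pattern can induce in feature space.
rng = np.random.default_rng(42)
clean = rng.normal(loc=0.0, scale=0.5, size=(90, 8))
poisoned = rng.normal(loc=5.0, scale=0.5, size=(10, 8))
features = np.vstack([clean, poisoned])

keep_mask = filter_suspicious(features)
print(keep_mask.sum())  # number of samples retained after filtering
```

In this toy setup the two groups are well separated, so the minority cluster coincides with the injected samples; on real data the defense's effectiveness depends on how distinctly the trigger shifts activations.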
| Main Author: | Mohammed Kutbi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-04-01 |
| Series: | Complex & Intelligent Systems |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s40747-025-01876-y |
Similar Items
- A Backdoor Approach With Inverted Labels Using Dirty Label-Flipping Attacks
  by: Orson Mengara
  Published: (2025-01-01)
- FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
  by: Qingya Wang, et al.
  Published: (2024-11-01)
- A Backdoor Attack Against LSTM-Based Text Classification Systems
  by: Jiazhu Dai, et al.
  Published: (2019-01-01)
- A4FL: Federated Adversarial Defense via Adversarial Training and Pruning Against Backdoor Attack
  by: Saeed-Uz-Zaman, et al.
  Published: (2025-01-01)
- CLB-Defense: based on contrastive learning defense for graph neural network against backdoor attack
  by: Jinyin CHEN, et al.
  Published: (2023-04-01)