A4FL: Federated Adversarial Defense via Adversarial Training and Pruning Against Backdoor Attack
Backdoor attacks threaten federated learning (FL) models: malicious participants embed hidden triggers into their local models during training. These triggers can compromise crucial applications, such as autonomous systems, when activated by specific inputs, causing targeted misclassification in...
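To make the attack concrete: in the classic BadNets-style scheme, a malicious FL client stamps a small trigger patch onto a fraction of its local training inputs and relabels them to an attacker-chosen target class; the resulting model behaves normally on clean data but misclassifies any input carrying the trigger. Below is a minimal NumPy sketch of that poisoning step; the function name, patch placement, and parameter values are illustrative assumptions, not the setup of the A4FL paper itself.

```python
import numpy as np

def poison_batch(images, labels, target_class=7, poison_frac=0.1,
                 patch_size=3, patch_value=1.0, seed=0):
    """BadNets-style poisoning sketch (illustrative; not A4FL's method).

    Stamps a small pixel patch (the hidden trigger) onto a random
    fraction of the images and flips their labels to `target_class`,
    so a model trained on this data learns trigger -> target_class.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_class  # targeted misclassification label
    return images, labels

# Toy usage: 64 grayscale 28x28 images, 10 classes.
x = np.random.rand(64, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=64)
x_poisoned, y_poisoned = poison_batch(x, y)
```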
Saved in:
| Main Authors: | Saeed-Uz-Zaman, Bin Li, Muhammad Hamid, Muhammad Saleem, Mohammed Aman |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10992684/ |
Similar Items
- A Backdoor Approach With Inverted Labels Using Dirty Label-Flipping Attacks
  by: Orson Mengara
  Published: (2025-01-01)
- Adversarial Training for Mitigating Insider-Driven XAI-Based Backdoor Attacks
  by: R. G. Gayathri, et al.
  Published: (2025-05-01)
- Backdoor defense method in federated learning based on contrastive training
  by: Jiale ZHANG, et al.
  Published: (2024-03-01)
- A survey of backdoor attacks and defences: From deep neural networks to large language models
  by: Ling-Xin Jin, et al.
  Published: (2025-09-01)
- Efficient Method for Robust Backdoor Detection and Removal in Feature Space Using Clean Data
  by: Donik Vrsnak, et al.
  Published: (2025-01-01)