An Optimal Two-Step Approach for Defense Against Poisoning Attacks in Federated Learning
Federated learning (FL) has gained widespread adoption for training artificial intelligence (AI) models while ensuring the confidentiality of client data. However, this privacy-preserving nature of FL also makes it vulnerable to poisoning attacks. To counter these attacks, several defense methods ha...
| Main Authors: | Yasir Ali, Kyung Hyun Han, Abdul Majeed, Joon S. Lim, Seong Oun Hwang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10946885/ |
Similar Items
- Securing federated learning: a defense strategy against targeted data poisoning attack
  by: Ansam Khraisat, et al.
  Published: (2025-02-01)
- Exploring the Limitations of Federated Learning: A Novel Wasserstein Metric-Based Poisoning Attack on Traffic Sign Classification
  by: Suzan Almutairi, et al.
  Published: (2025-01-01)
- AIDFL: An Information-Driven Anomaly Detector for Data Poisoning in Decentralized Federated Learning
  by: Xiao Chen, et al.
  Published: (2025-01-01)
- A Federated Weighted Learning Algorithm Against Poisoning Attacks
  by: Yafei Ning, et al.
  Published: (2025-04-01)
- Reducing Defense Vulnerabilities in Federated Learning: A Neuron-Centric Approach
  by: Eda Sena Erdol, et al.
  Published: (2025-05-01)