Exploring the Limitations of Federated Learning: A Novel Wasserstein Metric-Based Poisoning Attack on Traffic Sign Classification
Federated Learning (FL) enhances privacy but remains vulnerable to model poisoning attacks, where an adversary manipulates client models to upload *poisoned* updates during training, thereby compromising the overall FL model. Existing attack models often assume adversaries...
| Main Authors: | Suzan Almutairi, Ahmed Barnawi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11062639/ |
Similar Items
- A comprehensive analysis of model poisoning attacks in federated learning for autonomous vehicles: A benchmark study
  by: Suzan Almutairi, et al.
  Published: (2024-12-01)
- Securing federated learning: a defense strategy against targeted data poisoning attack
  by: Ansam Khraisat, et al.
  Published: (2025-02-01)
- A Federated Weighted Learning Algorithm Against Poisoning Attacks
  by: Yafei Ning, et al.
  Published: (2025-04-01)
- An Optimal Two-Step Approach for Defense Against Poisoning Attacks in Federated Learning
  by: Yasir Ali, et al.
  Published: (2025-01-01)
- A Verifiable, Privacy-Preserving, and Poisoning Attack-Resilient Federated Learning Framework
  by: Washington Enyinna Mbonu, et al.
  Published: (2025-03-01)