Exploring the Limitations of Federated Learning: A Novel Wasserstein Metric-Based Poisoning Attack on Traffic Sign Classification
Federated Learning (FL) enhances privacy but remains vulnerable to model poisoning attacks, where an adversary manipulates client models to upload *poisoned* updates during training, thereby compromising the overall FL model. Existing attack models often assume adversaries have full knowledge of the FL procedure, including server aggregation algorithms. In contrast, we consider a more practical attack scenario in which the adversary has access only to local client data and the FL model. To address this security gap, we propose a novel attack called the Wasserstein Metric-based Model Poisoning Attack (WMPA). In this approach, adversaries embed malicious updates within aggregated ones without detection, posing a significant threat to FL applications. WMPA leverages historical information from the FL process to forecast the next round’s global model as a reference. This reference model is then used to generate an adversarial local model characterized by low accuracy but minimal perturbation. We explore the use of the Wasserstein distance in place of traditional metrics such as Euclidean distance to better disguise malicious updates. Extensive experiments show that WMPA outperforms existing model poisoning attacks and can compromise robust aggregation methods. For example, in a cross-silo setting using Krum, WMPA reduces the FL model’s accuracy from 70% to 30.1%. In a cross-device setting, it reduces accuracy from 68% to 32.6%. Furthermore, we demonstrate that the Wasserstein metric is superior to other similarity metrics in capturing the underlying structure and shape of the provided distributions.
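The abstract’s key technical claim, that the Wasserstein distance captures the structure and shape of a distribution where a coordinate-wise metric such as Euclidean distance does not, can be seen in a minimal sketch. The snippet below is not the authors’ WMPA implementation; it assumes NumPy and SciPy are available, treats a flattened weight vector as a 1-D empirical distribution, and uses a random permutation as a hypothetical stand-in for a crafted update.

```python
# Minimal sketch (assumptions: NumPy/SciPy; a flattened weight vector treated
# as a 1-D empirical distribution). This is NOT the paper's WMPA attack,
# only an illustration of the metric gap it exploits.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Stand-in for a benign local update (e.g., one layer's weights, flattened).
benign = rng.normal(loc=0.0, scale=0.1, size=10_000)

# Hypothetical crafted update: element-wise very different from the benign
# one (a random permutation), yet with an identical empirical distribution.
crafted = rng.permutation(benign)

euclidean = np.linalg.norm(benign - crafted)         # large: coordinates moved
wasserstein = wasserstein_distance(benign, crafted)  # 0: same overall shape

print(f"Euclidean distance:   {euclidean:.4f}")      # on the order of 14
print(f"Wasserstein distance: {wasserstein:.4e}")    # exactly 0.0
```

This gap runs in the attacker’s favor: per the abstract, WMPA keeps its adversarial local model minimally perturbed, under the Wasserstein metric, from a forecast of the next round’s global model, so the poisoned update blends in with benign ones even under robust aggregation such as Krum.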
| Main Authors: | Suzan Almutairi (https://orcid.org/0000-0002-0275-3685), Ahmed Barnawi |
|---|---|
| Affiliation: | Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access, Vol. 13, pp. 118264–118280 |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2025.3584948 |
| Subjects: | Federated learning; poisoning attack; model security; Wasserstein metric |
| Online Access: | https://ieeexplore.ieee.org/document/11062639/ |