A comprehensive analysis of model poisoning attacks in federated learning for autonomous vehicles: A benchmark study
Due to the increase in data regulations amid rising privacy concerns, the machine learning (ML) community has proposed a novel distributed training paradigm called federated learning (FL). FL enables untrusted groups of clients to train collaboratively on an FL model without the need to share private data. The rise of connected vehicles has paved the way for a new era of data-driven traffic management, but it also exposes vulnerabilities to cyber attacks that threaten safety and security. One such vulnerability arises when malicious clients upload "poisoned" updates during training, degrading the FL model's performance and potentially resulting in catastrophic outcomes. This paper presents a thorough benchmarking study designed to critically analyse and evaluate the effectiveness of Byzantine-robust aggregations as a method to counter state-of-the-art untargeted model poisoning attacks, using an Autonomous Vehicles (AV) benchmark dataset. The research objectives are: (1) to assess the vulnerability of Byzantine-robust aggregations against model poisoning attacks; (2) to evaluate the impact of model poisoning attacks under different practical scenarios involving changing vector perturbations and data distributions across diverse datasets; and (3) to understand the scale of degradation in performance and efficacy during attacks involving malicious clients. Additionally, this study tests the commonly held belief that an Independent and Identically Distributed (IID) data distribution is universally more secure than a non-IID one in different FL scenarios. To address these objectives, we conduct extensive experiments using: (1) three benchmark datasets of different sizes, sourced from two different domains, to simulate heterogeneous statistics in real-world scenarios (IID, non-IID, and imbalanced non-IID); and (2) two federated settings (cross-device and cross-silo) with realistic threat models, adversarial capabilities, and FL parameters. One of the main experimental results is that client-selection strategies in cross-device settings can offer a simple yet robust defense. Finally, conclusions and findings are set out (some of which contradict claims made in previous studies), together with recommendations for potential future directions in this critical domain.
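The attack and defense the abstract refers to can be illustrated concretely. The sketch below (Python/NumPy, written for this record rather than taken from the paper) shows how an untargeted model poisoning attack perturbs client updates in a single FL round, and why a Byzantine-robust aggregation such as a coordinate-wise median resists the perturbation where plain federated averaging does not; the client counts, perturbation scale, and function names are illustrative assumptions, not details from the study.

```python
# Minimal sketch (not the paper's code): untargeted model poisoning vs.
# a Byzantine-robust aggregation (coordinate-wise median) in one FL round.
# All numbers and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fedavg(updates):
    """Plain federated averaging of client update vectors."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Byzantine-robust aggregation: element-wise median across clients."""
    return np.median(updates, axis=0)

def poison(update, scale=10.0):
    """Untargeted attack: flip and amplify the honest update direction."""
    return -scale * update

# Simulate one round: 10 clients, 3 of them malicious.
dim = 5
honest = [rng.normal(0.0, 0.1, dim) + 1.0 for _ in range(7)]  # honest updates cluster near +1
malicious = [poison(u) for u in honest[:3]]                    # attacker perturbs copies of honest updates
updates = np.stack(honest + malicious)

print("FedAvg aggregate:           ", np.round(fedavg(updates), 3))
print("Coordinate-median aggregate:", np.round(coordinate_median(updates), 3))
# FedAvg is dragged far from +1 by the poisoned updates; the median stays close.
```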
Saved in:
| Main Authors: | Suzan Almutairi, Ahmed Barnawi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-12-01 |
| Series: | Results in Engineering |
| Subjects: | Autonomous vehicles; Federated learning; Poisoning attacks; Byzantine aggregation; Benchmark; Model security |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2590123024015494 |
| author | Suzan Almutairi; Ahmed Barnawi |
|---|---|
| affiliation | Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia |
| collection | DOAJ |
| id | doaj-art-52c072de0c8d47f6b08c992391623659 |
| format | Article |
| language | English |
| issn | 2590-1230 |
| doi | 10.1016/j.rineng.2024.103295 |
| publisher | Elsevier |
| publishDate | 2024-12-01 |
| series | Results in Engineering |
| volume | 24 |
| article number | 103295 |
| topic | Autonomous vehicles; Federated learning; Poisoning attacks; Byzantine aggregation; Benchmark; Model security |
| url | http://www.sciencedirect.com/science/article/pii/S2590123024015494 |