A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing
Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments.
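
The performance-driven aggregation described in the abstract can be pictured with a small sketch. The Python snippet below is an illustrative approximation only, not the paper's actual Statistical Performance-Aware Aggregation or Dynamic Weighted Averaging algorithms: it weights peer model updates by their reported validation accuracy and screens out accuracy outliers before averaging. The function name, the z-score test, and the 1.5 threshold are assumptions made for illustration.

```python
import numpy as np

def aggregate_updates(updates, accuracies, z_threshold=1.5):
    """Average flattened model updates, weighting each by its reported
    validation accuracy and dropping accuracy outliers (|z| > z_threshold).
    Names and thresholds here are illustrative assumptions, not the paper's method."""
    acc = np.asarray(accuracies, dtype=float)
    std = acc.std()
    # With identical accuracies there are no outliers; otherwise screen by z-score.
    keep = np.ones(acc.shape, dtype=bool) if std == 0 else np.abs(acc - acc.mean()) / std <= z_threshold
    kept_updates = [u for u, k in zip(updates, keep) if k]
    kept_acc = acc[keep]
    # Turn the surviving accuracies into normalized aggregation weights.
    weights = kept_acc / kept_acc.sum()
    return np.sum([w * u for w, u in zip(weights, kept_updates)], axis=0)

# Example: five peers, one reporting a suspiciously low accuracy that gets dropped.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=4) for _ in range(5)]
    print(aggregate_updates(updates, accuracies=[0.91, 0.90, 0.89, 0.92, 0.15]))
```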
| Main Authors: | Fateme Mazloomi, Shahram Shah Heydari, Khalil El-Khatib |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Future Internet |
| Subjects: | federated learning, multi-server, mobility, hand over, aggregation, vehicular edge computing |
| Online Access: | https://www.mdpi.com/1999-5903/17/7/315 |
| _version_ | 1849418564916740096 |
|---|---|
| author | Fateme Mazloomi; Shahram Shah Heydari; Khalil El-Khatib |
| author_facet | Fateme Mazloomi; Shahram Shah Heydari; Khalil El-Khatib |
| author_sort | Fateme Mazloomi |
| collection | DOAJ |
| description | Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments. |
| format | Article |
| id | doaj-art-b66d3c08adaa41adbfc31a3255c8c162 |
| institution | Kabale University |
| issn | 1999-5903 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Future Internet |
| spelling | doaj-art-b66d3c08adaa41adbfc31a3255c8c162; 2025-08-20T03:32:26Z; eng; MDPI AG; Future Internet; 1999-5903; 2025-07-01; vol. 17, iss. 7, art. 315; 10.3390/fi17070315; A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing; Fateme Mazloomi, Shahram Shah Heydari, Khalil El-Khatib (each: Faculty of Business and IT, University of Ontario Institute of Technology, Oshawa, ON L1G 0C5, Canada); Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments. https://www.mdpi.com/1999-5903/17/7/315; federated learning; multi-server; mobility; hand over; aggregation; vehicular edge computing |
| spellingShingle | Fateme Mazloomi; Shahram Shah Heydari; Khalil El-Khatib; A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing; Future Internet; federated learning; multi-server; mobility; hand over; aggregation; vehicular edge computing |
| title | A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing |
| title_full | A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing |
| title_fullStr | A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing |
| title_full_unstemmed | A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing |
| title_short | A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing |
| title_sort | novel multi server federated learning framework in vehicular edge computing |
| topic | federated learning; multi-server; mobility; hand over; aggregation; vehicular edge computing |
| url | https://www.mdpi.com/1999-5903/17/7/315 |
| work_keys_str_mv | AT fatememazloomi anovelmultiserverfederatedlearningframeworkinvehicularedgecomputing; AT shahramshahheydari anovelmultiserverfederatedlearningframeworkinvehicularedgecomputing; AT khalilelkhatib anovelmultiserverfederatedlearningframeworkinvehicularedgecomputing; AT fatememazloomi novelmultiserverfederatedlearningframeworkinvehicularedgecomputing; AT shahramshahheydari novelmultiserverfederatedlearningframeworkinvehicularedgecomputing; AT khalilelkhatib novelmultiserverfederatedlearningframeworkinvehicularedgecomputing |