DFL topology optimization based on peer weighting mechanism and graph neural network in digital twin platform
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-04-01 |
| Series: | Complex & Intelligent Systems |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s40747-025-01887-9 |
| Summary: | Decentralized federated learning (DFL) represents a distributed learning framework where participating nodes independently train local models and exchange model updates with proximate peers, circumventing the reliance on a centralized orchestrator. This paradigm effectively mitigates server-induced bottlenecks and eliminates single points of failure, which are inherent limitations of centralized federated learning architectures. However, DFL encounters significant challenges in attaining global model convergence due to inherent statistical heterogeneity across nodes and the dynamic nature of network topologies. In this paper, we present, for the first time, a topology optimization framework for DFL that integrates a peer weighting mechanism with graph neural networks (GNNs) within a digital twin platform. The proposed approach leverages local model performance metrics and training latency as input factors to dynamically construct an optimized topology that balances computational efficiency and model performance. Specifically, we employ Particle Swarm Optimization to derive node-specific peer weight matrices and utilize a GNN to refine the underlying mesh topology based on these weights. Comprehensive experimental analyses conducted on benchmark datasets demonstrate the superiority of the proposed framework in achieving accelerated convergence and enhanced accuracy across diverse nodes. Additionally, comparative evaluations under IID and non-IID data distributions substantiate the robustness and adaptability of the approach in heterogeneous learning environments, underscoring its potential to advance decentralized learning paradigms. |
| ISSN: | 2199-4536; 2198-6053 |
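
The summary describes a two-step pipeline: Particle Swarm Optimization derives node-specific peer weight matrices from local model performance and training latency, and a GNN then refines the mesh topology based on those weights. The sketch below illustrates the first step for a single node. The fitness function, the accuracy/latency trade-off parameter `alpha`, and all PSO hyperparameters are illustrative assumptions, not the authors' published formulation.

```python
# Minimal sketch of the peer-weighting step, assuming PSO searches over a
# single node's peer-weight vector and that fitness rewards accurate peers
# while penalizing slow ones (hypothetical objective; peer_lat is assumed
# normalized to [0, 1]).
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, peer_acc, peer_lat, alpha=0.7):
    """Score a candidate weight vector: favor accurate peers, penalize latency."""
    p = weights / weights.sum()          # normalize to a mixing distribution
    return alpha * (p @ peer_acc) - (1 - alpha) * (p @ peer_lat)

def pso_peer_weights(peer_acc, peer_lat, n_particles=30, n_iters=100,
                     inertia=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over one node's peer-weight vector."""
    dim = len(peer_acc)
    pos = rng.uniform(1e-3, 1.0, size=(n_particles, dim))  # candidate weights
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, peer_acc, peer_lat) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-3, 1.0)  # keep weights in a valid range
        vals = np.array([fitness(p, peer_acc, peer_lat) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest / gbest.sum()

# Toy example: one node weighting 5 peers with made-up accuracies and latencies.
acc = np.array([0.81, 0.74, 0.90, 0.66, 0.85])
lat = np.array([0.20, 0.55, 0.90, 0.10, 0.40])
print(pso_peer_weights(acc, lat))
```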
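
A second sketch, under the same caveats, shows how a small GNN could score candidate links of the mesh, taking per-node metrics as features and the PSO-derived peer weights as edge weights. The two-layer GCN, the endpoint-similarity edge scorer, and the 0.5 keep-threshold are assumptions for illustration; the record does not specify the paper's architecture, and the model is left untrained here purely to demonstrate the data flow.

```python
# Hypothetical topology refiner: GCN message passing weighted by peer
# weights, then a keep-probability per directed candidate link.
import torch
from torch_geometric.nn import GCNConv

class TopologyRefiner(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index, edge_weight):
        # Message passing weighted by the PSO-derived peer weights.
        h = self.conv1(x, edge_index, edge_weight).relu()
        h = self.conv2(h, edge_index, edge_weight)
        src, dst = edge_index
        # Keep-probability of each link from endpoint-embedding similarity.
        return torch.sigmoid((h[src] * h[dst]).sum(dim=-1))

# Toy mesh: 4 nodes, features = [local accuracy, training latency].
x = torch.tensor([[0.81, 0.20], [0.74, 0.55], [0.90, 0.90], [0.66, 0.10]])
edge_index = torch.tensor([[0, 0, 1, 2, 3, 1],    # candidate links: src row,
                           [1, 2, 2, 3, 0, 3]])   # dst row
edge_weight = torch.tensor([0.30, 0.90, 0.50, 0.70, 0.40, 0.60])

model = TopologyRefiner(in_dim=2)
with torch.no_grad():
    scores = model(x, edge_index, edge_weight)
refined = edge_index[:, scores > 0.5]  # prune low-scoring links
print(refined)
```

In the paper's setting, such a refiner would presumably be trained inside the digital twin against convergence and latency objectives before the refined topology is pushed to the physical nodes.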