Decentralized Federated Learning with Prototype Exchange

Bibliographic Details
Main Authors: Lu Qi, Haoze Chen, Hongliang Zou, Shaohua Chen, Xiaoying Zhang, Hongyan Chen
Format: Article
Language: English
Published: MDPI AG 2025-01-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/13/2/237
Description
Summary: As AI applications become increasingly integrated into daily life, protecting user privacy while enabling collaborative model training has become a crucial challenge, especially in decentralized edge computing environments. Traditional federated learning (FL) approaches, which rely on centralized model aggregation, struggle in such settings due to bandwidth limitations, data heterogeneity, and varying device capabilities among edge nodes. To address these issues, we propose PearFL, a decentralized FL framework that enhances collaboration and model generalization by introducing prototype exchange mechanisms. PearFL allows each client to share lightweight prototype information with its neighbors, minimizing communication overhead and improving model consistency across distributed devices. Experimental evaluations on benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100, demonstrate that PearFL achieves superior communication efficiency, convergence speed, and accuracy compared to conventional FL methods. These results confirm PearFL’s efficacy as a scalable solution for decentralized learning in heterogeneous and resource-constrained environments.
ISSN: 2227-7390
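
The abstract describes the prototype exchange mechanism only at a high level. The sketch below is a minimal illustration, not the paper's actual algorithm: it assumes prototypes are per-class mean feature embeddings, and that each client simply averages its own prototypes with those received from neighbors. The function names and the averaging rule are illustrative assumptions.

```python
# Illustrative sketch of prototype computation and neighbor-level aggregation.
# Assumption: a "prototype" is the mean feature embedding of each class held
# locally; the averaging rule below is a placeholder, not PearFL's method.
import numpy as np


def compute_prototypes(features: np.ndarray, labels: np.ndarray) -> dict[int, np.ndarray]:
    """Return one prototype (mean embedding) per class present on this client."""
    return {int(c): features[labels == c].mean(axis=0) for c in np.unique(labels)}


def aggregate_neighbor_prototypes(local: dict[int, np.ndarray],
                                  neighbors: list[dict[int, np.ndarray]]) -> dict[int, np.ndarray]:
    """Average local prototypes with neighbor prototypes, class by class."""
    merged: dict[int, list[np.ndarray]] = {c: [p] for c, p in local.items()}
    for proto_set in neighbors:
        for c, p in proto_set.items():
            merged.setdefault(c, []).append(p)
    return {c: np.mean(protos, axis=0) for c, protos in merged.items()}


# Toy usage: two clients with 8-dimensional embeddings and 3 classes.
rng = np.random.default_rng(0)
feats_a, labels_a = rng.normal(size=(20, 8)), rng.integers(0, 3, size=20)
feats_b, labels_b = rng.normal(size=(20, 8)), rng.integers(0, 3, size=20)
protos_a = compute_prototypes(feats_a, labels_a)
protos_b = compute_prototypes(feats_b, labels_b)
aggregated_a = aggregate_neighbor_prototypes(protos_a, [protos_b])
print({c: p.shape for c, p in aggregated_a.items()})
```

Because each prototype is a single embedding vector per class rather than a full model update, exchanging prototypes with neighbors is far cheaper than exchanging gradients or weights, which is the communication-efficiency argument the abstract makes.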