FL-Joint: joint aligning features and labels in federated learning for data heterogeneity

Bibliographic Details
Main Authors: Wenxin Chen, Jinrui Zhang, Deyu Zhang
Format: Article
Language: English
Published: Springer 2024-11-01
Series: Complex & Intelligent Systems
Subjects:
Online Access: https://doi.org/10.1007/s40747-024-01636-4
Description
Summary: Abstract Federated learning is a distributed machine learning paradigm that trains a shared model using data from various clients, but it faces a core challenge: data heterogeneity arising from diverse client settings and environments. Existing methods typically focus on mitigating weight divergence and enhancing aggregation strategies, overlooking the mixed skew in label and feature distributions prevalent in real-world data. To address this, we present FL-Joint, a federated learning framework that aligns label and feature distributions using auxiliary loss functions. The framework uses a class-balanced classifier as the local model and aligns label and feature distributions locally through auxiliary loss functions based on class-conditional information and pseudo-labels. This alignment drives client feature distributions to converge toward a shared feature space, refining decision boundaries and boosting the global model's generalization ability. Extensive experiments across diverse datasets and heterogeneous data settings show that our method significantly improves accuracy and convergence speed compared to baseline approaches.
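
The abstract does not give FL-Joint's concrete loss formulation, but a minimal sketch can illustrate the kind of client-side objective it describes: a class-balanced classification loss combined with an auxiliary term that pulls confidently pseudo-labeled features toward shared, class-conditional targets. Everything below (the function names, the use of global class prototypes, the confidence threshold, and align_weight) is an illustrative assumption, not the paper's actual method.

import torch
import torch.nn.functional as F


def class_balanced_ce(logits, labels, class_counts, beta=0.999):
    # Re-weight cross-entropy by the "effective number" of samples per class
    # so that majority classes do not dominate the local update.
    effective_num = 1.0 - torch.pow(beta, class_counts.float())
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(class_counts)
    return F.cross_entropy(logits, labels, weight=weights)


def feature_alignment_loss(features, logits, global_prototypes, threshold=0.9):
    # Assign pseudo-labels from confident predictions and pull the matching
    # features toward the corresponding shared class prototype (one possible
    # reading of "class-conditional" alignment toward a shared feature space).
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf >= threshold
    if mask.sum() == 0:
        return features.new_zeros(())
    targets = global_prototypes[pseudo[mask]]
    return F.mse_loss(features[mask], targets)


def client_loss(features, logits, labels, class_counts,
                global_prototypes, align_weight=0.5):
    # Total local objective: class-balanced supervision plus the auxiliary
    # feature-alignment term, weighted by a hypothetical align_weight.
    return (class_balanced_ce(logits, labels, class_counts)
            + align_weight * feature_alignment_loss(features, logits,
                                                    global_prototypes))


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, c = 32, 16, 10
    feats = torch.randn(n, d)          # local feature-extractor outputs
    logits = torch.randn(n, c)         # local classifier outputs
    labels = torch.randint(0, c, (n,))
    counts = torch.randint(1, 100, (c,))   # per-class sample counts on this client
    protos = torch.randn(c, d)             # shared class prototypes (assumed)
    print(client_loss(feats, logits, labels, counts, protos))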
ISSN:2199-4536
2198-6053