Resource-Efficient Personalization in Federated Learning With Closed-Form Classifiers


Bibliographic Details
Main Authors: Eros Fani, Raffaello Camoriano, Barbara Caputo, Marco Ciccone
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10946159/
Description
Summary: Statistical heterogeneity in Federated Learning (FL) often leads to client drift and biased local solutions. Prior work in the literature shows that client drift particularly affects the parameters of the classification layer, hindering both convergence and accuracy. While Personalized FL (PFL) addresses this by allowing client-specific models, it can overlook valuable global knowledge. This paper introduces Federated Recursive Ridge Regression (Fed3R), a fast and efficient method to construct a closed-form classifier that effectively incorporates global knowledge while being inherently robust to statistical heterogeneity. Fed3R leverages a pre-trained feature extractor and a recursive ridge regression formulation to achieve exact aggregation of local classifiers and recover the centralized solution. We demonstrate that Fed3R serves as a robust initialization for further fine-tuning with various FL and PFL algorithms, accelerating convergence and boosting performance. Furthermore, we propose Only Local Labels (OLL), a novel PFL technique that simplifies local classifiers by focusing only on locally relevant classes, preventing misclassifications and improving efficiency. Our empirical evaluation on real-world cross-device datasets shows that Fed3R, combined with OLL, significantly improves performance and reduces training costs in heterogeneous FL and PFL scenarios.
ISSN:2169-3536
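
The exact aggregation of local ridge classifiers that the summary describes can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: the function names, the one-hot label encoding, and the regularization value are all assumptions. Each client sends only the sufficient statistics of its ridge problem (a Gram matrix and a cross-correlation term), and because these sum exactly across clients, the server recovers the same closed-form classifier it would obtain on the pooled data.

```python
import numpy as np

def client_statistics(features, labels_onehot):
    """Local sufficient statistics; raw data never leaves the client."""
    A = features.T @ features          # (d, d) Gram matrix
    b = features.T @ labels_onehot     # (d, C) cross-correlation
    return A, b

def server_solve(stats, lam=1.0):
    """Sum client statistics and solve the ridge system in closed form."""
    A = sum(A_k for A_k, _ in stats)
    b = sum(b_k for _, b_k in stats)
    d = A.shape[0]
    # W = (sum_k A_k + lam * I)^(-1) sum_k b_k
    return np.linalg.solve(A + lam * np.eye(d), b)

# Toy check: splitting the data across two clients yields exactly the
# classifier trained centrally on all the data (heterogeneity-free by
# construction, since the statistics are additive).
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 8))                 # pre-extracted features
Y = np.eye(3)[rng.integers(0, 3, size=100)]     # one-hot labels, 3 classes
central = np.linalg.solve(Phi.T @ Phi + np.eye(8), Phi.T @ Y)
stats = [client_statistics(Phi[:60], Y[:60]),
         client_statistics(Phi[60:], Y[60:])]
federated = server_solve(stats, lam=1.0)
assert np.allclose(central, federated)
```

Because the statistics are additive, the order and grouping of client updates do not matter, which is why this style of aggregation is insensitive to how the data is partitioned.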