Heterogeneity-Aware Personalized Federated Neural Architecture Search
Federated learning (FL), which enables collaborative learning across distributed nodes, confronts a significant heterogeneity challenge, primarily including resource heterogeneity induced by different hardware platforms, and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of both resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to the server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing the training stability. Then, we build random-forest-based personalized performance predictors to enable the efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative model training for the discovered personalized models. Comprehensive experiments on CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of our HAPFNAS, compared to state-of-the-art federated NAS methods.
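To illustrate the candidate-evaluation step described in the abstract, the following is a minimal sketch of a random-forest-based performance predictor: it maps an encoded candidate architecture to a predicted accuracy for a single client, so many candidates can be ranked without training each one. The search-space layout, the one-hot encoding, and the placeholder training data are assumptions for illustration only, not the authors' implementation; only scikit-learn's RandomForestRegressor is used.

```python
# Minimal sketch (assumption, not the paper's code): a random-forest predictor
# that estimates a candidate architecture's accuracy for one client, so that
# candidates can be ranked cheaply instead of being trained one by one.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
NUM_BLOCKS, NUM_CHOICES = 8, 4  # hypothetical search space: 8 blocks, 4 operations each

def encode(arch):
    """One-hot encode the per-block operation choices of a candidate architecture."""
    x = np.zeros(NUM_BLOCKS * NUM_CHOICES)
    for block, choice in enumerate(arch):
        x[block * NUM_CHOICES + choice] = 1.0
    return x

# Placeholder training data: sampled architectures and stand-in accuracies.
# In practice these would come from evaluating subnets on a client's validation split.
train_archs = rng.integers(0, NUM_CHOICES, size=(200, NUM_BLOCKS))
train_accs = rng.uniform(0.6, 0.9, size=200)

predictor = RandomForestRegressor(n_estimators=100, random_state=0)
predictor.fit(np.stack([encode(a) for a in train_archs]), train_accs)

# Rank unseen candidates for this client by predicted accuracy.
candidates = rng.integers(0, NUM_CHOICES, size=(1000, NUM_BLOCKS))
scores = predictor.predict(np.stack([encode(a) for a in candidates]))
print("best predicted candidate:", candidates[np.argmax(scores)])
```

In a personalized setting, one such predictor would presumably be fit per client; a random forest is a natural choice for this kind of small tabular encoding because it trains quickly and yields stable rankings, which matches the abstract's goal of efficient per-client evaluation.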
| Main Authors: | An Yang, Ying Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Entropy |
| Subjects: | neural architecture search; neural network; federated learning; personalization |
| Online Access: | https://www.mdpi.com/1099-4300/27/7/759 |
| _version_ | 1849733982514577408 |
|---|---|
| author | An Yang; Ying Liu |
| author_facet | An Yang; Ying Liu |
| author_sort | An Yang |
| collection | DOAJ |
| description | Federated learning (FL), which enables collaborative learning across distributed nodes, confronts a significant heterogeneity challenge, primarily including resource heterogeneity induced by different hardware platforms, and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of both resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing the training stability. Then, we build random-forest-based personalized performance predictors to enable the efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative model training for the discovered personalized models. Comprehensive experiments on CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of our HAPFNAS, compared to state-of-the-art federated NAS methods. |
| format | Article |
| id | doaj-art-58cfc318f152402baf6dfd215f7013a1 |
| institution | DOAJ |
| issn | 1099-4300 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Entropy |
| spelling | doaj-art-58cfc318f152402baf6dfd215f7013a1; 2025-08-20T03:07:55Z; eng; MDPI AG; Entropy, 1099-4300; 2025-07-01; vol. 27, no. 7, art. 759; doi:10.3390/e27070759; Heterogeneity-Aware Personalized Federated Neural Architecture Search; An Yang, Ying Liu (College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China); https://www.mdpi.com/1099-4300/27/7/759 |
| spellingShingle | An Yang; Ying Liu; Heterogeneity-Aware Personalized Federated Neural Architecture Search; Entropy; neural architecture search; neural network; federated learning; personalization |
| title | Heterogeneity-Aware Personalized Federated Neural Architecture Search |
| title_full | Heterogeneity-Aware Personalized Federated Neural Architecture Search |
| title_fullStr | Heterogeneity-Aware Personalized Federated Neural Architecture Search |
| title_full_unstemmed | Heterogeneity-Aware Personalized Federated Neural Architecture Search |
| title_short | Heterogeneity-Aware Personalized Federated Neural Architecture Search |
| title_sort | heterogeneity aware personalized federated neural architecture search |
| topic | neural architecture search; neural network; federated learning; personalization |
| url | https://www.mdpi.com/1099-4300/27/7/759 |
| work_keys_str_mv | AT anyang heterogeneityawarepersonalizedfederatedneuralarchitecturesearch AT yingliu heterogeneityawarepersonalizedfederatedneuralarchitecturesearch |