FedDyH: A Multi-Policy with GA Optimization Framework for Dynamic Heterogeneous Federated Learning


Bibliographic Details
Main Authors: Xuhua Zhao, Yongming Zheng, Jiaxiang Wan, Yehong Li, Donglin Zhu, Zhenyu Xu, Huijuan Lu
Format: Article
Language: English
Published: MDPI AG, 2025-03-01
Series: Biomimetics
Subjects:
Online Access: https://www.mdpi.com/2313-7673/10/3/185
Description
Summary: Federated learning (FL) is a distributed learning technique that preserves data privacy and has shown significant potential in cross-institutional image analysis. However, existing methods struggle with the inherent dynamic heterogeneity of real-world data, such as changes in cellular differentiation during disease progression or feature-distribution shifts caused by different imaging devices. This dynamic heterogeneity can cause catastrophic forgetting, degrading medical predictions across disease stages. Unlike previous federated learning studies, which have paid insufficient attention to dynamic heterogeneity, this paper proposes the FedDyH framework to address the challenge. Inspired by the adaptive regulation mechanisms of biological systems, the framework incorporates several core modules. First, it simulates intercellular information transfer through cross-client knowledge distillation, preserving local features while mitigating knowledge forgetting. Second, it introduces a dynamic regularization term whose strength is adaptively adjusted to current training conditions; much like regulatory T cells in the immune system, this mechanism balances global-model convergence against local specificity, enhancing the robustness of the global model while preventing interference from diverse client features. Finally, the framework employs a genetic algorithm (GA) that simulates biological evolution, using selection, crossover, and mutation to optimize hyperparameter configurations; this allows the model to adaptively find suitable hyperparameters in an ever-changing environment, improving both adaptability and performance. Prior to this work, few studies had explored optimization algorithms for hyperparameter tuning in federated learning.
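The summary describes a per-client objective that combines the task loss with cross-client knowledge distillation and an adaptively weighted regularization term. The record does not include the paper's equations, so the following is only an illustrative sketch: the function names, the KL-based distillation term, the proximal pull toward the global weights, and the drift-based schedule `dynamic_mu` are all assumptions for exposition, not the authors' implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    z = [l / temperature for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))

def dynamic_mu(drift, mu_min=0.01, mu_max=1.0):
    # Hypothetical adaptive schedule: larger round-to-round drift pushes the
    # regularization strength toward mu_max (not the paper's exact formula).
    return mu_min + (mu_max - mu_min) * (1.0 - math.exp(-drift))

def client_loss(ce_loss, local_logits, teacher_logits,
                w_local, w_global, lam=0.5, temperature=2.0, drift=0.0):
    """Task loss + cross-client distillation + adaptive proximal regularizer."""
    distill = kl_div(softmax(teacher_logits, temperature),
                     softmax(local_logits, temperature)) * temperature ** 2
    prox = dynamic_mu(drift) * sum((a - b) ** 2
                                   for a, b in zip(w_local, w_global))
    return ce_loss + lam * distill + prox
```

When the local model already agrees with the peer "teacher" logits and with the global weights, both extra terms vanish and the objective reduces to the plain task loss, which is the behavior a distillation-plus-proximal design aims for.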
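The GA component described above (selection, crossover, mutation over hyperparameter configurations) can be sketched generically. This is a minimal stand-alone sketch, not the paper's algorithm: the truncation selection, uniform crossover, mutation rate, and the toy fitness function standing in for validation accuracy are all assumed here for illustration.

```python
import random

def crossover(a, b):
    # Uniform crossover: each hyperparameter is taken from either parent.
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(ind, bounds, rate=0.2):
    # With probability `rate`, resample a hyperparameter within its bounds.
    out = dict(ind)
    for k, (lo, hi) in bounds.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi)
    return out

def ga_search(fitness, bounds, pop_size=20, generations=30, elite=2, seed=0):
    random.seed(seed)
    pop = [{k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)), bounds)
                    for _ in range(pop_size - elite)]
        pop = pop[:elite] + children           # elitism keeps the best configs
    return max(pop, key=fitness)

# Toy stand-in for "validation accuracy as a function of hyperparameters";
# in FL this evaluation would require federated training rounds.
bounds = {"lr": (1e-4, 1.0), "mu": (0.0, 1.0)}
fitness = lambda h: -(h["lr"] - 0.1) ** 2 - (h["mu"] - 0.3) ** 2
best = ga_search(fitness, bounds)
```

Elitism guarantees the best configuration found so far is never lost between generations, which matters when each fitness evaluation (a round of federated training) is expensive.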
Experimental results demonstrate that the FedDyH framework improves accuracy over the SOTA baseline FedDecorr by 2.59%, 0.55%, and 5.79% on the MNIST, Fashion-MNIST, and CIFAR-10 benchmark datasets, respectively. The framework effectively handles data heterogeneity in dynamic heterogeneous environments, providing an innovative path toward more stable and accurate distributed federated learning.
ISSN: 2313-7673