Knowledge distillation in federated learning: a comprehensive survey
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-07-01 |
| Series: | Discover Computing |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s10791-025-09657-4 |
| Summary: | Abstract: Federated Learning (FL) has recently emerged as a promising approach for training machine learning models in a distributed manner without central data storage. However, the inherent variety of, and discrepancies in, the data contributed by the many FL participants can be a substantial obstacle when aggregating information. To address this problem, researchers have proposed various solutions, one of which is knowledge distillation (KD). KD transfers knowledge from a larger, more accurate model to a smaller model, thereby enhancing the smaller model's performance. This study provides a detailed examination of the effectiveness of KD in addressing the challenges posed by FL. We comprehensively review existing research, emphasize the benefits and limitations of these techniques in FL, and discuss the numerous open challenges and research questions in this field. |
| ISSN: | 2948-2992 |
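
For readers unfamiliar with the technique the summary describes, the following is a minimal, generic sketch of a knowledge-distillation loss (soft-label transfer from a teacher model to a student model), assuming a standard PyTorch setup. It is not drawn from the surveyed article; the function name, the temperature, and the loss weighting `alpha` are all illustrative assumptions.

```python
# Illustrative sketch of a knowledge-distillation loss (teacher -> student
# transfer via softened logits). Not taken from the surveyed paper; the
# temperature and alpha values here are assumed, not prescribed.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend hard-label cross-entropy with a softened teacher-matching term.

    Softening both sets of logits by `temperature` lets the student learn
    the teacher's relative class probabilities, not just its top choice;
    `alpha` balances the two objectives (an assumed weighting).
    """
    # Standard supervised loss against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-softened distributions; F.kl_div
    # expects log-probabilities as input and probabilities as target.
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

In an FL setting, a sketch like this would typically be applied per client, with the "teacher" role played by a larger server-side model or an ensemble of peer models; the survey itself discusses the many variants of this idea.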