Quantization-based chained privacy-preserving federated learning
| Main Authors: | Ya Liu, Shumin Wu, Yibo Li, Fengyu Zhao, Yanli Ren |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-05-01 |
| Series: | Scientific Reports |
| Subjects: | Federated learning (FL); Quantization; Privacy-preserving; Lightweight |
| Online Access: | https://doi.org/10.1038/s41598-025-01420-5 |
| author | Ya Liu; Shumin Wu; Yibo Li; Fengyu Zhao; Yanli Ren |
|---|---|
| collection | DOAJ |
| description | Abstract Federated Learning (FL) is an advanced distributed machine learning framework that is crucial for protecting data privacy and security. By enabling multiple participants to collaboratively train models while keeping their data local, FL effectively mitigates the risks associated with centralized storage and sharing of raw data. However, traditional FL schemes face significant challenges in communication efficiency, computational cost, and privacy preservation; in edge computing scenarios, for instance, their communication and computational overhead is often too high for real-time applications. This paper proposes an innovative federated learning framework, Q-Chain FL, which integrates quantization compression techniques into a chained FL architecture. In Q-Chain FL, model parameter differences are efficiently compressed and transmitted at the user nodes and seamlessly decompressed and aggregated at the server node. Experiments on several publicly available datasets, including MNIST, CIFAR-10, and CelebA, demonstrate that Q-Chain FL achieves low communication and computational overhead, fast convergence, and high security. Compared to traditional FedAvg and Chain-PPFL, Q-Chain FL reduces communication overhead by approximately 62.5% and 44.7%, respectively. These results underscore the robustness and adaptability of Q-Chain FL across datasets and real-world learning scenarios. (An illustrative sketch of the quantized parameter-difference exchange appears below the record.) |
| format | Article |
| id | doaj-art-b1bf0cdc783445a99b5aa05ae952a987 |
| institution | OA Journals |
| issn | 2045-2322 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| affiliations | Ya Liu, Shumin Wu, Yibo Li: Department of Computer Science and Engineering, University of Shanghai for Science and Technology; Fengyu Zhao: Department of Information and Intelligence Engineering, Shanghai Publishing and Printing College; Yanli Ren: School of Communication and Information Engineering, Shanghai University |
| title | Quantization-based chained privacy-preserving federated learning |
| topic | Federated learning (FL); Quantization; Privacy-preserving; Lightweight |
| url | https://doi.org/10.1038/s41598-025-01420-5 |
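The abstract describes the core mechanism only at a high level: user nodes compress and transmit model parameter differences via quantization, and the server node decompresses and aggregates them. The sketch below is a minimal illustration of that idea, assuming a generic uniform 8-bit quantizer, a toy chain of three clients, and FedAvg-style averaging; the function names, bit-width, and omission of the chained masking protocol are illustrative assumptions, not the paper's actual Q-Chain FL implementation.

```python
# Illustrative sketch only: uniform 8-bit quantization of parameter differences,
# with server-side dequantization and averaging. Not the paper's exact scheme.
import numpy as np

def quantize_diff(diff: np.ndarray, num_bits: int = 8):
    """Uniformly quantize a parameter difference to signed integer codes."""
    qmax = 2 ** (num_bits - 1) - 1               # e.g. 127 for 8 bits
    max_abs = float(np.max(np.abs(diff)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    codes = np.round(diff / scale).astype(np.int8)
    return codes, scale

def dequantize_diff(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float difference from the integer codes."""
    return codes.astype(np.float32) * scale

# Toy round with three chained clients (hypothetical setup).
rng = np.random.default_rng(0)
global_model = rng.normal(size=1000).astype(np.float32)

received = []                                    # decompressed differences at the server
for _ in range(3):
    # Each client trains locally (simulated here by a small perturbation) ...
    local_model = global_model + rng.normal(scale=0.01, size=1000).astype(np.float32)
    # ... and transmits only the quantized parameter difference.
    diff = local_model - global_model
    codes, scale = quantize_diff(diff)           # int8 codes: ~4x smaller than float32
    received.append(dequantize_diff(codes, scale))

# Server side: decompress (done above) and aggregate, FedAvg-style.
global_model = global_model + np.mean(received, axis=0)
```

Per-tensor scaling keeps the example compact; a full scheme would also need to specify how the scale is transmitted and how quantization error interacts with the chained privacy mechanism inherited from Chain-PPFL.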