SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption
Cross-silo federated learning (FL) enables collaborative model training among various organizations (e.g., financial or medical). It operates by aggregating local gradient updates contributed by participating clients, all while safeguarding the privacy of sensitive data. Industrial FL frameworks employ additively homomorphic encryption (HE) to mask local gradient updates during aggregation, guaranteeing that no individual update is revealed. However, this protection incurs significant computational and communication overhead: encryption and decryption dominate training time, and the bit length of a ciphertext is two orders of magnitude larger than that of the plaintext, inflating the volume of transferred data. This paper presents SparseBatch, a new gradient sparsification method. By designing a general gradient correction method and adopting the Lion optimizer's gradient quantization method, SparseBatch combines gradient sparsification and quantization. Experimental results show that, compared with BatchCrypt, SparseBatch reduces computation and communication overhead by 5×, with an accuracy reduction of less than 1%.
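As background to the aggregation scheme the abstract describes, additively homomorphic encryption lets the aggregator sum ciphertexts without decrypting them. The sketch below is a toy Paillier cryptosystem (a common additively homomorphic scheme; the paper's exact scheme is not specified in this record), with deliberately tiny keys, only to illustrate why ciphertext aggregation works and why ciphertexts are so much larger than plaintexts:

```python
import math
import random

# Toy Paillier cryptosystem illustrating additively homomorphic aggregation.
# Key sizes here are far too small for real use (production keys are >= 2048
# bits); this only sketches why a server can sum gradients it cannot read.

def _rand_prime(bits):
    # Fermat-style probable-prime search; adequate for a toy example.
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11, 13, 17)):
            return p

def keygen(bits=64):
    p = _rand_prime(bits)
    q = _rand_prime(bits)
    while q == p:
        q = _rand_prime(bits)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)            # valid because we fix g = n + 1
    return n, (n, lam, mu)          # public key, private key

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(2, n)      # fresh randomness -> semantic security
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (1 + m * n) % n2 * pow(r, n, n2) % n2   # g^m * r^n mod n^2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

def add_ciphertexts(n, c1, c2):
    # Multiplying ciphertexts adds the underlying plaintexts: the server
    # aggregates encrypted gradient updates without ever decrypting them.
    return c1 * c2 % (n * n)
```

For example, two clients can each encrypt an integer-encoded gradient entry, the server multiplies the ciphertexts, and only the key holder recovers the sum; note the ciphertext's bit length dwarfs the roughly 10-bit plaintexts, which is the communication blow-up the abstract describes.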
Saved in:

Main Authors: | Chong Wang, Jing Wang, Zheng Lou, Linghai Kong, WeiSong Tao, Yun Wang |
---|---|
Format: | Article |
Language: | English |
Published: | Tamkang University Press, 2025-01-01 |
Series: | Journal of Applied Science and Engineering |
Subjects: | homomorphic encryption; federated learning; gradient sparsification; gradient quantization; lion optimizer |
Online Access: | http://jase.tku.edu.tw/articles/jase-202508-28-08-0003 |
_version_ | 1841556035871965184 |
---|---|
author | Chong Wang; Jing Wang; Zheng Lou; Linghai Kong; WeiSong Tao; Yun Wang |
author_facet | Chong Wang; Jing Wang; Zheng Lou; Linghai Kong; WeiSong Tao; Yun Wang |
author_sort | Chong Wang |
collection | DOAJ |
description | Cross-silo federated learning (FL) enables collaborative model training among various organizations (e.g., financial or medical). It operates by aggregating local gradient updates contributed by participating clients, all while safeguarding the privacy of sensitive data. Industrial FL frameworks employ additively homomorphic encryption (HE) to mask local gradient updates during aggregation, guaranteeing that no individual update is revealed. However, this protection incurs significant computational and communication overhead: encryption and decryption dominate training time, and the bit length of a ciphertext is two orders of magnitude larger than that of the plaintext, inflating the volume of transferred data. This paper presents SparseBatch, a new gradient sparsification method. By designing a general gradient correction method and adopting the Lion optimizer's gradient quantization method, SparseBatch combines gradient sparsification and quantization. Experimental results show that, compared with BatchCrypt, SparseBatch reduces computation and communication overhead by 5×, with an accuracy reduction of less than 1%. |
format | Article |
id | doaj-art-5f18fc7523f14459a65704b154c0d25f |
institution | Kabale University |
issn | 2708-9967 2708-9975 |
language | English |
publishDate | 2025-01-01 |
publisher | Tamkang University Press |
record_format | Article |
series | Journal of Applied Science and Engineering |
spelling | doaj-art-5f18fc7523f14459a65704b154c0d25f; 2025-01-07T14:29:45Z; eng; Tamkang University Press; Journal of Applied Science and Engineering; ISSN 2708-9967, 2708-9975; 2025-01-01; vol. 28, no. 8, pp. 1645–1656; DOI 10.6180/jase.202508_28(8).0003; SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption; Chong Wang (State Grid Jiangsu Electric Power Co. LTD, Nanjing, 210024, China); Jing Wang (School of Computer Science and Technology, Southeast University, Nanjing, 211189, China); Zheng Lou (State Grid Jiangsu Electric Power Co. LTD, Nanjing, 210024, China); Linghai Kong (School of Computer Science and Technology, Southeast University, Nanjing, 211189, China); WeiSong Tao (State Grid Jiangsu Electric Power Co. LTD, Nanjing, 210024, China); Yun Wang (School of Computer Science and Technology, Southeast University, Nanjing, 211189, China); Abstract: Cross-silo federated learning (FL) enables collaborative model training among various organizations (e.g., financial or medical). It operates by aggregating local gradient updates contributed by participating clients, all while safeguarding the privacy of sensitive data. Industrial FL frameworks employ additively homomorphic encryption (HE) to mask local gradient updates during aggregation, guaranteeing that no individual update is revealed. However, this protection incurs significant computational and communication overhead: encryption and decryption dominate training time, and the bit length of a ciphertext is two orders of magnitude larger than that of the plaintext, inflating the volume of transferred data. This paper presents SparseBatch, a new gradient sparsification method. By designing a general gradient correction method and adopting the Lion optimizer's gradient quantization method, SparseBatch combines gradient sparsification and quantization. Experimental results show that, compared with BatchCrypt, SparseBatch reduces computation and communication overhead by 5×, with an accuracy reduction of less than 1%. http://jase.tku.edu.tw/articles/jase-202508-28-08-0003; Keywords: homomorphic encryption, federated learning, gradient sparsification, gradient quantization, lion optimizer |
spellingShingle | Chong Wang; Jing Wang; Zheng Lou; Linghai Kong; WeiSong Tao; Yun Wang; SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption; Journal of Applied Science and Engineering; homomorphic encryption; federated learning; gradient sparsification; gradient quantization; lion optimizer |
title | SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption |
title_full | SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption |
title_fullStr | SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption |
title_full_unstemmed | SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption |
title_short | SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption |
title_sort | sparsebatch communication efficient federated learning with partially homomorphic encryption |
topic | homomorphic encryption; federated learning; gradient sparsification; gradient quantization; lion optimizer |
url | http://jase.tku.edu.tw/articles/jase-202508-28-08-0003 |
work_keys_str_mv | AT chongwang sparsebatchcommunicationefficientfederatedlearningwithpartiallyhomomorphicencryption AT jingwang sparsebatchcommunicationefficientfederatedlearningwithpartiallyhomomorphicencryption AT zhenglou sparsebatchcommunicationefficientfederatedlearningwithpartiallyhomomorphicencryption AT linghaikong sparsebatchcommunicationefficientfederatedlearningwithpartiallyhomomorphicencryption AT weisongtao sparsebatchcommunicationefficientfederatedlearningwithpartiallyhomomorphicencryption AT yunwang sparsebatchcommunicationefficientfederatedlearningwithpartiallyhomomorphicencryption |
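The abstract names three ingredients — a gradient correction step, gradient sparsification, and Lion-style (sign-based) quantization. A generic client-side step combining them can be sketched as below. This is only an illustration under assumptions of my own (the function name `sparse_sign_step`, the `k_frac` parameter, the error-feedback residual, and the single shared magnitude per update are all invented here for concreteness), not the paper's actual SparseBatch algorithm:

```python
import numpy as np

def sparse_sign_step(grad, residual, k_frac=0.1):
    """One client-side step: error feedback + top-k sparsification + sign quantization.

    Generic sketch of the ingredients the abstract names (gradient correction,
    Lion-style sign quantization, sparsification); the paper's exact algorithm
    may differ.
    """
    corrected = grad + residual                    # fold in past quantization error
    k = max(1, int(k_frac * corrected.size))
    flat = corrected.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
    values = np.sign(flat[idx]).astype(np.int8)    # Lion-style: transmit only signs
    scale = np.abs(flat[idx]).mean()               # one shared magnitude per update
    sent = np.zeros_like(flat)
    sent[idx] = values * scale                     # what the server effectively receives
    new_residual = (flat - sent).reshape(grad.shape)  # keep untransmitted mass locally
    return idx, values, scale, new_residual
```

Only `idx`, the 1-bit `values`, and the scalar `scale` would need to be encrypted and transmitted, which is how sparsification plus sign quantization shrinks both the ciphertext count and the payload; the residual is carried into the next round so the dropped coordinates are not lost.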