SparseBatch: Communication-efficient Federated Learning with Partially Homomorphic Encryption

Bibliographic Details
Main Authors: Chong Wang, Jing Wang, Zheng Lou, Linghai Kong, WeiSong Tao, Yun Wang
Format: Article
Language: English
Published: Tamkang University Press 2025-01-01
Series: Journal of Applied Science and Engineering
Online Access: http://jase.tku.edu.tw/articles/jase-202508-28-08-0003
Description
Summary: Cross-silo federated learning (FL) enables collaborative model training among organizations (e.g., financial or medical institutions). It operates by aggregating local gradient updates contributed by participating clients while safeguarding the privacy of sensitive data. Industrial FL frameworks employ additively homomorphic encryption (HE) to mask local gradient updates during aggregation, guaranteeing that no individual update is revealed. However, this protection incurs significant computational and communication overhead: encryption and decryption operations occupy the majority of training time, and the bit length of a ciphertext is two orders of magnitude larger than that of the corresponding plaintext, inflating the amount of data transferred. In this paper, we present SparseBatch, a new gradient sparsification method. By designing a new general gradient correction method and adopting the Lion optimizer's gradient quantization, SparseBatch combines gradient sparsification with quantization. Experimental results show that, compared with BatchCrypt, SparseBatch reduces computation and communication overhead by 5x, with an accuracy reduction of less than 1%.
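
The abstract names three ingredients: gradient sparsification with a correction (error-feedback) step, Lion-style sign quantization, and additively homomorphic aggregation. The Python sketch below is not the paper's implementation; it is a minimal illustration of how these pieces can fit together. The class and function names, the top-k selection rule, and the use of the python-paillier library (phe) for additive HE are all illustrative assumptions.

# Minimal sketch (illustrative only, not SparseBatch itself): top-k
# sparsification with an error-feedback correction buffer, Lion-style
# sign quantization, and Paillier encryption of the surviving values.
import numpy as np
from phe import paillier  # pip install phe (python-paillier)

class SparseSignCompressor:
    """Keep the top-k gradient entries, quantize them to signs, and carry
    the discarded mass forward so later rounds correct the error."""

    def __init__(self, dim, k):
        self.k = k
        self.residual = np.zeros(dim)  # error-feedback buffer

    def compress(self, grad):
        corrected = grad + self.residual               # gradient correction
        idx = np.argpartition(np.abs(corrected), -self.k)[-self.k:]
        signs = np.sign(corrected[idx])                # Lion-style sign quantization
        self.residual = corrected.copy()               # untransmitted mass is
        self.residual[idx] = 0.0                       # kept for the next round
        return idx, signs

# --- toy two-client aggregation round ---------------------------------
pub, priv = paillier.generate_paillier_keypair(n_length=1024)
dim, k = 16, 4
clients = [SparseSignCompressor(dim, k) for _ in range(2)]

# Each client encrypts its sparse, signed update coordinate by coordinate.
encrypted_updates = []
for c in clients:
    grad = np.random.randn(dim)
    idx, signs = c.compress(grad)
    encrypted_updates.append({int(i): pub.encrypt(float(s))
                              for i, s in zip(idx, signs)})

# The server adds ciphertexts without ever seeing a plaintext update.
aggregate = {}
for upd in encrypted_updates:
    for i, ct in upd.items():
        aggregate[i] = aggregate[i] + ct if i in aggregate else ct

# Key-holding clients decrypt only the summed coordinates.
summed = {i: priv.decrypt(ct) for i, ct in aggregate.items()}
print(summed)

For clarity this sketch encrypts each surviving coordinate separately; BatchCrypt-style systems instead pack many quantized values into a single ciphertext to amortize encryption and transfer cost, which is where the reported overhead savings come from.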
ISSN: 2708-9967; 2708-9975