Rate distortion optimization for adaptive gradient quantization in federated learning

Bibliographic Details
Main Authors: Guojun Chen, Kaixuan Xie, Wenqiang Luo, Yinfei Xu, Lun Xin, Tiecheng Song, Jing Hu
Format: Article
Language: English
Published: KeAi Communications Co., Ltd. 2024-12-01
Series: Digital Communications and Networks
Online Access: http://www.sciencedirect.com/science/article/pii/S235286482400018X
Description
Summary: Federated Learning (FL) is an emerging machine learning framework designed to preserve privacy. However, the continuous updating of model parameters over uplink channels with limited throughput incurs a heavy communication overhead, which is a major challenge for FL. To address this issue, we propose an adaptive gradient quantization approach that enhances communication efficiency. Aiming to minimize the total communication cost, we consider both the correlation of gradients between local clients and the correlation of gradients between communication rounds, that is, in the spatial and temporal dimensions, respectively. The compression strategy is based on rate-distortion theory, which allows us to find an optimal quantization strategy for the gradients. To further reduce the computational complexity, we introduce a Kalman filter into the proposed approach. Finally, numerical results demonstrate the effectiveness and robustness of the proposed rate-distortion optimized adaptive gradient quantization approach, which significantly reduces communication costs compared with other quantization methods.
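
To make the two ingredients named in the summary concrete, the following is a minimal Python sketch, not the authors' implementation: it allocates quantization bits across gradient blocks by reverse water-filling on the Gaussian rate-distortion function D(R) = sigma^2 * 2^(-2R), and tracks each block's gradient variance across communication rounds with a scalar Kalman filter under an assumed random-walk model. All names (allocate_bits, KalmanVarianceTracker, quantize) and the Gaussian/random-walk assumptions are hypothetical illustration choices.

    # Illustrative sketch only; assumes Gaussian gradient blocks and a
    # random-walk model for their log-variance. Not the paper's algorithm.
    import numpy as np

    def allocate_bits(variances, total_bits, iters=60):
        """Reverse water-filling: R_i = max(0, 0.5*log2(var_i / theta)),
        with theta found by bisection so the rates sum to total_bits."""
        lo, hi = 1e-12, float(np.max(variances))
        for _ in range(iters):
            theta = 0.5 * (lo + hi)
            rates = np.maximum(0.0, 0.5 * np.log2(variances / theta))
            if rates.sum() > total_bits:
                lo = theta   # spending too many bits: raise distortion level
            else:
                hi = theta
        return rates

    class KalmanVarianceTracker:
        """Scalar Kalman filter tracking one block's log-variance across
        rounds, exploiting the temporal correlation of gradients."""
        def __init__(self, q=0.01, r=0.1):
            self.x, self.p = 0.0, 1.0   # state estimate and its uncertainty
            self.q, self.r = q, r       # process and measurement noise

        def update(self, measured_log_var):
            self.p += self.q                           # predict (random walk)
            k = self.p / (self.p + self.r)             # Kalman gain
            self.x += k * (measured_log_var - self.x)  # correct
            self.p *= (1.0 - k)
            return self.x

    def quantize(block, bits):
        """Uniform quantizer with 2^bits levels spanning the block's range."""
        levels = 2 ** max(int(round(bits)), 0)
        if levels < 2:
            return np.zeros_like(block)   # zero bits: transmit nothing
        lo, hi = float(block.min()), float(block.max())
        if hi == lo:
            return block.copy()
        step = (hi - lo) / (levels - 1)
        return lo + np.round((block - lo) / step) * step

    # Toy usage: three gradient blocks, 12 bits per coordinate in total.
    rng = np.random.default_rng(0)
    blocks = [rng.normal(0, s, 256) for s in (1.0, 0.3, 0.05)]
    trackers = [KalmanVarianceTracker() for _ in blocks]
    log_vars = np.array([t.update(np.log(b.var() + 1e-12))
                         for t, b in zip(trackers, blocks)])
    rates = allocate_bits(np.exp(log_vars), total_bits=12.0)
    quantized = [quantize(b, r) for b, r in zip(blocks, rates)]

In this toy run, high-variance blocks receive most of the bit budget while the low-variance block is quantized coarsely or dropped, which is the qualitative behavior an adaptive rate-distortion allocation aims for; the Kalman filter simply smooths the variance estimates across rounds instead of recomputing statistics from scratch.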
ISSN: 2352-8648