PP-QADMM: A Dual-Driven Perturbation and Quantized ADMM for Privacy Preserving and Communication-Efficient Federated Learning

Bibliographic Details
Main Author: Anis Elgabli
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Open Journal of the Communications Society
Online Access: https://ieeexplore.ieee.org/document/10982181/
Description
Summary: This article presents Privacy-Preserving and Quantized ADMM (PP-QADMM), a novel federated learning (FL) algorithm that is both privacy-preserving and communication-efficient, built upon the Alternating Direction Method of Multipliers (ADMM). PP-QADMM enhances privacy through three core mechanisms. First, dual variables are randomly initialized at the workers' side. Second, these dual variables are updated solely by the workers throughout the learning process. Third, only a combined representation of the primal and dual variables is transmitted to the parameter server (PS). This design prevents the PS from performing model inversion, since the individual model updates are obfuscated within the transmitted combination. The algorithm effectively merges principles from differential privacy (DP) and secure aggregation. Specifically, the dual variables act as perturbation noise, ensuring that each worker's model update satisfies $(\epsilon, \delta)$-DP. Importantly, during aggregation at the PS, this noise cancels out, enabling accurate recovery of the quantized global model while safeguarding individual contributions. PP-QADMM inherits the theoretical privacy guarantees of DP, yet with no loss in performance, and inherits the secure-aggregation capability of multi-party computation (MPC) techniques, yet without incurring additional communication or computation overhead. To further improve communication efficiency, workers quantize their updates before transmission at each iteration. The quantization scheme is designed such that the quantization error vanishes asymptotically as the iteration index $k \rightarrow \infty$.
We provide a rigorous theoretical proof of convergence, showing that PP-QADMM converges to the optimal solution for convex problems while achieving a convergence rate comparable to standard ADMM, but with significantly lower communication and energy costs, and robust privacy protection. Extensive numerical experiments on a convex linear regression task validate the effectiveness of PP-QADMM. On a publicly available dataset with 100 workers, our results show that PP-QADMM transmits only 1/16th the number of bits required by standard ADMM, while achieving a loss value of $1 \times 10^{-10}$, a level of accuracy unattainable with DP-ADMM alone due to its inherent utility-privacy trade-off.
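The noise-cancellation idea described in the summary can be illustrated with a small numerical sketch: each worker masks its update with a perturbation, the perturbations sum to zero across workers (standing in for the worker-held dual variables), and each masked update is quantized with a step size that shrinks in the iteration index k. The server's average then recovers the true aggregate up to a vanishing quantization error. The variable names, the Gaussian perturbations, and the uniform quantizer below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
num_workers, dim = 4, 3

# Hypothetical local model updates (stand-ins for the ADMM primal variables).
updates = rng.normal(size=(num_workers, dim))

# Zero-sum perturbations: each worker adds noise, but the noises sum to zero
# across workers, mimicking how the worker-held dual variables cancel at the PS.
noise = rng.normal(size=(num_workers, dim))
noise -= noise.mean(axis=0)  # enforce sum-to-zero across workers

def quantize(x, step):
    """Uniform quantization with step size `step` (per-element error <= step/2)."""
    return np.round(x / step) * step

k = 10                  # iteration index (illustrative)
step = 1.0 / (k + 1)    # step size -> 0 as k -> infinity

# Each worker transmits only a quantized, noise-masked update.
masked = quantize(updates + noise, step)

# The PS averages the masked updates; the zero-sum noise cancels, leaving only
# a quantization error bounded by the (shrinking) step size.
recovered = masked.mean(axis=0)
true_avg = updates.mean(axis=0)
print("max aggregation error:", np.abs(recovered - true_avg).max())
```

Because the per-element quantization error is at most step/2 and the noise cancels exactly in the average, the aggregation error shrinks with the step size, matching the asymptotic-vanishing property claimed for the quantizer.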
ISSN: 2644-125X