Resource Allocation for Federated Learning With Highly Distorted Model

Bibliographic Details
Main Authors: Ryu Junewoo, Nguyen Xuan Tung, Minh-Duong Nguyen, Quang-Vinh Do, Won-Joo Hwang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10946888/
Description
Summary: Information loss emerges and escalates when the information bottleneck of a deep encryption model surpasses the entropy of the data, reducing the efficiency of data reconstruction at the decoder (i.e., under lossy compression and high data encryption rates). Consequently, existing communication-efficient federated learning (FL) approaches (e.g., model quantization, data sparsification, and model compression) incur a considerable trade-off between communication efficiency and global convergence rate when an extreme encryption rate is applied. Nonetheless, the trade-off becomes less severe as the FL network expands. Exploiting this fact, we formulate an optimization problem for encryption-aided FL that captures the relationship between the distortion rate, the number of participating Internet-of-Things (IoT) devices, and the convergence rate. The formulated problem jointly optimizes energy efficiency and FL performance under various model encryption techniques. Our theoretical analysis then shows that by actively controlling the number of participating IoT devices, we can avoid training divergence in encryption-assisted FL while maintaining communication efficiency.
ISSN: 2169-3536
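
The summary's key observation, that the distortion trade-off eases as more IoT devices participate, can be illustrated with a minimal sketch. It assumes independent, zero-mean per-device distortion whose averaged variance shrinks roughly as 1/K over K participating devices; this is an illustrative model, not the convergence bound derived in the article, and the function name and parameters below are hypothetical.

    import math

    def min_participating_devices(distortion_var: float, noise_budget: float) -> int:
        # Toy rule: averaging K independent model updates, each distorted with
        # variance distortion_var, leaves residual variance distortion_var / K.
        # Return the smallest K that keeps that residual within noise_budget.
        # (Illustrative assumption, not the article's derived convergence bound.)
        if noise_budget <= 0:
            raise ValueError("noise_budget must be positive")
        return max(1, math.ceil(distortion_var / noise_budget))

    # Heavier distortion (e.g., a more aggressive encryption/compression rate)
    # calls for scheduling more IoT devices per round to keep training stable.
    for sigma2 in (0.5, 2.0, 8.0):
        print(f"distortion variance {sigma2}: schedule >= "
              f"{min_participating_devices(sigma2, noise_budget=0.25)} devices")

In this toy model the required device count grows linearly with the distortion variance, consistent with the summary's claim that a larger FL network can tolerate a more aggressive encryption rate without diverging.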