Review of communication optimization methods in federated learning

Bibliographic Details
Main Authors: YANG Zhikai, LIU Yaping, ZHANG Shuo, SUN Zhe, YAN Dingyu
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD 2024-12-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
Subjects:
Online Access:http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2024077
Description
Summary: With the development and popularization of artificial intelligence technologies, represented by deep learning, the security issues they continually expose have become a major challenge to cyberspace security. Traditional cloud-centric distributed machine learning, which trains or optimizes models by collecting data from participating parties, is susceptible to security and privacy attacks during data exchange, leading to degraded overall system efficiency or the leakage of private data. Federated learning, a distributed machine learning paradigm with privacy-protection capabilities, exchanges model parameters through frequent communication between clients and a parameter server, training a joint model without raw data ever leaving the local side. This greatly reduces the risk of private data leakage and ensures data security to a certain extent. However, as deep learning models grow larger and federated learning tasks become more complex, communication overhead increases accordingly, eventually becoming a barrier to the practical application of federated learning. Exploring communication optimization methods for federated learning has therefore become a research hotspot. This review introduces the technical background and workflow of federated learning and analyzes the sources and impacts of its communication bottlenecks. Based on the factors affecting communication efficiency, existing communication optimization methods are then systematically surveyed and analyzed along several optimization objectives: model parameter compression, model update strategies, system architecture, and communication protocols. The development trends of this research field are also presented. Finally, the open problems facing existing communication optimization methods are summarized, and future development trends and research directions are discussed.
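To illustrate the "model parameter compression" objective mentioned in the abstract, below is a minimal, hypothetical sketch (not taken from the reviewed paper) of one widely used technique: top-k sparsification of client updates combined with FedAvg-style server averaging. The function names and the simulated setup are illustrative assumptions, not APIs from any specific federated learning framework.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update.

    Clients upload the sparse update instead of the dense vector,
    reducing per-round uplink communication (a generic example of
    the model-parameter-compression objective).
    """
    flat = update.ravel()
    if k >= flat.size:
        return update.copy()
    # Indices of the k entries with largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

def fed_avg_round(global_weights, client_updates, k):
    """One communication round: each client sends a top-k sparsified
    update; the server averages them into the global model (FedAvg-style)."""
    compressed = [top_k_sparsify(u, k) for u in client_updates]
    return global_weights + np.mean(compressed, axis=0)

# Toy simulation: 3 clients, a 10-parameter model, only 3 entries sent each.
rng = np.random.default_rng(0)
w = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(3)]
w_new = fed_avg_round(w, updates, k=3)
```

In practice such sparsification is usually paired with error feedback (accumulating the discarded residual locally for the next round) so that the compressed training still converges; the sketch omits that for brevity.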
ISSN:2096-109X