Enhancing Byzantine robustness of federated learning via tripartite adaptive authentication


Bibliographic Details
Main Authors: Xiaomeng Li, Yanjun Li, Hui Wan, Cong Wang
Format: Article
Language: English
Published: SpringerOpen 2025-05-01
Series: Journal of Big Data
Online Access: https://doi.org/10.1186/s40537-025-01165-y
Description
Summary: Federated learning (FL) is a distributed learning paradigm that enables model training while protecting user privacy. However, frequent communication between the server and clients also gives attackers opportunities to intercept or tamper with parameters, degrading the global model's performance. To enhance the robustness of FL against such attackers, we propose a framework called Byzantine-robust federated learning by adaptive tripartite authentication (BRFLATA). Specifically, BRFLATA consists of four modules: (1) an adaptive client-matching mechanism, (2) client authentication, (3) a reliable communication link, and (4) global model updating through an incentive mechanism. Through these mechanisms, BRFLATA can authenticate each client, detect potential Byzantine clients and link attackers, and mitigate their impact on the global model's performance by adjusting the clients' weights during global aggregation. We validate the effectiveness of the proposed method through extensive experiments on widely used datasets across multiple scenarios, comparing it with state-of-the-art methods.
ISSN:2196-1115
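
The abstract's core idea of down-weighting suspected Byzantine clients during global aggregation can be illustrated with a generic sketch. The code below is not the BRFLATA algorithm from the paper (which additionally involves authentication, link verification, and an incentive mechanism); it is a minimal, assumed example in which each client's update is weighted by its cosine similarity to the coordinate-wise median of all updates, so that updates pointing away from the consensus direction receive zero weight.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def coordinate_median(updates):
    """Coordinate-wise median of a list of equal-length update vectors."""
    return [sorted(col)[len(col) // 2] for col in zip(*updates)]

def robust_aggregate(updates):
    """Aggregate client updates with similarity-based weights.

    Each client's weight is its cosine similarity to the coordinate-wise
    median (a robust reference), clipped at zero and normalized, so a
    client whose update opposes the consensus contributes nothing.
    Returns the aggregated update and the per-client weights.
    """
    ref = coordinate_median(updates)
    raw = [max(0.0, cosine(u, ref)) for u in updates]
    total = sum(raw) or 1.0
    weights = [w / total for w in raw]
    dim = len(updates[0])
    agg = [sum(weights[i] * updates[i][j] for i in range(len(updates)))
           for j in range(dim)]
    return agg, weights

# Three honest clients near [1, 1] and one Byzantine client sending
# an inverted, scaled-up update: the Byzantine weight collapses to 0.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [-10.0, -10.0]]
aggregated, weights = robust_aggregate(updates)
```

The median reference is what makes the scheme robust: a plain mean of the updates would itself be skewed by the Byzantine vector, whereas the coordinate-wise median stays near the honest majority as long as fewer than half the clients are malicious.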