FedG2L: a privacy-preserving federated learning scheme base on “G2L” against poisoning attack

Bibliographic Details
Main Authors: Mengfan Xu, Xinghua Li
Format: Article
Language:English
Published: Taylor & Francis Group 2023-12-01
Series:Connection Science
Subjects:
Online Access:http://dx.doi.org/10.1080/09540091.2023.2197173
Description
Summary: Federated learning (FL) can break through the limitation of "data islands" while protecting data privacy, and has therefore attracted broad attention. However, centralised FL is vulnerable to single-point failure. While decentralised, tamper-proof blockchains can cope with this issue, it remains difficult to find a benign benchmark gradient and to eliminate poisoning attacks in the later stages of global model aggregation. To address these problems, we present a global-to-local privacy-preserving federated consensus scheme against poisoning attacks (FedG2L), which effectively reduces the influence of poisoning attacks on model accuracy. In the global aggregation stage, a gradient-similarity-based secure consensus algorithm (SecPBFT) is designed to eliminate malicious gradients; during this procedure, the data owners' gradients are not leaked. We then propose an improved ACGAN algorithm that generates local data to further update the model free of poisoning attacks. Finally, we theoretically prove the security and correctness of our scheme. Experimental results demonstrate that model accuracy improves by at least 55% over the no-defence baseline, and that the attack success rate is reduced by more than 60%.
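The abstract does not give the internals of SecPBFT, but the core idea it names — scoring client gradients by mutual similarity and discarding outliers before aggregation — can be sketched minimally. The function below is a hypothetical illustration (not the paper's algorithm): it ranks each gradient by its mean cosine similarity to the others and keeps only the most mutually consistent ones, on the assumption that poisoned gradients point away from the benign consensus.

```python
import numpy as np

def filter_gradients(grads, keep_ratio=0.5):
    """Keep the gradients most similar to their peers; drop likely-poisoned outliers.

    grads      -- list of 1-D numpy arrays (one flattened gradient per client)
    keep_ratio -- fraction of clients to retain for aggregation
    """
    # Normalise each gradient so dot products become cosine similarities.
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in grads])
    sim = G @ G.T                                   # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)                      # ignore self-similarity
    scores = sim.sum(axis=1) / (len(grads) - 1)     # mean similarity to peers
    k = max(1, int(len(grads) * keep_ratio))
    keep = np.argsort(scores)[-k:]                  # indices of top-scoring clients
    return sorted(keep.tolist())

# Nine benign gradients clustered near [1, 1], plus one sign-flipped
# (poisoned) gradient at index 9.
rng = np.random.default_rng(0)
benign = [np.array([1.0, 1.0]) + 0.05 * rng.standard_normal(2) for _ in range(9)]
poisoned = [np.array([-1.0, -1.0])]
kept = filter_gradients(benign + poisoned, keep_ratio=0.9)
print(kept)  # the poisoned gradient (index 9) is excluded
```

In the paper this filtering is embedded in a PBFT-style consensus round with privacy protection for the gradients themselves; the sketch above only shows the similarity-based outlier rejection that motivates it.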
ISSN:0954-0091
1360-0494