FedG2L: a privacy-preserving federated learning scheme base on “G2L” against poisoning attack


Bibliographic Details
Main Authors: Mengfan Xu, Xinghua Li
Format: Article
Language:English
Published: Taylor & Francis Group 2023-12-01
Series:Connection Science
Subjects:
Online Access:http://dx.doi.org/10.1080/09540091.2023.2197173
author Mengfan Xu
Xinghua Li
collection DOAJ
description Federated learning (FL) can break through the limitation of "data islands" while protecting data privacy, which has been a broad concern. However, centralised FL is vulnerable to single-point failure. While decentralised and tamper-proof blockchains can cope with this issue, it is difficult to find a benign benchmark gradient and to eliminate poisoning attacks in the later stage of global model aggregation. To address these problems, we present a global-to-local privacy-preserving federated consensus scheme against poisoning attacks (FedG2L), which effectively reduces the influence of poisoning attacks on model accuracy. In the global aggregation stage, a gradient-similarity-based secure consensus algorithm (SecPBFT) is designed to eliminate malicious gradients; during this procedure, the gradients of the data owners are not leaked. Then, we propose an improved ACGAN algorithm that generates local data to further update the model free of poisoning attacks. Finally, we theoretically prove the security and correctness of our scheme. Experimental results demonstrate that model accuracy improves by at least 55% over the no-defence scheme, and the attack success rate is reduced by more than 60%.
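The gradient-similarity filtering idea described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's actual SecPBFT protocol (which runs as a secure consensus among blockchain nodes without revealing individual gradients); the `filter_gradients` helper, the cosine-similarity threshold, and the coordinate-wise median reference are all assumptions made for illustration.

```python
import numpy as np

def filter_gradients(grads, threshold=0.5):
    """Keep gradients whose cosine similarity to the coordinate-wise
    median gradient exceeds `threshold`; average the survivors.

    Illustrative sketch of similarity-based poisoning defence,
    not the SecPBFT consensus algorithm itself.
    """
    median = np.median(np.stack(grads), axis=0)  # robust reference gradient
    kept = []
    for g in grads:
        denom = np.linalg.norm(g) * np.linalg.norm(median)
        sim = float(g @ median / denom) if denom else 0.0
        if sim >= threshold:  # dissimilar gradients are treated as malicious
            kept.append(g)
    # Aggregate the surviving gradients by simple averaging
    return np.mean(kept, axis=0) if kept else median

# Example: three benign gradients and one sign-flipped (poisoned) gradient
benign = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([1.1, 0.9])]
poisoned = [np.array([-1.0, -1.0])]
agg = filter_gradients(benign + poisoned)  # poisoned update is discarded
```

In this toy run the flipped gradient has cosine similarity near -1 to the median and is dropped, so the aggregate equals the mean of the three benign updates.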
format Article
id doaj-art-530177bad03b41febfa745dae2af24d8
institution Kabale University
issn 0954-0091
1360-0494
language English
publishDate 2023-12-01
publisher Taylor & Francis Group
record_format Article
series Connection Science
spelling Connection Science, vol. 35, no. 1, 2023-12-01. Taylor & Francis Group. ISSN 0954-0091, 1360-0494. doi:10.1080/09540091.2023.2197173. Mengfan Xu (Shaanxi Normal University); Xinghua Li (Xidian University).
title FedG2L: a privacy-preserving federated learning scheme base on “G2L” against poisoning attack
topic privacy-preserving
federated learning
poisoning attacks
blockchain
generative adversarial networks
url http://dx.doi.org/10.1080/09540091.2023.2197173