Backdoor Attack to Giant Model in Fragment-Sharing Federated Learning

Bibliographic Details
Main Authors: Senmao Qi, Hao Ma, Yifei Zou, Yuan Yuan, Zhenzhen Xie, Peng Li, Xiuzhen Cheng
Format: Article
Language: English
Published: Tsinghua University Press, 2024-12-01
Series: Big Data Mining and Analytics
Subjects: federated learning (FL); giant model; backdoor attack; fragment-sharing
Online Access: https://www.sciopen.com/article/10.26599/BDMA.2024.9020035
collection DOAJ
description To efficiently train the billions of parameters in a giant model, sharing parameter fragments within the Federated Learning (FL) framework has become a popular pattern: each client trains and shares only a fraction of the parameters, extending the training of giant models to broader resource-constrained scenarios. Compared with previous works in which models are fully exchanged, the fragment-sharing pattern poses new challenges for backdoor attacks. In this paper, we investigate backdoor attacks on giant models trained in an FL system. With the help of a fine-tuning technique, we present a backdoor attack method by which malicious clients can hide a backdoor in a designated fragment that will be shared with the benign clients. Beyond this individual backdoor attack, we also present a cooperative backdoor attack method, in which each malicious client's shared fragment contains only part of the backdoor; the backdoor is injected only when a benign client receives all the fragments from the malicious clients. The latter attack is stealthier and harder to detect. Extensive experiments were conducted on the CIFAR-10 and CIFAR-100 datasets with ResNet-34 as the test model. The numerical results show that our backdoor attack methods achieve an attack success rate close to 100% within about 20 rounds of iterations.
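The abstract describes two ideas: a fragment-sharing FL round in which each client shares only its assigned slice of the parameters, and a cooperative attack in which a backdoor perturbation is split across the fragments of several malicious clients so that no single share contains the whole backdoor. A minimal toy sketch of this pattern (all sizes and names are hypothetical; plain NumPy arrays stand in for model parameters, and this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

MODEL_SIZE, NUM_CLIENTS = 12, 3            # toy "giant" model, 3 clients
step = MODEL_SIZE // NUM_CLIENTS
# Each client is assigned one contiguous, disjoint parameter fragment.
slices = [slice(i * step, (i + 1) * step) for i in range(NUM_CLIENTS)]

global_params = np.zeros(MODEL_SIZE)

# Honest round: every client trains the full model locally, but shares only
# its own fragment; the receiver splices the fragments back together.
local_models = [global_params + rng.normal(0, 0.1, MODEL_SIZE)
                for _ in range(NUM_CLIENTS)]
merged = global_params.copy()
for i, sl in enumerate(slices):
    merged[sl] = local_models[i][sl]

# Cooperative attack sketch: clients 0 and 1 are malicious. A backdoor
# perturbation `delta` spans both of their fragments; each shares only the
# piece inside its own fragment, so neither share alone carries the full
# backdoor -- it is assembled only when a benign client splices all
# fragments together.
delta = np.zeros(MODEL_SIZE)
delta[:2 * step] = 1.0                     # backdoor spans fragments 0 and 1
poisoned = merged.copy()
for i in (0, 1):
    poisoned[slices[i]] += delta[slices[i]]

# Fragments owned by the benign client (client 2) are untouched.
assert np.allclose(poisoned[2 * step:], merged[2 * step:])
```

The disjoint-slice assignment is only one plausible fragmenting scheme; the paper's actual fragment selection and the fine-tuning used to hide the backdoor inside a fragment are more involved than this splice-and-add illustration.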
id doaj-art-b261c4b64ecd4e21ae4923f6e5ccbcdc
issn 2096-0654
spelling Big Data Mining and Analytics, vol. 7, no. 4, pp. 1084-1097, 2024-12-01. Tsinghua University Press. ISSN 2096-0654. DOI: 10.26599/BDMA.2024.9020035
Affiliations:
Senmao Qi, Hao Ma, Yifei Zou, Zhenzhen Xie, and Xiuzhen Cheng: School of Computer Science and Technology, Shandong University, Qingdao 266237, China
Yuan Yuan: Shandong University-Nanyang Technological University International Joint Research Institute on Artificial Intelligence, Shandong University, Jinan 250101, China
Peng Li: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 9658580, Japan
topic federated learning (fl)
giant model
backdoor attack
fragment-sharing