A defense method against multi-label poisoning attacks in federated learning

Bibliographic Details
Main Authors: Wei Ma, Qihang Zhao, Wenjun Tian
Author Affiliation: School of Information Engineering, North China University of Water Resources and Electric Power (all three authors)
Format: Article
Language: English
Published: Nature Portfolio, 2025-07-01
Series: Scientific Reports
ISSN: 2045-2322
Collection: DOAJ
Online Access: https://doi.org/10.1038/s41598-025-09672-x
Description
Abstract: Federated learning is a distributed machine learning framework that allows multiple parties to collaboratively train models without sharing raw data. While it enhances data privacy, it is vulnerable to malicious attacks, especially data poisoning attacks such as label flipping. Traditional defense mechanisms perform poorly against these complex and diverse attacks, particularly multi-label flipping attacks. In this paper, we propose a defense method against multi-label flipping attacks. The proposed method extracts gradients from the neurons in the output layer and applies clustering analysis, using a combination of metrics, to distinguish between benign and malicious participants. It effectively identifies and filters out malicious updates, demonstrating strong robustness against multi-label flipping attacks. Experimental results show that this method outperforms existing defenses in both accuracy and robustness across multiple datasets, including MNIST, FashionMNIST, NSL-KDD, and CICIDS-2017, especially when faced with a high proportion of attackers and varied attack scenarios.
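The abstract describes the defense only at a high level (extract output-layer gradients, cluster, filter). The paper's actual metric combination and clustering procedure are not reproduced here; as a rough, self-contained illustration of the general idea, the Python sketch below clusters clients' output-layer gradient vectors with k-means (k = 2) and keeps the majority cluster. The function names, the choice of k-means, the direction normalization, and the majority-cluster decision rule are all assumptions made for illustration, not the authors' method.

# Illustrative sketch only: cluster clients' output-layer gradients to
# flag suspected label-flipping attackers. The clustering choice and the
# cluster-selection rule here are placeholders, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

def filter_malicious(output_layer_grads):
    """output_layer_grads: list of 1-D numpy arrays, one per client
    (the flattened gradient of each client's output-layer weights).
    Returns indices of clients judged benign."""
    X = np.stack(output_layer_grads)
    # Normalize so clustering reflects gradient direction, since flipped
    # labels mainly reverse the direction of the output-layer gradient
    # rather than change its magnitude.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X / np.clip(norms, 1e-12, None)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    # Placeholder decision rule: assume attackers are the minority cluster,
    # so the larger cluster is treated as benign.
    benign_label = np.bincount(labels).argmax()
    return [i for i, l in enumerate(labels) if l == benign_label]

def aggregate(updates, output_layer_grads):
    """Average only the updates from clients kept by the filter."""
    keep = filter_malicious(output_layer_grads)
    return np.mean([updates[i] for i in keep], axis=0), keep

if __name__ == "__main__":
    # Toy round: 7 benign clients with similar gradient directions,
    # 3 "attackers" whose output-layer gradients point the opposite way.
    rng = np.random.default_rng(0)
    benign = [rng.normal(1.0, 0.1, 20) for _ in range(7)]
    flipped = [rng.normal(-1.0, 0.1, 20) for _ in range(3)]
    grads = benign + flipped
    agg, kept = aggregate(grads, grads)
    print("kept clients:", kept)  # expected: indices 0-6

In this toy run the three reversed-gradient clients fall into the minority cluster and are excluded from aggregation; a real defense would combine several metrics, as the abstract indicates, rather than rely on cluster size alone.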