A defense method against multi-label poisoning attacks in federated learning

Bibliographic Details
Main Authors: Wei Ma, Qihang Zhao, Wenjun Tian
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-09672-x
Description
Summary: Federated learning is a distributed machine learning framework that allows multiple parties to collaboratively train models without sharing raw data. While it enhances data privacy, it is vulnerable to malicious attacks, especially data poisoning attacks such as label flipping. Traditional defense mechanisms perform poorly against these complex and diverse attacks, particularly multi-label flipping attacks. In this paper, we propose a defense method against multi-label flipping attacks. The proposed method extracts gradients from the neurons in the output layer and applies clustering analysis, using a combination of metrics, to distinguish between benign and malicious participants. It effectively identifies and filters out malicious updates, demonstrating strong robustness against multi-label flipping attacks. Experimental results show that this method outperforms existing defenses in both accuracy and robustness across multiple datasets, including MNIST, FashionMNIST, NSL-KDD, and CICIDS-2017, especially under a high proportion of attackers and varied attack scenarios.
ISSN:2045-2322
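
Note: The summary describes the defense only at a high level. The following is a minimal sketch of the general idea it outlines, clustering the participants' output-layer gradients and keeping the majority cluster for aggregation. The helper name filter_updates, the use of scikit-learn's KMeans with two clusters, and the majority-cluster heuristic are illustrative assumptions, not the authors' exact algorithm or metrics.

# Illustrative sketch, not the paper's algorithm: separate benign from
# malicious clients by clustering the gradients of the output layer.
import numpy as np
from sklearn.cluster import KMeans

def filter_updates(output_layer_grads: np.ndarray) -> np.ndarray:
    """output_layer_grads: array of shape (n_clients, d), each row the
    flattened output-layer gradient submitted by one client.
    Returns the indices of clients judged benign."""
    # Normalize so clustering keys on gradient direction, not magnitude.
    norms = np.linalg.norm(output_layer_grads, axis=1, keepdims=True)
    directions = output_layer_grads / np.clip(norms, 1e-12, None)

    # Two clusters: a benign majority and a (potentially) poisoned minority.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(directions)

    # Heuristic assumption: attackers are a minority, so the larger
    # cluster is treated as benign.
    benign_label = int(np.argmax(np.bincount(labels)))
    return np.flatnonzero(labels == benign_label)

# Usage: aggregate only the filtered clients' full updates (e.g., FedAvg):
#   benign_idx = filter_updates(grads)
#   global_update = client_updates[benign_idx].mean(axis=0)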