A defense method against multi-label poisoning attacks in federated learning
Abstract: Federated learning is a distributed machine learning framework that allows multiple parties to collaboratively train models without sharing raw data. While it enhances data privacy, it is vulnerable to malicious attacks, especially data poisoning attacks like label flipping. Traditional def...
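The abstract refers to label-flipping data poisoning. As a purely illustrative aside (not the authors' defense method), the minimal sketch below shows how a malicious federated client could flip a fraction of its local labels before local training. The function name `flip_labels`, the chosen source/target classes, and the flip fraction are assumptions for illustration only.

```python
# Illustrative sketch of a label-flipping poisoning attack by a federated client.
# Not the method from the cited paper; names and parameters are hypothetical.
import numpy as np

def flip_labels(y, source_class=1, target_class=7, flip_fraction=0.5, rng=None):
    """Flip a fraction of `source_class` labels to `target_class` in a local label array."""
    rng = np.random.default_rng() if rng is None else rng
    y_poisoned = y.copy()
    source_idx = np.flatnonzero(y == source_class)          # samples the attacker targets
    n_flip = int(flip_fraction * len(source_idx))            # how many labels to corrupt
    flip_idx = rng.choice(source_idx, size=n_flip, replace=False)
    y_poisoned[flip_idx] = target_class
    return y_poisoned

# Example: a malicious client poisons half of its class-1 samples before a training round.
y_local = np.random.default_rng(0).integers(0, 10, size=100)
y_poisoned = flip_labels(y_local, source_class=1, target_class=7, flip_fraction=0.5)
```

Defenses such as the one described in this record typically try to detect or down-weight updates from clients whose labels have been manipulated in this way.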
| Main Authors: | Wei Ma, Qihang Zhao, Wenjun Tian |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-09672-x |
Similar Items
- An Optimal Two-Step Approach for Defense Against Poisoning Attacks in Federated Learning
  by: Yasir Ali, et al.
  Published: (2025-01-01)
- Securing federated learning: a defense strategy against targeted data poisoning attack
  by: Ansam Khraisat, et al.
  Published: (2025-02-01)
- A Federated Weighted Learning Algorithm Against Poisoning Attacks
  by: Yafei Ning, et al.
  Published: (2025-04-01)
- A Meta-Reinforcement Learning-Based Poisoning Attack Framework Against Federated Learning
  by: Wei Zhou, et al.
  Published: (2025-01-01)
- Analyzing the vulnerabilities in Split Federated Learning: assessing the robustness against data poisoning attacks
  by: Aysha-Thahsin Zahir-Ismail, et al.
  Published: (2025-08-01)