Improved Distributed Backdoor Attacks in Federated Learning by Density-Adaptive Data Poisoning and Projection-Based Gradient Updating
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11072142/ |
| Summary: | While federated learning enables collaborative model training with preserved data locality, it remains vulnerable to evolving backdoor attacks that exploit its distributed architecture. Compared with centralized backdoor attacks, a distributed backdoor attack (DBA) poses a greater threat to FL systems due to its spatially distributed trigger mode. Existing DBA methods are typically based on a uniform trigger design and update scheme, which compromises stealthiness and reduces attack effectiveness. This paper proposes a density-adaptive data poisoning method for backdoor attacks, which effectively evades data purification defenses by decomposing a global trigger into localized sub-triggers that adapt to the data distribution of malicious clients, maintaining the attack effect without compromising stealthiness. To further improve stealthiness, we propose a constrained gradient projection method that dynamically limits the boundaries of malicious parameter updates to ensure their consistency with normal update patterns. This dual-layer approach, spanning both poison-release triggering and training parameter update control, significantly enhances the stealthiness of the attack while maintaining its effectiveness. Experimental results on three benchmark datasets demonstrate superior performance. On the COCO dataset, our method achieves 92.44% main task accuracy (MTA) with a 91.68% attack success rate (ASR), outperforming DBA by +8.04% MTA and +10.29% ASR. In addition, we propose targeted defense strategies based on the attack mechanism, advancing research on attack and defense methods in FL security. |
| ISSN: | 2169-3536 |
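The abstract's "constrained gradient projection" idea, keeping a malicious client's parameter update within the range of normal updates, can be illustrated with a generic L2-ball projection. This is a minimal sketch of the general technique, not the paper's actual method (the record does not specify the constraint used); the function name, the choice of center (mean benign update), and the radius heuristic are all illustrative assumptions.

```python
import numpy as np

def project_update(malicious_update, benign_updates, scale=1.0):
    """Project a malicious parameter update into an L2 ball centered on
    the mean benign update, so it stays within the observed range of
    normal updates. Illustrative sketch only; the paper's exact
    constraint is not given in this record."""
    center = np.mean(benign_updates, axis=0)
    # Radius: largest deviation of any benign update from the mean,
    # optionally scaled to tighten or loosen the constraint.
    radius = scale * max(np.linalg.norm(u - center) for u in benign_updates)
    offset = malicious_update - center
    dist = np.linalg.norm(offset)
    if dist <= radius:
        return malicious_update          # already within the normal range
    return center + offset * (radius / dist)  # project onto the ball surface
```

An update that already falls inside the benign range passes through unchanged, while an outlying update is pulled back to the ball's surface, which is what makes it harder for anomaly-based defenses to flag.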