Detection of Malicious Clients in Federated Learning Using Graph Neural Network

Bibliographic Details
Main Authors: Anee Sharma, Ningrinla Marchang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10980311/
Description
Summary: Federated Learning (FL) enables decentralized model training without the exchange of raw data, thereby preserving privacy. Owing to its distributed nature, however, the paradigm is susceptible to adversarial threats such as sign-flipping attacks, in which malicious clients reverse the signs of model parameters to poison the global aggregation. This study introduces a graph-based detection framework that leverages Graph Attention Networks (GATs) to address this threat. By representing FL local models as directed graphs and capturing layer-wise statistical features, the framework detects malicious clients with high accuracy. Its efficacy is demonstrated through extensive experiments on the FEMNIST dataset that simulate varying attacker percentages (15%, 35%) and attack probabilities (0.5, 0.7, 1.0). The results show that the GAT model achieves a 100% detection rate with zero false positives for detection thresholds in the range 0.5–0.9. Furthermore, isolating detected attackers during the targeted rounds (20–60) largely preserves global model performance, mitigating the cascading effects of poisoned updates and keeping the system stable. This work offers a practical, scalable, and robust solution for securing FL systems against adversarial behavior.
ISSN: 2169-3536
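
The summary describes the attack and detection pipeline only at a high level. The Python sketch below, written from that description alone, illustrates the sign-flipping attack model and the kind of layer-wise statistics that could serve as per-client node features; a simple sign-agreement score stands in for the trained GAT, and all shapes, names, and the scoring rule are illustrative assumptions rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_MALICIOUS = 20, 3                 # roughly the paper's 15% attacker setting
LAYER_SHAPES = [(32, 16), (16,)]               # hypothetical toy two-layer model

# Benign clients send noisy copies of a shared gradient direction.
TRUE_GRAD = [rng.normal(0.0, 0.05, s) for s in LAYER_SHAPES]

def honest_update():
    return [g + rng.normal(0.0, 0.01, g.shape) for g in TRUE_GRAD]

def sign_flip(update):
    # Sign-flipping attack: reverse the sign of every parameter.
    return [-w for w in update]

updates = [honest_update() for _ in range(N_CLIENTS)]
for i in range(N_MALICIOUS):                   # first three clients are malicious
    updates[i] = sign_flip(updates[i])

def layer_features(update):
    # Layer-wise statistics (mean, std, fraction of positive signs);
    # in the paper such per-layer features become node features of a
    # directed client graph that a trained GAT scores.
    return np.array([[w.mean(), w.std(), (w > 0).mean()] for w in update])

node_features = np.stack([layer_features(u) for u in updates])  # (clients, layers, 3)

# Stand-in for the trained GAT: score each client by how often its
# parameter signs agree with the coordinate-wise median update, which
# the honest majority dominates. Sign-flipped clients score near 0.
flat = np.stack([np.concatenate([w.ravel() for w in u]) for u in updates])
ref_sign = np.sign(np.median(flat, axis=0))
scores = (np.sign(flat) == ref_sign).mean(axis=1)

THRESHOLD = 0.5                                # within the paper's 0.5-0.9 range
flagged = np.where(scores < THRESHOLD)[0]
print("flagged clients:", flagged)             # expected: [0 1 2]

With the fixed seed, honest and sign-flipped clients receive widely separated scores, so any cutoff in the paper's reported 0.5–0.9 threshold range isolates the three attackers with no false positives in this toy setup.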