Detection of Malicious Clients in Federated Learning Using Graph Neural Network


Bibliographic Details
Main Authors: Anee Sharma, Ningrinla Marchang
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Federated learning; anomaly detection; graph neural networks; sign-flipping attacks; graph attention networks; malicious clients
Online Access: https://ieeexplore.ieee.org/document/10980311/
collection DOAJ
description Federated Learning (FL) enables decentralized model training without exchanging raw data, thereby preserving privacy. Its distributed nature, however, leaves it vulnerable to adversarial threats such as sign-flipping attacks, in which malicious clients reverse the signs of model parameters to poison the global aggregation process. This study introduces a graph-based detection framework that leverages Graph Attention Networks (GATs) to address this threat. By representing FL local models as directed graphs and capturing layer-wise statistical features, the framework identifies malicious clients with high accuracy. Extensive experiments on the FEMNIST dataset, simulating varying attacker percentages (15%, 35%) and attack probabilities (0.5, 0.7, 1.0), demonstrate the efficacy of the approach: the GAT model achieves a 100% detection rate with zero false positives within an optimal threshold range of 0.5–0.9. Furthermore, isolating detected attackers during targeted rounds (20–60) largely preserves FL global model performance, mitigating the cascading effects of poisoned updates and ensuring system stability. This work offers a practical, scalable, and robust solution for securing FL systems against adversarial behavior.
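The abstract describes two ingredients that a short sketch can make concrete: the sign-flipping attack (a malicious client negates its model update) and layer-wise statistical features used as graph node inputs. The following is a minimal illustrative sketch, not the paper's code; the function names, toy layer shapes, and the specific statistics (mean, std, sign ratio) are assumptions chosen for illustration.

```python
import numpy as np

def sign_flip(update):
    """Sign-flipping attack: a malicious client negates every parameter
    in its local model update before sending it to the server."""
    return [-layer for layer in update]

def layerwise_features(update):
    """Per-layer statistics (mean, std, fraction of positive weights),
    one feature row per layer, as candidate node features."""
    feats = []
    for layer in update:
        flat = np.asarray(layer).ravel()
        feats.append([flat.mean(), flat.std(), (flat > 0).mean()])
    return np.array(feats)

rng = np.random.default_rng(0)
# Toy 3-layer "local model update" with a positive drift.
honest = [rng.normal(0.1, 0.05, size=(4, 4)) for _ in range(3)]
malicious = sign_flip(honest)

print(layerwise_features(honest))
print(layerwise_features(malicious))
```

In the paper's framework such per-layer statistics would become node features of a directed graph over model layers, scored by a GAT; even in this toy setting the mean and sign-ratio columns flip between honest and sign-flipped updates, which is the signal a learned detector can exploit.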
id doaj-art-727619c813d544ea8ef66cae1caf9cba
institution OA Journals
issn 2169-3536
doi 10.1109/ACCESS.2025.3565712
volume 13
pages 77952-77972
orcid 0000-0002-2336-3694 (Anee Sharma); 0000-0003-0473-9972 (Ningrinla Marchang)
affiliation North Eastern Regional Institute of Science and Technology, Itanagar, Arunachal Pradesh, India (both authors)
topic Federated learning
anomaly detection
graph neural networks
sign-flipping attacks
graph attention networks
malicious clients