GuardianMPC: Backdoor-Resilient Neural Network Computation
Main Authors: | Mohammad Hashemi, Domenic Forte, Fatemeh Ganji |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Backdoor insertion; malicious adversary; neural networks; multiparty computation; secure and private function evaluation; private training |
Online Access: | https://ieeexplore.ieee.org/document/10836681/ |
author | Mohammad Hashemi; Domenic Forte; Fatemeh Ganji |
---|---|
author_sort | Mohammad Hashemi |
collection | DOAJ |
description | The rapid growth of deep learning (DL) has raised serious concerns about users’ data and neural network (NN) models’ security and privacy, particularly the risk of backdoor insertion when outsourcing the training or employing pre-trained models. To ensure resilience against such backdoor attacks, this work presents GuardianMPC, a novel framework leveraging secure multiparty computation (MPC). GuardianMPC is built upon garbled circuits (GC) within the LEGO protocol framework to accelerate oblivious inference on FPGAs in the presence of malicious adversaries that can manipulate the model weights and/or insert a backdoor in the architecture of a pre-trained model. In this regard, GuardianMPC is the first to offer private function evaluation in the LEGO family. GuardianMPC also supports private training to effectively counter backdoor attacks targeting NN model architectures and parameters. With optimized pre-processing, GuardianMPC significantly accelerates the online phase, achieving up to 13.44× faster computation than its software counterparts. Our experimental results for multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) assess GuardianMPC’s time complexity and scalability across diverse NN model architectures. Interestingly, GuardianMPC does not adversely affect the training accuracy, as opposed to many existing private training frameworks. These results confirm GuardianMPC as a high-performance, model-agnostic solution for secure NN computation with robust security and privacy guarantees. |
format | Article |
id | doaj-art-d2afed845e314ec9b7693b1553488efb |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-d2afed845e314ec9b7693b1553488efb; added 2025-01-21T00:00:59Z; English; IEEE; IEEE Access; ISSN 2169-3536; published 2025-01-01; vol. 13, pp. 11029-11048; DOI 10.1109/ACCESS.2025.3528304; IEEE document 10836681. GuardianMPC: Backdoor-Resilient Neural Network Computation. Mohammad Hashemi (https://orcid.org/0000-0002-1216-1552), Electrical and Computer Engineering Department, Worcester Polytechnic Institute, Worcester, MA, USA; Domenic Forte (https://orcid.org/0000-0002-2794-7320), Electrical and Computer Engineering Department, University of Florida, Gainesville, FL, USA; Fatemeh Ganji (https://orcid.org/0000-0003-0151-1307), Electrical and Computer Engineering Department, Worcester Polytechnic Institute, Worcester, MA, USA. Abstract and keywords as in the description and topic fields. https://ieeexplore.ieee.org/document/10836681/ |
title | GuardianMPC: Backdoor-Resilient Neural Network Computation |
topic | Backdoor insertion; malicious adversary; neural networks; multiparty computation; secure and private function evaluation; private training |
url | https://ieeexplore.ieee.org/document/10836681/ |
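The description above centers on garbled circuits (GC) as the building block for oblivious inference. For readers unfamiliar with that primitive, the following is a minimal sketch of Yao-style garbling of a single AND gate in Python. It is an illustration of the general GC idea only, not GuardianMPC's LEGO-based, maliciously secure, FPGA-accelerated protocol from the article; all function names (new_wire, garble_and, evaluate_and) and parameters are hypothetical.

```python
# Toy Yao-style garbling of one AND gate: the garbler assigns two random
# labels per wire, encrypts the output labels under the matching input
# labels, and the evaluator can decrypt exactly one row. Illustration only;
# GuardianMPC's actual protocol (LEGO family, malicious security) differs.
import os
import hashlib
import random

KEY_LEN = 16   # 128-bit wire labels
TAG_LEN = 8    # zero tag so the evaluator can recognize a valid decryption

def new_wire():
    """Two random labels encoding bit 0 and bit 1 on one wire."""
    return (os.urandom(KEY_LEN), os.urandom(KEY_LEN))

def xor_pad(label_a, label_b, data):
    """One-time pad derived from hashing the two input labels (self-inverse)."""
    pad = hashlib.sha256(label_a + label_b).digest()
    return bytes(x ^ y for x, y in zip(data, pad))

def garble_and(wire_a, wire_b):
    """Garbler: output wire labels plus a shuffled 4-row encrypted gate table."""
    wire_out = new_wire()
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            plaintext = wire_out[bit_a & bit_b] + b"\x00" * TAG_LEN
            table.append(xor_pad(wire_a[bit_a], wire_b[bit_b], plaintext))
    random.shuffle(table)  # hide which row corresponds to which input pair
    return wire_out, table

def evaluate_and(table, label_a, label_b):
    """Evaluator: holds one label per input wire, learns one output label only."""
    for row in table:
        plaintext = xor_pad(label_a, label_b, row)
        if plaintext.endswith(b"\x00" * TAG_LEN):   # valid decryption found
            return plaintext[:KEY_LEN]
    raise ValueError("no table row decrypted correctly")

if __name__ == "__main__":
    a, b = new_wire(), new_wire()
    out, table = garble_and(a, b)
    # Evaluate on inputs a=1, b=1; only the corresponding output label leaks.
    out_label = evaluate_and(table, a[1], b[1])
    print("recovered bit:", out.index(out_label))   # prints 1
```

In a full GC framework the same idea is applied gate by gate over a Boolean circuit describing the NN, with the heavy garbling work pushed into an offline pre-processing phase; the abstract's reported up-to-13.44× online speedup comes from accelerating that evaluation path in hardware.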