Adversarial Training for Mitigating Insider-Driven XAI-Based Backdoor Attacks
The study investigates how backdoors can be introduced into deep learning models by an insider with privileged access to training data, and how adversarial training techniques can mitigate such attacks. The research demonstrates an insider-driven poison-label backdoor approach in which triggers are introduced into the training...
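The poison-label approach mentioned in the abstract can be illustrated with a minimal sketch: a fraction of training images receives a fixed pixel-patch trigger, and those samples' labels are flipped to an attacker-chosen target class. This is a generic illustration of poison-label backdoor injection, not the paper's actual procedure; the function name, patch placement, and parameters are assumptions for the example.

```python
import numpy as np

def poison_label_backdoor(images, labels, target_label, poison_rate=0.1,
                          trigger_value=1.0, patch_size=3, seed=0):
    """Generic poison-label backdoor sketch (illustrative, not the paper's
    method): stamp a fixed patch trigger into a random fraction of the
    training images and flip their labels to the attacker's target class."""
    rng = np.random.default_rng(seed)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch into the bottom-right corner of each chosen image
    poisoned_images[idx, -patch_size:, -patch_size:] = trigger_value
    # Flip the labels of the poisoned samples to the target class
    poisoned_labels[idx] = target_label
    return poisoned_images, poisoned_labels, idx

# Example: 100 grayscale 28x28 images, 10 classes, 10% poisoning
X = np.zeros((100, 28, 28))
y = np.arange(100) % 10
Xp, yp, idx = poison_label_backdoor(X, y, target_label=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` learns to associate the trigger patch with class 7 while behaving normally on clean inputs, which is what makes poison-label backdoors hard to detect by accuracy checks alone.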
| Main Authors: | R. G. Gayathri, Atul Sajjanhar, Yong Xiang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Future Internet |
| Online Access: | https://www.mdpi.com/1999-5903/17/5/209 |
Similar Items
- A Backdoor Approach With Inverted Labels Using Dirty Label-Flipping Attacks
  by: Orson Mengara
  Published: (2025-01-01)
- A4FL: Federated Adversarial Defense via Adversarial Training and Pruning Against Backdoor Attack
  by: Saeed-Uz-Zaman, et al.
  Published: (2025-01-01)
- A Backdoor Attack Against LSTM-Based Text Classification Systems
  by: Jiazhu Dai, et al.
  Published: (2019-01-01)
- FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
  by: Qingya Wang, et al.
  Published: (2024-11-01)
- Improved Distributed Backdoor Attacks in Federated Learning by Density-Adaptive Data Poisoning and Projection-Based Gradient Updating
  by: Jian Wang, et al.
  Published: (2025-01-01)