Defending Deep Neural Networks Against Backdoor Attack by Using De-Trigger Autoencoder
A backdoor attack is a method that causes misrecognition in a deep neural network by training it on additional data that contain a specific trigger. The network will correctly recognize normal samples (which lack the specific trigger) as their proper classes but will misrecognize backdoor samples (which contain the specific trigger) …
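The mechanism the abstract describes — poisoning training data with a fixed trigger pattern, and defending by reconstructing a trigger-free input before classification — can be sketched minimally. This is an illustrative assumption, not the paper's implementation: the `add_trigger` and `naive_detrigger` helpers below are hypothetical, and a real de-trigger autoencoder would be a trained denoising network rather than a fixed mask.

```python
import numpy as np

def add_trigger(image, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner.

    Illustrative only: real backdoor triggers can be any fixed pattern
    blended into the poisoned training samples.
    """
    poisoned = image.copy()
    poisoned[-size:, -size:] = trigger_value
    return poisoned

def naive_detrigger(image, size=3):
    """Hypothetical stand-in for a de-trigger autoencoder.

    Here we simply zero out the known trigger region; a trained
    autoencoder would instead reconstruct a clean-looking input.
    """
    restored = image.copy()
    restored[-size:, -size:] = 0.0
    return restored

# A clean "image" and its backdoored counterpart.
clean = np.zeros((8, 8))
backdoored = add_trigger(clean)

# Passing the backdoored sample through the de-trigger step recovers
# an input the classifier would treat as clean.
recovered = naive_detrigger(backdoored)
```

The point of the sketch is only the pipeline shape: poisoned inputs differ from clean ones by a localized pattern, and the defense maps them back toward the clean distribution before they reach the classifier.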
Main Author: Hyun Kwon
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9579062/
Similar Items
- Backdoor defense method in federated learning based on contrastive training
  by: Jiale ZHANG, et al.
  Published: (2024-03-01)
- CLB-Defense: based on contrastive learning defense for graph neural network against backdoor attack
  by: Jinyin CHEN, et al.
  Published: (2023-04-01)
- Efficient Method for Robust Backdoor Detection and Removal in Feature Space Using Clean Data
  by: Donik Vrsnak, et al.
  Published: (2025-01-01)
- FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
  by: Qingya Wang, et al.
  Published: (2024-11-01)
- Backdoor Attack Against Dataset Distillation in Natural Language Processing
  by: Yuhao Chen, et al.
  Published: (2024-12-01)