Randomized Purifier Based on Low Adversarial Transferability for Adversarial Defense
Deep neural networks are generally very vulnerable to adversarial attacks. To defend classifiers against such attacks, Adversarial Purification (AP) was developed to neutralize adversarial perturbations with a generative model at the input stage. AP has an advantage in that it ca...
| Main Authors: | Sangjin Park, Yoojin Jung, Byung Cheol Song |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10630788/ |
Similar Items
- Detection and Defense: Student-Teacher Network for Adversarial Robustness
  by: Kyoungchan Park, et al.
  Published: (2024-01-01)
- Adversarial patch defense algorithm based on PatchTracker
  by: Zhenjie XIAO, et al.
  Published: (2024-02-01)
- A Comprehensive Review of Adversarial Attacks and Defense Strategies in Deep Neural Networks
  by: Abdulruhman Abomakhelb, et al.
  Published: (2025-05-01)
- A4FL: Federated Adversarial Defense via Adversarial Training and Pruning Against Backdoor Attack
  by: Saeed-Uz-Zaman, et al.
  Published: (2025-01-01)
- OD-SHIELD: Convolutional Autoencoder-Based Defense Against Adversarial Patch Attacks in Object Detection
  by: Byeongchan Kim, et al.
  Published: (2025-01-01)