MAPE: defending against transferable adversarial attacks using multi-source adversarial perturbations elimination
Abstract Neural networks are vulnerable to meticulously crafted adversarial examples, which induce high-confidence misclassifications in image classification tasks. Because they conform to regular input patterns and do not rely on the target model or its output information, transfer...
Main Authors: Xinlei Liu, Jichao Xie, Tao Hu, Peng Yi, Yuxiang Hu, Shumin Huo, Zhen Zhang
Format: Article
Language: English
Published: Springer, 2025-01-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-024-01770-z
Similar Items

- An Adversarial Attack via Penalty Method
  by: Jiyuan Sun, et al.
  Published: (2025-01-01)
- Enhancing adversarial transferability with local transformation
  by: Yang Zhang, et al.
  Published: (2024-11-01)
- Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation
  by: Zohra Rezgui, et al.
  Published: (2022-09-01)
- Survey and proposed method to detect adversarial examples using an adversarial retraining model
  by: Thanh Son Phan, et al.
  Published: (2024-08-01)
- Dual-Targeted adversarial example in evasion attack on graph neural networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)