MAPE: defending against transferable adversarial attacks using multi-source adversarial perturbations elimination
Abstract: Neural networks are vulnerable to meticulously crafted adversarial examples, which lead to high-confidence misclassifications in image classification tasks. Because such examples remain consistent with regular input patterns and require no access to the target model or its outputs, transfer...
Main Authors: Xinlei Liu, Jichao Xie, Tao Hu, Peng Yi, Yuxiang Hu, Shumin Huo, Zhen Zhang
Format: Article
Language: English
Published: Springer, 2025-01-01
Series: Complex & Intelligent Systems
Subjects:
Online Access: https://doi.org/10.1007/s40747-024-01770-z
Similar Items
- SURVEY AND PROPOSED METHOD TO DETECT ADVERSARIAL EXAMPLES USING AN ADVERSARIAL RETRAINING MODEL
  by: Thanh Son Phan, et al. Published: (2024-08-01)
- Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection
  by: Yinting Wu, et al. Published: (2025-01-01)
- Adversarial detection based on feature invariant in license plate recognition systems
  by: ZHU Xiaoyu, et al. Published: (2024-12-01)
- Stock price prediction with attentive temporal convolution-based generative adversarial network
  by: Ying Liu, et al. Published: (2025-03-01)
- Investigating the Use of Generative Adversarial Networks-Based Deep Learning for Reducing Motion Artifacts in Cardiac Magnetic Resonance
  by: Ma ZP, et al. Published: (2025-02-01)