A Generalized Framework for Adversarial Attack Detection and Prevention Using Grad-CAM and Clustering Techniques
Advances in AI-based computer vision have pushed the performance of modern image classification models beyond human perception, making them valuable in many fields. However, adversarial attacks, which introduce small changes to images that are hard for humans to perceive, can cause c...
| Main Authors: | Jeong-Hyun Sim, Hyun-Min Song |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Series: | Systems |
| Online Access: | https://www.mdpi.com/2079-8954/13/2/88 |
Similar Items
- A Comprehensive Review of Adversarial Attacks and Defense Strategies in Deep Neural Networks
  by: Abdulruhman Abomakhelb, et al. Published: (2025-05-01)
- Adversarial Training for Mitigating Insider-Driven XAI-Based Backdoor Attacks
  by: R. G. Gayathri, et al. Published: (2025-05-01)
- A Survey on Adversarial Attacks for Malware Analysis
  by: Kshitiz Aryal, et al. Published: (2025-01-01)
- Evaluation of Similarity of Image Explanations Produced by SHAP, LIME and Grad-CAM
  by: Vladyslav Yavtukhovskyi, et al. Published: (2025-06-01)
- Evaluating Impact of Image Transformations on Adversarial Examples
  by: Pu Tian, et al. Published: (2024-01-01)