AFD: Defending Convolutional Neural Networks Without Using Adversarial Samples

The vulnerability of deep neural networks to adversarial attacks has attracted much research effort. Still, studies have shown that it is challenging to simultaneously achieve strong robustness to adversarial attacks and low degradation in performance on the original task, as there is alway...


Bibliographic Details
Main Authors: Nupur Thakur, Yuzhen Ding, Baoxin Li
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Open Journal of Signal Processing
Online Access: https://ieeexplore.ieee.org/document/11007011/