Dual-Mode Method for Generating Adversarial Examples to Attack Deep Neural Networks
Deep neural networks achieve strong performance in text, image, and speech classification. However, these networks are vulnerable to adversarial examples. An adversarial example is a sample generated by adding a small amount of noise to an original sample (with minimal distortion) such that it...
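As a rough illustration of this idea, the sketch below uses the fast gradient sign method (FGSM), a standard baseline attack; it is not the dual-mode method proposed in the article, and the names `model`, `x`, `y`, and `epsilon` are hypothetical placeholders (PyTorch is assumed).

```python
# Minimal FGSM sketch (illustrative only; NOT the paper's dual-mode method).
# Assumes PyTorch; model, x, y, and epsilon are hypothetical placeholders.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by a small step in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Small sign-gradient noise: minimal distortion to the original sample,
    # yet often enough to flip the network's prediction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```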
| Main Authors: | Hyun Kwon, Sunghwan Kim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10046665/ |
Similar Items
- Tailoring adversarial attacks on deep neural networks for targeted class manipulation using DeepFool algorithm
  by: S. M. Fazle Rabby Labib, et al.
  Published: (2025-03-01)
- An Adversarial Attack via Penalty Method
  by: Jiyuan Sun, et al.
  Published: (2025-01-01)
- Graph-Level Label-Only Membership Inference Attack Against Graph Neural Networks
  by: Jiazhu Dai, et al.
  Published: (2025-05-01)
- MeetSafe: enhancing robustness against white-box adversarial examples
  by: Ruben Stenhuis, et al.
  Published: (2025-08-01)
- VG-CGARN: Video Generation Using Convolutional Generative Adversarial and Recurrent Networks
  by: Fatemeh Sobhani Manesh, et al.
  Published: (2025-04-01)