An Adversarial Attack via Penalty Method
Deep learning systems have achieved significant success across various machine learning tasks. However, they are highly vulnerable to adversarial attacks. For example, adversarial examples can easily fool deep learning systems by perturbing inputs with small, imperceptible noises. There has been extensive resea...
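The penalty-method formulation the title alludes to, minimizing the perturbation size plus a weighted misclassification penalty (in the spirit of Carlini-Wagner-style objectives), can be sketched on a toy linear classifier. This is a minimal illustrative assumption, not the paper's actual algorithm; the weights, penalty constant `c`, and hinge-style loss below are all made up for the example:

```python
import numpy as np

# Toy 2-class linear classifier: logits = W @ x (weights are illustrative).
W = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

def logits(x):
    return W @ x

def penalty_attack(x, true_label, c=5.0, lr=0.01, steps=200, kappa=0.0):
    """Penalty method: minimize ||delta||^2 + c * hinge(margin) by gradient descent."""
    delta = np.zeros_like(x)
    other = 1 - true_label
    for _ in range(steps):
        z = logits(x + delta)
        margin = z[true_label] - z[other] + kappa
        grad = 2.0 * delta                      # gradient of ||delta||^2
        if margin > 0:                          # hinge penalty active while still correct
            grad += c * (W[true_label] - W[other])
        delta -= lr * grad
    return delta

x = np.array([1.0, 0.0])                        # clean input, classified as class 0
delta = penalty_attack(x, true_label=0)
adv = x + delta                                 # perturbed input near the decision boundary
```

The penalty weight `c` trades off perturbation size against the misclassification constraint; in practice such methods increase `c` (or search over it) until the penalized objective reliably produces misclassified examples.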
| Main Authors | Jiyuan Sun, Haibo Yu, Jianjun Zhao |
|---|---|
| Format | Article |
| Language | English |
| Published | IEEE, 2025-01-01 |
| Series | IEEE Access |
| Online Access | https://ieeexplore.ieee.org/document/10839396/ |
Similar Items
- Enhancing adversarial transferability with local transformation
  by: Yang Zhang, et al.
  Published: (2024-11-01)
- Dual-Targeted adversarial example in evasion attack on graph neural networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)
- Targeted Discrepancy Attacks: Crafting Selective Adversarial Examples in Graph Neural Networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)
- Mape: defending against transferable adversarial attacks using multi-source adversarial perturbations elimination
  by: Xinlei Liu, et al.
  Published: (2025-01-01)
- APDL: an adaptive step size method for white-box adversarial attacks
  by: Jiale Hu, et al.
  Published: (2025-01-01)