Localizing Adversarial Attacks To Produce More Imperceptible Noise
Adversarial attacks in machine learning traditionally focus on global perturbations to input data, yet the potential of localized adversarial noise remains underexplored. This study systematically evaluates localized adversarial attacks across widely used methods, including FGSM, PGD, and C&W,...
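The record does not describe the paper's exact localization scheme, so as a rough illustration of the idea in the abstract, below is a minimal sketch of FGSM confined to a spatial region by a binary mask. The function name `localized_fgsm`, the mask construction, and the epsilon value are hypothetical choices for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F


def localized_fgsm(model, x, y, epsilon=0.03, mask=None):
    """One-step FGSM whose perturbation is confined by a binary mask.

    mask: tensor broadcastable to x, with 1 inside the attacked region
    and 0 elsewhere. (Hypothetical interface; the paper's exact
    localization scheme is not described in this record.)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    noise = epsilon * x.grad.sign()  # standard FGSM step
    if mask is not None:
        noise = noise * mask         # zero the noise outside the chosen region
    return (x + noise).clamp(0.0, 1.0).detach()


# Example: perturb only an 8x8 patch in the top-left corner of a 32x32 image.
# mask = torch.zeros(1, 1, 32, 32)
# mask[..., :8, :8] = 1.0
# x_adv = localized_fgsm(model, x, y, epsilon=8 / 255, mask=mask)
```

Masking the signed gradient is the simplest way to localize a one-step perturbation; iterative attacks such as PGD or C&W could apply the same mask at each update step.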
| Main Authors: | Pavan Reddy, Aditya Sanjay Gujral |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/139004 |
Similar Items
- Investigating imperceptibility of adversarial attacks on tabular data: An empirical analysis
  by: Zhipeng He, et al.
  Published: (2025-03-01)
- Perceptual Carlini-Wagner Attack: A Robust and Imperceptible Adversarial Attack Using LPIPS
  by: Liming Fan, et al.
  Published: (2025-01-01)
- Breaking Machine Learning Models with Adversarial Attacks and its Variants
  by: Pavan Reddy
  Published: (2025-05-01)
- Improving Neural Network Efficiency Using Piecewise Linear Approximation of Activation Functions
  by: Pavan Reddy, et al.
  Published: (2025-05-01)
- Ghost in the Radio: An Audio Adversarial Attack Using Environmental Noise Through Radio
  by: Hyeongjun Choi, et al.
  Published: (2024-01-01)