Breaking Machine Learning Models with Adversarial Attacks and its Variants
Machine learning models can be fooled by adversarial attacks: subtle, imperceptible perturbations to inputs that cause a model to produce erroneous outputs. This tutorial introduces adversarial examples and their variants, explaining why even state-of-the-art models are vulnerable and how this impacts secu...
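The perturbation idea the abstract describes can be illustrated with a minimal sketch of the Fast Gradient Sign Method (a standard adversarial-example technique, not necessarily the specific method covered in this article). The toy linear model, weights, and epsilon below are all illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: add a small step in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

# Toy linear "model": score = w @ x, predict positive if the score is > 0.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
x = 0.1 * w / np.linalg.norm(w)   # an input the model confidently classifies as positive
assert w @ x > 0

# For logistic loss on a positive example, the gradient of the loss w.r.t. the
# input is proportional to -w, so stepping along sign(-w) pushes the score down.
grad = -w
x_adv = fgsm_perturb(x, grad, epsilon=0.05)
print(w @ x, w @ x_adv)  # the bounded perturbation flips the sign of the score
```

Each coordinate of the perturbation has magnitude at most epsilon, yet the signs align with the gradient everywhere, so the effect on the score accumulates across dimensions; this is the sense in which a small, bounded change can flip a model's output.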
| Main Author: | Pavan Reddy |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/139042 |
Similar Items
- Localizing Adversarial Attacks To Produces More Imperceptible Noise
  by: Pavan Reddy, et al.
  Published: (2025-05-01)
- Breaking and Healing: GAN-Based Adversarial Attacks and Post-Adversarial Recovery for 5G IDSs
  by: Yasmeen Alslman, et al.
  Published: (2025-01-01)
- Stealthy Adversarial Attacks on Machine Learning-Based Classifiers of Wireless Signals
  by: Wenhan Zhang, et al.
  Published: (2024-01-01)
- Incremental Adversarial Learning for Polymorphic Attack Detection
  by: Ulya Sabeel, et al.
  Published: (2024-01-01)
- Point Cloud Adversarial Perturbation Generation for Adversarial Attacks
  by: Fengmei He, et al.
  Published: (2023-01-01)