Breaking Machine Learning Models with Adversarial Attacks and its Variants
| Main Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/139042 |
| Summary: | Machine learning models can be fooled by adversarial attacks: subtle, imperceptible perturbations to inputs that cause a model to produce erroneous outputs. This tutorial introduces adversarial examples and their variants, explaining why even state-of-the-art models are vulnerable and how this affects security in AI. It provides an overview of key concepts (such as black-box vs. white-box attack scenarios) and surveys common attack techniques and defensive strategies. A hands-on component using Google Colab and the open-source Adversarial Lab toolkit allows attendees to craft adversarial examples and test model robustness in real time. Throughout, we emphasize both the practical skills and the ethical considerations needed to apply adversarial machine learning responsibly. Attendees will gain a comprehensive foundation in adversarial attacks and insights into building more robust, secure machine learning models. |
| ISSN: | 2334-0754; 2334-0762 |
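
To give a concrete sense of the white-box attacks the summary refers to, the sketch below implements the fast gradient sign method (FGSM), one standard example of this class of technique. It is a minimal illustration, not code from the tutorial or from the Adversarial Lab toolkit; the `model`, the `[0, 1]` input range, and the `epsilon` budget are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method (FGSM).

    model   -- a differentiable classifier returning logits (assumed)
    x       -- input batch, e.g. shape (N, C, H, W), pixel values in [0, 1]
    y       -- true labels, shape (N,)
    epsilon -- maximum per-pixel perturbation (L-infinity budget, assumed value)
    """
    x = x.clone().detach().requires_grad_(True)
    # The attacker computes the loss it wants to *increase* ...
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # ... and takes one signed gradient step, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed inputs in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the step relies on the model's gradients, FGSM is a white-box attack; in the black-box scenario mentioned above, an attacker has no gradient access and must rely on query-based or transfer-based methods instead.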