A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness

Adversarial examples, in the context of computer vision, are inputs deliberately crafted to deceive or mislead artificial neural networks. These examples exploit vulnerabilities in neural networks: minimal alterations to the original input that are imperceptible to humans can nonetheless significantly change the network’s output. In this paper, we present a thorough survey of research on adversarial examples, with a primary focus on their impact on neural network classifiers. We closely examine the theoretical capabilities and limitations of artificial neural networks. After that, we trace the discovery and evolution of adversarial examples, starting from basic gradient-based techniques and progressing toward the recent trend of employing generative neural networks for this purpose. We discuss the limited effectiveness of existing countermeasures against adversarial examples. Furthermore, we emphasize that adversarial examples originate from the misalignment between human and neural network decision-making processes, which can be attributed to the current methodology for training neural networks. We also argue that the commonly used term “attack on neural networks” is misleading when discussing adversarial deep learning. Through this paper, our objective is to provide a comprehensive overview of adversarial examples and to inspire researchers to develop more robust neural networks. Such networks would align better with human decision-making processes and enhance the security and reliability of computer vision systems in practical applications.
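
As an illustration of the “basic gradient-based techniques” mentioned in the abstract, the sketch below shows the fast gradient sign method (FGSM), one canonical way to craft such perturbations. It is a minimal, hypothetical PyTorch example, not code from the surveyed paper: `model` is assumed to be any differentiable classifier returning logits, `image` a normalized input batch in [0, 1], and `epsilon` the perturbation budget.

```python
# Hypothetical FGSM sketch (illustrative only; not from the surveyed paper).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` for `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model currently is
    loss.backward()                              # gradient of the loss w.r.t. the pixels
    # Move each pixel a small step (at most epsilon) in the direction that
    # increases the loss; the change is typically imperceptible to a human.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```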

Bibliographic Details
Main Authors: A.V. Trusov, E.E. Limonova, V.V. Arlazarov
Format: Article
Language: English
Published: Samara National Research University, 2025-04-01
Series: Компьютерная оптика (Computer Optics)
Subjects: adversarial examples; adversarial deep learning; neural networks; neural network security
Online Access:https://computeroptics.ru/KO/Annot/KO49-2/490209.html
ISSN: 0134-2452, 2412-6179
Volume/Issue: Vol. 49, No. 2 (2025), pp. 222-252
DOI: 10.18287/2412-6179-CO-1494
Author Affiliations:
A.V. Trusov: Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences; Smart Engines Service LLC; Moscow Institute of Physics and Technology
E.E. Limonova: Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences; Smart Engines Service LLC
V.V. Arlazarov: Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences; Smart Engines Service LLC