A Comprehensive Review of Adversarial Attacks and Defense Strategies in Deep Neural Networks


Bibliographic Details
Main Authors: Abdulruhman Abomakhelb, Kamarularifin Abd Jalil, Alya Geogiana Buja, Abdulraqeb Alhammadi, Abdulmajeed M. Alenezi
Format: Article
Language: English
Published: MDPI AG, 2025-05-01
Series: Technologies
Subjects:
Online Access: https://www.mdpi.com/2227-7080/13/5/202
Description
Summary: Artificial Intelligence (AI) security research is promising and highly valuable in the current decade; in particular, deep neural network (DNN) security is receiving increased attention. Although DNNs have recently emerged as a prominent tool for addressing complex challenges across various machine learning (ML) tasks, and hold a significant share in both research and industry, they are vulnerable to adversarial attacks in which slight but intentional perturbations can deceive DNN models. Consequently, several studies have shown that DNNs are exposed to new attacks. Given the increasing prevalence of these attacks, researchers need countermeasures that mitigate the associated risks and enhance the reliability of deploying DNNs in critical applications. As a result, a variety of defense mechanisms have been proposed to protect DNNs against adversarial attacks. Our primary focus is the DNN as a foundational technology across all ML tasks. In this work, we comprehensively survey and present the latest research on DNN security across various ML tasks, highlighting the adversarial attacks that cause DNNs to fail and the defense strategies that protect them. We review, explore, and elucidate the operational mechanisms of prevailing adversarial attacks and defense mechanisms applicable to all ML tasks utilizing DNNs. Our review presents a detailed taxonomy of attacker and defender problems, providing a comprehensive and robust review of most state-of-the-art attacks and defenses of recent years. Additionally, we thoroughly examine the measures used to evaluate the success of attack and defense methods. Finally, we address current challenges, open issues, and future research directions in this field.
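The summary's notion of "slight but intentional perturbations" can be made concrete with the Fast Gradient Sign Method (FGSM), one well-known gradient-based attack of the kind such surveys cover. The sketch below is purely illustrative and is not taken from the article: the toy logistic model, its weights, the input point, and the epsilon budget are all hypothetical, chosen only to show how a small, bounded perturbation can flip a prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """Perturb x by epsilon * sign of the loss gradient w.r.t. x (FGSM).

    For a logistic model p = sigmoid(w.x + b) with cross-entropy loss,
    the gradient of the loss with respect to the input is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Hypothetical weights and a correctly classified input (true label y = 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.2])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)
print(sigmoid(np.dot(w, x) + b) > 0.5)      # original input: classified 1
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # perturbed input: classified 0
```

Each coordinate of the input moves by at most epsilon, yet the decision flips; the same mechanism, applied through backpropagation on a deep network, is what makes DNNs vulnerable in the way the summary describes.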
ISSN: 2227-7080