Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects
Deep Neural Networks (DNNs) have emerged as a prominent set of algorithms for complex real-world applications. However, state-of-the-art DNNs require a significant amount of data and computational resources to train and generalize well for real-world scenarios. This dependence of DNN training on a large amount of computational and memory resources has increased the use of Machine Learning as a Service (MLaaS) or third-party resources for training large models for complex applications…
| Main Authors: | Muhammad Abdullah Hanif, Nandish Chattopadhyay, Bassem Ouni, Muhammad Shafique |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Deep learning; neural networks; DNNs; machine learning (ML); backdoor attacks; backdoor defenses |
| Online Access: | https://ieeexplore.ieee.org/document/11007533/ |
| _version_ | 1849471990670295040 |
|---|---|
| author | Muhammad Abdullah Hanif; Nandish Chattopadhyay; Bassem Ouni; Muhammad Shafique |
| author_facet | Muhammad Abdullah Hanif; Nandish Chattopadhyay; Bassem Ouni; Muhammad Shafique |
| author_sort | Muhammad Abdullah Hanif |
| collection | DOAJ |
| description | Deep Neural Networks (DNNs) have emerged as a prominent set of algorithms for complex real-world applications. However, state-of-the-art DNNs require a significant amount of data and computational resources to train and generalize well for real-world scenarios. This dependence of DNN training on a large amount of computational and memory resources has increased the use of Machine Learning as a Service (MLaaS) or third-party resources for training large models for complex applications. Specifically, the drift of the deep learning community towards self-supervised learning for learning better representations directly from large amounts of unlabeled data has amplified the computational and memory requirements for machine learning. On the one hand, the availability of MLaaS (or third-party resources) alleviates this issue. On the other hand, it opens up avenues for a new set of vulnerabilities, where an adversary (someone from a third party) can infect the model with malicious functionality that is triggered only with specific input patterns. Such attacks are usually referred to as Trojan or backdoor attacks and are very stealthy and hard to detect. In this paper, we highlight the complete attack surface that can be exploited to inject hidden malicious functionality (backdoors) in machine learning models. We classify the attacks into two major categories, i.e., poisoning attacks and non-poisoning attacks, and present state-of-the-art works related to each. Towards the end of the article, we highlight the limitations of existing techniques and cover some of the key challenges in developing stealthy and robust real-world backdoor attacks. |
| format | Article |
| id | doaj-art-97358c77d0234c0f9cb2f8a69d5b916c |
| institution | Kabale University |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-97358c77d0234c0f9cb2f8a69d5b916c; harvested 2025-08-20T03:24:39Z; English; IEEE; IEEE Access; ISSN 2169-3536; published 2025-01-01; vol. 13, pp. 93190–93221; DOI 10.1109/ACCESS.2025.3571995; IEEE Xplore document 11007533. Title: Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects. Authors: Muhammad Abdullah Hanif (https://orcid.org/0000-0001-9841-6132), Nandish Chattopadhyay (https://orcid.org/0000-0002-1611-9378), Bassem Ouni (https://orcid.org/0000-0001-6534-9295), Muhammad Shafique (https://orcid.org/0000-0002-2607-8135). Affiliations: eBrain Laboratory, Division of Engineering, New York University (NYU) Abu Dhabi, Abu Dhabi, United Arab Emirates (Hanif, Chattopadhyay, Shafique); AI and Digital Science Research Center, Technology Innovation Institute (TII), Abu Dhabi, United Arab Emirates (Ouni). Abstract: see description field above. Online access: https://ieeexplore.ieee.org/document/11007533/. Keywords: Deep learning; neural networks; DNNs; machine learning (ML); backdoor attacks; backdoor defenses |
| spellingShingle | Muhammad Abdullah Hanif; Nandish Chattopadhyay; Bassem Ouni; Muhammad Shafique; Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects; IEEE Access; Deep learning; neural networks; DNNs; machine learning (ML); backdoor attacks; backdoor defenses |
| title | Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects |
| title_full | Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects |
| title_fullStr | Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects |
| title_full_unstemmed | Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects |
| title_short | Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects |
| title_sort | survey on backdoor attacks on deep learning current trends categorization applications research challenges and future prospects |
| topic | Deep learning; neural networks; DNNs; machine learning (ML); backdoor attacks; backdoor defenses |
| url | https://ieeexplore.ieee.org/document/11007533/ |
| work_keys_str_mv | AT muhammadabdullahhanif surveyonbackdoorattacksondeeplearningcurrenttrendscategorizationapplicationsresearchchallengesandfutureprospects AT nandishchattopadhyay surveyonbackdoorattacksondeeplearningcurrenttrendscategorizationapplicationsresearchchallengesandfutureprospects AT bassemouni surveyonbackdoorattacksondeeplearningcurrenttrendscategorizationapplicationsresearchchallengesandfutureprospects AT muhammadshafique surveyonbackdoorattacksondeeplearningcurrenttrendscategorizationapplicationsresearchchallengesandfutureprospects |