A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity

Bibliographic Details
Main Authors: Vincent Zibi Mohale, Ibidun Christiana Obagbuwa
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-01-01
Series: Frontiers in Artificial Intelligence
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/frai.2025.1526221/full
_version_ 1850085556195688448
author Vincent Zibi Mohale
Ibidun Christiana Obagbuwa
author_facet Vincent Zibi Mohale
Ibidun Christiana Obagbuwa
author_sort Vincent Zibi Mohale
collection DOAJ
description The rise of sophisticated cyber threats has spurred advancements in Intrusion Detection Systems (IDS), which are crucial for identifying and mitigating security breaches in real time. Traditional IDS often rely on complex machine learning algorithms that, despite their high accuracy, lack transparency, creating a “black box” effect that can hinder analysts’ understanding of their decision-making processes. Explainable Artificial Intelligence (XAI) offers a promising solution by providing interpretability and transparency, enabling security professionals to better understand, trust, and optimize IDS models. This paper presents a systematic review of the integration of XAI in IDS, focusing on enhancing transparency and interpretability in cybersecurity. Through a comprehensive analysis of recent studies, this review identifies commonly used XAI techniques, evaluates their effectiveness within IDS frameworks, and examines their benefits and limitations. Findings indicate that rule-based and tree-based XAI models are preferred for their interpretability, though trade-offs with detection accuracy remain a challenge. Furthermore, the review highlights critical gaps in standardization and scalability, emphasizing the need for hybrid models and real-time explainability. The paper concludes with recommendations for future research directions, suggesting improvements in XAI techniques tailored for IDS, standardized evaluation metrics, and ethical frameworks that prioritize security and transparency. This review aims to inform researchers and practitioners about current trends and future opportunities in leveraging XAI to enhance IDS effectiveness, fostering a more transparent and resilient cybersecurity landscape.
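The abstract's finding that rule-based models are preferred for their interpretability can be illustrated with a minimal, hypothetical sketch (not drawn from the reviewed paper): a toy detector whose verdict carries the names of the rules that fired, so the explanation is the decision itself. All rule names and thresholds below are invented for illustration.

```python
# Toy rule-based intrusion detector: the fired rule names ARE the explanation,
# which is why such models are considered inherently interpretable.
# Rules and thresholds are hypothetical, for illustration only.

def detect(event):
    """Classify a connection event; return (verdict, list of fired rule names)."""
    rules = [
        ("excessive_failed_logins", lambda e: e.get("failed_logins", 0) > 5),
        ("port_scan_pattern",       lambda e: e.get("distinct_ports", 0) > 100),
        ("oversized_payload",       lambda e: e.get("bytes_sent", 0) > 10_000_000),
    ]
    fired = [name for name, predicate in rules if predicate(event)]
    verdict = "intrusion" if fired else "benign"
    return verdict, fired

verdict, why = detect({"failed_logins": 9, "distinct_ports": 3})
print(verdict, why)  # → intrusion ['excessive_failed_logins']
```

A black-box classifier would return only the verdict; post-hoc XAI techniques then have to reconstruct a rationale like `why` after the fact, which is the trade-off against detection accuracy the review discusses.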
format Article
id doaj-art-082a3ba858d3412aaada7c62933ed7f0
institution DOAJ
issn 2624-8212
language English
publishDate 2025-01-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Artificial Intelligence
spellingShingle Vincent Zibi Mohale
Ibidun Christiana Obagbuwa
A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
Frontiers in Artificial Intelligence
intrusion detection systems
cyber threats
explainable artificial intelligence
systematic review
model explainability
model interpretability
title A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
title_full A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
title_fullStr A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
title_full_unstemmed A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
title_short A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
title_sort systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhancing transparency and interpretability in cybersecurity
topic intrusion detection systems
cyber threats
explainable artificial intelligence
systematic review
model explainability
model interpretability
url https://www.frontiersin.org/articles/10.3389/frai.2025.1526221/full
work_keys_str_mv AT vincentzibimohale asystematicreviewontheintegrationofexplainableartificialintelligenceinintrusiondetectionsystemstoenhancingtransparencyandinterpretabilityincybersecurity
AT ibidunchristianaobagbuwa asystematicreviewontheintegrationofexplainableartificialintelligenceinintrusiondetectionsystemstoenhancingtransparencyandinterpretabilityincybersecurity
AT vincentzibimohale systematicreviewontheintegrationofexplainableartificialintelligenceinintrusiondetectionsystemstoenhancingtransparencyandinterpretabilityincybersecurity
AT ibidunchristianaobagbuwa systematicreviewontheintegrationofexplainableartificialintelligenceinintrusiondetectionsystemstoenhancingtransparencyandinterpretabilityincybersecurity