A systematic review on the integration of explainable artificial intelligence in intrusion detection systems to enhance transparency and interpretability in cybersecurity
The rise of sophisticated cyber threats has spurred advancements in Intrusion Detection Systems (IDS), which are crucial for identifying and mitigating security breaches in real-time. Traditional IDS often rely on complex machine learning algorithms that lack transparency despite their high accuracy...
| Main Authors: | Vincent Zibi Mohale, Ibidun Christiana Obagbuwa |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-01-01 |
| Series: | Frontiers in Artificial Intelligence |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2025.1526221/full |
Similar Items
- Evaluating machine learning-based intrusion detection systems with explainable AI: enhancing transparency and interpretability
  by: Vincent Zibi Mohale, et al.
  Published: (2025-05-01)
- A Comprehensive Survey of Explainable Artificial Intelligence Techniques for Malicious Insider Threat Detection
  by: Khuloud Saeed Alketbi, et al.
  Published: (2025-01-01)
- Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
  by: Daniele Pelosi, et al.
  Published: (2025-07-01)
- Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP
  by: Kazi Fatema, et al.
  Published: (2025-05-01)
- Pre-Hoc and Co-Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance
  by: Cagla Acun, et al.
  Published: (2025-07-01)