Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into fo...
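The abstract names SHAP and LIME as the two attribution methods compared for intrusion detection models. As an illustration only, the sketch below shows how each method is commonly applied to a tabular traffic classifier; the dataset, feature names (e.g. `src_bytes`, `syn_flag_ratio`), and random-forest model are hypothetical stand-ins and do not reflect the authors' actual pipeline.

```python
# pip install scikit-learn shap lime
# Minimal sketch: SHAP vs. LIME attributions for one "network flow".
# All data and feature names here are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical flow features standing in for a real IDS dataset.
feature_names = ["duration", "src_bytes", "dst_bytes",
                 "pkt_rate", "syn_flag_ratio", "payload_entropy"]
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: game-theoretic per-feature attributions; TreeExplainer is exact
# (and fast) for tree ensembles such as random forests.
shap_values = shap.TreeExplainer(model).shap_values(X[:1])

# LIME: perturbs the instance and fits a local linear surrogate model.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["benign", "attack"], discretize_continuous=True)
lime_exp = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=len(feature_names))

print("SHAP attributions:", np.asarray(shap_values).squeeze())
print("LIME weights:", lime_exp.as_list())
```

Both explainers return per-feature weights for the same prediction, which is the basis for the kind of side-by-side comparison the title describes.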
| Main Authors: | Pamela Hermosilla, Sebastián Berríos, Héctor Allende-Cid |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/15/13/7329 |
Similar Items
- Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems
  by: Pamela Hermosilla, et al.
  Published: (2025-04-01)
- Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP
  by: Kazi Fatema, et al.
  Published: (2025-05-01)
- An Explainable LSTM-Based Intrusion Detection System Optimized by Firefly Algorithm for IoT Networks
  by: Taiwo Blessing Ogunseyi, et al.
  Published: (2025-04-01)
- Strategies for applying interpretable and explainable AI in real world IoT applications
  by: Anber Abraheem Shlash Mohammad, et al.
  Published: (2025-06-01)
- Evaluation of Similarity of Image Explanations Produced by SHAP, LIME and Grad-CAM
  by: Vladyslav Yavtukhovskyi, et al.
  Published: (2025-06-01)