Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme Gradient Boosting (XGBoost) and Attentive Interpretable Tabular Learning (TabNet), using the UNSW-NB15 dataset. XGBoost achieved 97.8% validation accuracy and outperformed TabNet in explanation stability and global coherence. In addition to classification performance, we evaluate the fidelity, consistency, and forensic relevance of the explanations. The results confirm the complementary strengths of SHAP and LIME, supporting their combined use in building transparent, auditable, and trustworthy AI systems in digital forensic applications.
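The SHAP method compared in this study is grounded in Shapley values from cooperative game theory, which attribute a model's prediction to its input features. As a minimal, self-contained sketch of that underlying idea, the snippet below computes exact Shapley values for a toy model by enumerating feature coalitions; the model, instance, and baseline are illustrative placeholders, not drawn from the paper's experiments (the SHAP library itself approximates this computation efficiently for real models such as XGBoost).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for f(x) relative to a baseline input.

    Enumerates all feature coalitions, so it is only feasible for a
    handful of features; this is the quantity SHAP approximates.
    """
    n = len(x)

    def v(S):
        # Coalition value: features in S take the instance's values,
        # all other features are held at the baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy model with a feature interaction (x0 * x1) plus a main effect (x2).
model = lambda z: z[0] * z[1] + z[2]
phi = shapley_values(model, x=[1, 2, 3], baseline=[0, 0, 0])
print(phi)  # attributions sum to f(x) - f(baseline) = 5
```

By the efficiency property, the attributions always sum to the difference between the explained prediction and the baseline prediction, which is what makes such explanations auditable in a forensic setting.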
Saved in:
| Main Authors: | Pamela Hermosilla, Sebastián Berríos, Héctor Allende-Cid |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Applied Sciences |
| Subjects: | explainable artificial intelligence (XAI); intrusion detection system (IDS); digital forensics; SHAP; LIME; interpretability evaluation |
| Online Access: | https://www.mdpi.com/2076-3417/15/13/7329 |
| Field | Value |
|---|---|
| author | Pamela Hermosilla; Sebastián Berríos; Héctor Allende-Cid |
| collection | DOAJ |
| description | The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme Gradient Boosting (XGBoost) and Attentive Interpretable Tabular Learning (TabNet), using the UNSW-NB15 dataset. XGBoost achieved 97.8% validation accuracy and outperformed TabNet in explanation stability and global coherence. In addition to classification performance, we evaluate the fidelity, consistency, and forensic relevance of the explanations. The results confirm the complementary strengths of SHAP and LIME, supporting their combined use in building transparent, auditable, and trustworthy AI systems in digital forensic applications. |
| format | Article |
| id | doaj-art-e3847b087daf4333a3928bba49d92472 |
| institution | Kabale University |
| issn | 2076-3417 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Applied Sciences |
| spelling | doaj-art-e3847b087daf4333a3928bba49d92472; indexed 2025-08-20T03:28:28Z; Applied Sciences (MDPI AG, ISSN 2076-3417), 2025-06-01, Vol. 15, Iss. 13, Art. 7329; DOI: 10.3390/app15137329; all three authors affiliated with Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile |
| title | Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models |
| topic | explainable artificial intelligence (XAI) intrusion detection system (IDS) digital forensics SHAP LIME interpretability evaluation |
| url | https://www.mdpi.com/2076-3417/15/13/7329 |