A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME
Main Authors: Ahmed M. Salih, Zahra Raisi‐Estabragh, Ilaria Boscolo Galazzo, Petia Radeva, Steffen E. Petersen, Karim Lekadir, Gloria Menegaz
Format: Article
Language: English
Published: Wiley, 2025-01-01
Series: Advanced Intelligent Systems
ISSN: 2640-4567
Subjects: collinearity; interpretability; Local Interpretable Model-Agnostic Explanations; SHapley Additive exPlanations; eXplainable artificial intelligence
Online Access: https://doi.org/10.1002/aisy.202400304
Abstract: eXplainable artificial intelligence (XAI) methods have emerged to render the black box of machine learning (ML) models in a more digestible form. They communicate how a model works, with the aim of making ML models more transparent and increasing end‐users' trust in their output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) are two widely used XAI methods, particularly with tabular data. This perspective piece discusses how the explainability metrics of the two methods are generated and proposes a framework for interpreting their outputs, highlighting their strengths and weaknesses. Specifically, it examines their outcomes with respect to model dependency and in the presence of collinearity among the features, relying on a case study from the biomedical domain (classification of individuals with or without myocardial infarction). The results indicate that SHAP and LIME are strongly affected by the adopted ML model and by feature collinearity, raising a note of caution on their usage and interpretation.
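For orientation, below is a minimal sketch of how SHAP and LIME explanations are typically produced for a tabular binary classifier in Python. It assumes the open-source `shap` and `lime` packages; the synthetic dataset, random forest model, and generic feature names are illustrative assumptions, not the paper's myocardial infarction data or pipeline.

```python
# Illustrative sketch only: the synthetic data and model choice below are
# assumptions for demonstration, not the paper's actual setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap                                          # pip install shap
from lime.lime_tabular import LimeTabularExplainer   # pip install lime

# Synthetic stand-in for a tabular biomedical dataset (hypothetical).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: game-theoretic per-feature attributions; TreeExplainer is the usual
# choice for tree ensembles. (The return layout varies across shap versions:
# a list of per-class arrays in older releases, one stacked array in newer.)
shap_values = shap.TreeExplainer(model).shap_values(X)

# LIME: fits a local surrogate model around a single instance and reports
# the surrogate's weights as that instance's explanation.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
print(lime_explainer.explain_instance(X[0], model.predict_proba).as_list())
```

The paper's caution can be probed directly with such a sketch, for example by appending a duplicate of one column (perfect collinearity), or by swapping the random forest for a different classifier (with `shap.KernelExplainer` in place of `TreeExplainer`) and observing how the attributions change.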
Published in: Advanced Intelligent Systems, Volume 7, Issue 1 (2025), DOI: 10.1002/aisy.202400304

Author Affiliations:
Ahmed M. Salih, Zahra Raisi‐Estabragh, Steffen E. Petersen: William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, London E1 4NS, UK
Ilaria Boscolo Galazzo, Gloria Menegaz: Department of Engineering for Innovation Medicine, University of Verona, 37129 Verona, Italy
Petia Radeva, Karim Lekadir: Departament de Matemàtiques i Informàtica, University of Barcelona, 08007 Barcelona, Spain