HuberAIME: A Robust Approach to Explainable AI in the Presence of Outliers

Bibliographic Details
Main Author: Takafumi Nakanishi
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Approximate inverse model explanations; explainable AI; global feature importance; Huber loss; iterative reweighted least squares; model-agnostic explanations
Online Access: https://ieeexplore.ieee.org/document/10979913/
collection DOAJ
description With the increasing accuracy of machine-learning models in recent years, explainable artificial intelligence (XAI), which allows for an understanding of the internal decisions made by these models, has become essential. However, many explanation methods are vulnerable to outliers and noise, and their results may be distorted by extreme values. This study devised a new method named HuberAIME, a variant of approximate inverse model explanations (AIME) that is made robust through the Huber loss. HuberAIME limits the impact of outliers by weighting with iterative reweighted least squares, preventing AIME's feature-importance estimation from being degraded by extreme data points. Comparative experiments were conducted using the Wine dataset, which has almost no outliers; the Adult dataset, which contains extreme values; and the Statlog (German Credit) dataset, which has moderate outliers, to demonstrate the effectiveness of the proposed method. SHapley Additive exPlanations, AIME, and HuberAIME were evaluated using six metrics (explanatory accuracy, sparsity, stability, computational efficiency, robustness, and completeness). HuberAIME was equivalent to AIME on the Wine dataset. However, it outperformed AIME on the Adult dataset, exhibiting high fidelity and stability. On the German Credit dataset, AIME itself showed a certain degree of robustness, and there was no significant difference between AIME and HuberAIME. Overall, HuberAIME is useful for data that include serious outliers and maintains the same explanatory performance as AIME in cases with few outliers. Thus, HuberAIME is expected to improve the reliability of real-world operations as a robust XAI method.
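The abstract's core technique, down-weighting outliers via iteratively reweighted least squares (IRLS) with Huber weights, can be sketched as follows. This is a minimal illustration of generic Huber-loss IRLS for a linear fit, not the paper's actual HuberAIME implementation; the function names, the threshold value, and the demo data are assumptions for illustration only.

```python
# Illustrative sketch: IRLS with Huber weights. Residuals inside [-delta, delta]
# get full weight 1; larger residuals are down-weighted by delta/|r|, so a
# single extreme point cannot dominate the least-squares fit.
import numpy as np

def huber_weights(residuals, delta=1.35):
    """Huber weight function: 1 for |r| <= delta, else delta/|r|."""
    r = np.abs(residuals)
    return np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))

def irls_huber(X, y, delta=1.35, n_iter=50, tol=1e-8):
    """Fit beta minimizing the Huber loss of (y - X @ beta) via IRLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least-squares start
    for _ in range(n_iter):
        w = huber_weights(y - X @ beta, delta)
        WX = X * w[:, None]  # rows of X scaled by their weights: W @ X
        # Weighted least squares step: solve (X^T W X) beta = X^T W y
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Demo: one gross outlier barely moves the Huber fit, unlike plain OLS.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
y[0] += 100.0  # inject an extreme outlier
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_huber = irls_huber(X, y)
```

In the demo, the outlier's residual stays far above `delta`, so its weight collapses toward zero and `beta_huber` remains close to the true coefficients, while the unweighted OLS intercept is pulled upward by the extreme value.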
id doaj-art-df463d52f04345ac8d1c4d44ed30c85b
issn 2169-3536
spelling Takafumi Nakanishi (ORCID: https://orcid.org/0000-0003-1029-6063), School of Computer Science, Tokyo University of Technology, Hachioji-shi, Tokyo, Japan. "HuberAIME: A Robust Approach to Explainable AI in the Presence of Outliers," IEEE Access, vol. 13, pp. 76796-76810, 2025-01-01. ISSN 2169-3536. DOI: 10.1109/ACCESS.2025.3565279. Article no. 10979913. https://ieeexplore.ieee.org/document/10979913/ (record doaj-art-df463d52f04345ac8d1c4d44ed30c85b, indexed 2025-08-20T03:11:21Z). Keywords: approximate inverse model explanations; explainable AI; global feature importance; Huber loss; iterative reweighted least squares; model-agnostic explanations.
title HuberAIME: A Robust Approach to Explainable AI in the Presence of Outliers
topic Approximate inverse model explanations; explainable AI; global feature importance; Huber loss; iterative reweighted least squares; model-agnostic explanations