Using Explainable AI to Measure Feature Contribution to Uncertainty

Bibliographic Details
Main Authors: Katherine Elizabeth Brown, Douglas A. Talbert
Format: Article
Language: English
Published: LibraryPress@UF, 2022-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Subjects: deep learning; uncertainty quantification; explainable AI
Online Access: https://journals.flvc.org/FLAIRS/article/view/130662
_version_ 1849763309485555712
author Katherine Elizabeth Brown
Douglas A. Talbert
author_facet Katherine Elizabeth Brown
Douglas A. Talbert
author_sort Katherine Elizabeth Brown
collection DOAJ
description The application of artificial intelligence techniques in safety-critical domains such as medicine and self-driving vehicles has raised questions regarding their trustworthiness and reliability. One well-researched avenue for improving trust in and reliability of deep learning is uncertainty quantification. Uncertainty measures the algorithm's lack of trust in its predictions, and this information is important for practitioners using machine learning-based decision support. A variety of techniques exist that produce uncertainty estimates for machine learning predictions; however, very few attempt to explain why that uncertainty exists in a prediction. Explainable Artificial Intelligence (XAI) is an umbrella term for techniques that provide some level of transparency into machine learning predictions; this can include information on which inputs contributed to or detracted from the algorithm's prediction. This work focuses on applying existing XAI techniques to deep neural networks to understand how features contribute to epistemic uncertainty. Epistemic uncertainty is a measure of confidence in a prediction given the distribution of the training data on which the neural network was trained. In this work, we apply common feature attribution XAI techniques to efficiently deduce explanations of epistemic uncertainty in deep neural networks.
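To make the abstract's idea concrete, below is a minimal, hypothetical sketch, not the article's implementation: epistemic uncertainty is approximated with Monte Carlo dropout as the variance of class probabilities over repeated stochastic forward passes, and the gradient of that uncertainty score with respect to the input serves as a simple feature attribution. The model, function names, and parameters (SmallNet, epistemic_uncertainty, n_samples) are illustrative assumptions, not details taken from the article.

```python
# Hypothetical sketch only -- NOT the paper's method. Epistemic uncertainty is estimated
# with Monte Carlo dropout (variance of softmax outputs across stochastic forward passes),
# and the gradient of that uncertainty with respect to the input is used as an attribution.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier with dropout so MC-dropout uncertainty can be estimated."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def epistemic_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 30) -> torch.Tensor:
    """Per-example variance of class probabilities across MC-dropout forward passes."""
    model.train()  # keep dropout active at prediction time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.var(dim=0).sum(dim=-1)  # shape: (batch,)

# Gradient-based attribution of uncertainty: which input features push it up or down?
model = SmallNet(n_features=10, n_classes=3)
x = torch.randn(4, 10, requires_grad=True)
epistemic_uncertainty(model, x).sum().backward()
attribution = x.grad  # (4, 10): each feature's local contribution to the uncertainty score
print(attribution)
```

The gradient step here is only a stand-in; in principle, any standard feature attribution technique could be applied to the uncertainty estimate rather than to the class score.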
format Article
id doaj-art-245d13937fb74398b6da43c6ea9eb9bb
institution DOAJ
issn 2334-0754
2334-0762
language English
publishDate 2022-05-01
publisher LibraryPress@UF
record_format Article
series Proceedings of the International Florida Artificial Intelligence Research Society Conference
spelling doaj-art-245d13937fb74398b6da43c6ea9eb9bb (indexed 2025-08-20T03:05:26Z): Katherine Elizabeth Brown (Tennessee Tech University) and Douglas A. Talbert (Tennessee Technological University). Using Explainable AI to Measure Feature Contribution to Uncertainty. Proceedings of the International Florida Artificial Intelligence Research Society Conference, vol. 35, LibraryPress@UF, 2022-05-01. ISSN 2334-0754, 2334-0762. doi:10.32473/flairs.v35i.130662. https://journals.flvc.org/FLAIRS/article/view/130662. Keywords: deep learning; uncertainty quantification; explainable ai.
spellingShingle Katherine Elizabeth Brown
Douglas A. Talbert
Using Explainable AI to Measure Feature Contribution to Uncertainty
Proceedings of the International Florida Artificial Intelligence Research Society Conference
deep learning
uncertainty quantification
explainable ai
title Using Explainable AI to Measure Feature Contribution to Uncertainty
title_full Using Explainable AI to Measure Feature Contribution to Uncertainty
title_fullStr Using Explainable AI to Measure Feature Contribution to Uncertainty
title_full_unstemmed Using Explainable AI to Measure Feature Contribution to Uncertainty
title_short Using Explainable AI to Measure Feature Contribution to Uncertainty
title_sort using explainable ai to measure feature contribution to uncertainty
topic deep learning
uncertainty quantification
explainable ai
url https://journals.flvc.org/FLAIRS/article/view/130662
work_keys_str_mv AT katherineelizabethbrown usingexplainableaitomeasurefeaturecontributiontouncertainty
AT douglasatalbert usingexplainableaitomeasurefeaturecontributiontouncertainty