Using Explainable AI to Measure Feature Contribution to Uncertainty
The application of artificial intelligence techniques in safety-critical domains such as medicine and self-driving vehicles has raised questions regarding its trustworthiness and reliability. One well-researched avenue for improving trust in and reliability of deep learning is uncertainty quantifica...
| Main Authors: | Katherine Elizabeth Brown, Douglas A. Talbert |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2022-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/130662 |
Similar Items
- Explainable and Uncertainty Aware AI-Based Ransomware Detection
  by: Henry Kabuye, et al.
  Published: (2025-01-01)
- The effectiveness of explainable AI on human factors in trust models
  by: Justin C. Cheung, et al.
  Published: (2025-07-01)
- An explainable AI-based framework for predicting and optimizing blast-induced ground vibrations in surface mining
  by: Charan Kumar Ala, et al.
  Published: (2025-09-01)
- Explainable AI supported hybrid deep learning method for layer 2 intrusion detection
  by: Ilhan Firat Kilincer
  Published: (2025-06-01)
- Editorial: Explainable, trustworthy, and responsible AI in image processing
  by: Akshay Agarwal
  Published: (2025-05-01)