Measuring Interpretability: A systematic literature review of interpretability measures in artificial intelligence
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/138992 |
| Summary: | Advancement in any field requires approaches for measurement; without such approaches, improvement within the field is inhibited. In the context of interpretability in Artificial Intelligence (AI), the lack of widely adopted evaluation and measurement approaches has hindered progress. While some approaches in the literature propose ways to measure interpretability, no consensus exists on its objective measurement. To advance the state of the art, a clear understanding of these approaches is essential. This paper presents a systematic review of existing approaches that propose to measure or quantify interpretability and its aspects. The analysis identifies important aspects to consider when measuring interpretability. We found that no approach directly measures interpretability; instead, existing approaches quantify aspects associated with it. This review identifies four such aspects. |
| ISSN: | 2334-0754, 2334-0762 |