How informative is your XAI? Assessing the quality of explanations through information power
A growing consensus emphasizes the efficacy of user-centered and personalized approaches within the field of explainable artificial intelligence (XAI). The proliferation of diverse explanation strategies in recent years promises to improve the interaction between humans and explainable agents. This...
Main Authors: Marco Matarese, Francesco Rea, Katharina J. Rohlfing, Alessandra Sciutti
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-01-01
Series: Frontiers in Computer Science
Online Access: https://www.frontiersin.org/articles/10.3389/fcomp.2024.1412341/full
Similar Items
- Explainable AI chatbots towards XAI ChatGPT: A review
  by: Attila Kovari
  Published: (2025-01-01)
- Description-based Post-hoc Explanation for Twitter List Recommendations
  by: Havva Alizadeh Noughabi, et al.
  Published: (2024-12-01)
- XAI-Enhanced Machine Learning for Obesity Risk Classification: A Stacking Approach With LIME Explanations
  by: Mohammad Azad, et al.
  Published: (2025-01-01)
- The Machine as an Autonomous Explanatory Agent
  by: Dilek Yargan
  Published: (2024-07-01)
- Dual feature-based and example-based explanation methods
  by: Andrei Konstantinov, et al.
  Published: (2025-02-01)