A comprehensive analysis of perturbation methods in explainable AI feature attribution validation for neural time series classifiers
Abstract: In domains where AI model predictions have significant consequences, such as industry, medicine, and finance, the need for explainable AI (XAI) is of utmost importance. However, ensuring that explanation methods provide faithful and trustworthy explanations requires rigorous validation. Fea...
| Main Authors: | Ilija Šimić, Eduardo Veas, Vedran Sabol |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-09538-2 |
Similar Items
- P-TAME: Explain Any Image Classifier With Trained Perturbations
  by: Mariano V. Ntrougkas, et al.
  Published: (2025-01-01)
- Explainable AI for time series prediction in economic mental health analysis
  by: Ying Yang, et al.
  Published: (2025-06-01)
- Predicting Diabetic Distress and Emotional Burden in Type-2 Diabetes Using Explainable AI
  by: Ali Al Bataineh, et al.
  Published: (2025-01-01)
- A latent diffusion approach to visual attribution in medical imaging
  by: Ammar Adeel Siddiqui, et al.
  Published: (2025-01-01)
- The effectiveness of explainable AI on human factors in trust models
  by: Justin C. Cheung, et al.
  Published: (2025-07-01)