Advanced Grad-CAM extensions for interpretable aphasia speech keyword classification: Bridging the gap in impaired speech with XAI

Bibliographic Details
Main Authors: Gowri Prasood Usha, John Sahaya Rani Alex
Format: Article
Language: English
Published: Elsevier 2024-12-01
Series: Results in Engineering
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123024016669
Description
Summary: Aphasia, a language disorder caused by brain injury, presents significant speech recognition and classification challenges due to irregular speech patterns. While the standard Grad-CAM (Gradient-weighted Class Activation Mapping) technique is widely used for model interpretation, its application to impaired speech remains largely unexplored. To address this gap, we introduce a set of enhanced Grad-CAM techniques, namely Enhanced Directional Grad-CAM (ED-GCAM), Multi-Scale Channel-wise Grad-CAM (MSCW-GCAM), Stochastic Gradient-Dropout Integrated Grad-CAM (SGD-GCAM), and Enhanced Hierarchical Filtered Grad-CAM (EH-FCAM), to improve interpretability and performance in aphasia speech keyword classification. When applied to attention-based CNN models, these techniques generate more focused, class-specific heatmaps, providing a deeper understanding of model behaviour, particularly for noisy and impaired speech. Our results demonstrate that the enhanced Grad-CAM methods outperform standard Grad-CAM by offering more detailed and meaningful explanations, which is critical for interpreting models applied to aphasia speech. We evaluate the techniques both qualitatively and with perturbation-based quantitative metrics: trustworthiness, infidelity, and sufficiency scores. Among the proposed techniques, ED-GCAM outperformed all others. These methods substantially improve the accuracy and transparency of speech processing models and show potential for clinical application.
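For context, the standard Grad-CAM baseline that the four proposed variants extend can be sketched as follows. This is a minimal NumPy illustration of the generic technique (weight each feature map by the global average of its gradients, sum, then ReLU), not the authors' implementation; the array shapes and the random toy inputs are assumptions for demonstration only.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Standard Grad-CAM on one target convolutional layer.

    activations: (K, H, W) feature maps for one input
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights: global average pooling of the gradients
    weights = gradients.mean(axis=(1, 2))                 # shape (K,)
    # Weighted combination of the feature maps
    cam = np.tensordot(weights, activations, axes=1)      # shape (H, W)
    # ReLU keeps only features with positive influence on the class score
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam /= cam.max()                                  # scale to [0, 1] for display
    return cam

# Toy example with random stand-ins for real activations and gradients
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))            # 8 hypothetical feature maps
grads = rng.standard_normal((8, 7, 7))  # hypothetical class-score gradients
heatmap = grad_cam(acts, grads)
print(heatmap.shape)
```

In practice the activations and gradients would be captured from a trained CNN (e.g. via framework hooks on a chosen layer) and the heatmap upsampled onto the input spectrogram; the paper's ED-GCAM, MSCW-GCAM, SGD-GCAM, and EH-FCAM variants each modify how these weights and maps are computed.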
ISSN: 2590-1230