Proto-Caps: interpretable medical image classification using prototype learning and privileged information

Bibliographic Details
Main Authors: Luisa Gallée, Catharina Silvia Lisson, Timo Ropinski, Meinrad Beer, Michael Götz
Format: Article
Language: English
Published: PeerJ Inc. 2025-05-01
Series: PeerJ Computer Science
Online Access: https://peerj.com/articles/cs-2908.pdf
Description
Summary: Explainable artificial intelligence (xAI) is becoming increasingly important as the need to understand a model’s reasoning grows when such models are applied in high-risk areas. This is especially crucial in the field of medicine, where decision support systems are utilised to make diagnoses or to determine appropriate therapies. Here it is essential to provide intuitive and comprehensive explanations so that the system’s correctness can be evaluated. To meet this need, we have developed Proto-Caps, an intrinsically explainable model for image classification. It explains its decisions by providing visual prototypes that resemble specific appearance features. These characteristics are predefined by humans, which on the one hand makes them understandable and on the other hand leads the model to base its decision on the same features as a human expert. On two public datasets, this method outperforms existing explainable approaches, despite the additional explainability modality provided by the visual prototypes. In addition to the performance evaluations, we conducted a truthfulness analysis by examining the joint information between the target prediction and its explanation output, to ensure that the explanation actually reflects the reasoning behind the target classification. Through extensive hyperparameter studies, we also identified optimal model settings, providing a starting point for further research. Our work emphasises the prospects of combining xAI approaches for greater explainability and demonstrates that incorporating explainability does not necessarily lead to a loss of performance.
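The prototype-based decision process described in the summary can be sketched as follows. This is a minimal illustration only, not the authors’ implementation: the array shapes, function names, and the cosine-similarity nearest-prototype rule are all assumptions for exposition.

```python
import numpy as np

# Hypothetical setup: each human-defined appearance attribute has a set of
# learned prototype vectors in the model's latent space. A new image is
# embedded, compared to every prototype, and the most similar prototype per
# attribute serves both as evidence for classification and as the visual
# explanation shown to the user.

rng = np.random.default_rng(0)
n_attributes, n_protos, dim = 3, 4, 8
prototypes = rng.normal(size=(n_attributes, n_protos, dim))  # stand-in for learned prototypes

def explain_and_classify(embedding, prototypes):
    """Return one similarity score per attribute and the index of the
    closest prototype, which acts as the visual explanation."""
    scores, best = [], []
    for attr_protos in prototypes:
        # cosine similarity between the image embedding and each prototype
        s = attr_protos @ embedding / (
            np.linalg.norm(attr_protos, axis=1) * np.linalg.norm(embedding)
        )
        scores.append(s.max())        # strongest evidence for this attribute
        best.append(int(s.argmax()))  # prototype to display as explanation
    return np.array(scores), best

embedding = rng.normal(size=dim)  # stand-in for an encoded image
scores, proto_ids = explain_and_classify(embedding, prototypes)
# scores feed a downstream classifier; proto_ids point to the example
# images that justify the prediction
```

The key property this sketch captures is that the same quantities driving the prediction (the per-attribute similarities) also select the prototypes presented as the explanation, so the explanation cannot drift from the decision.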
ISSN:2376-5992