A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers.

Recent advancements in deep learning have shown promise in enhancing the performance of medical image analysis. In pathology, automated whole slide imaging has transformed clinical workflows by streamlining routine tasks and supporting diagnosis and prognosis. However, the lack of transparency of deep learning models, often described as black boxes, poses a significant barrier to their clinical adoption. This study evaluates various explainability methods for Vision Transformers, assessing their effectiveness in explaining the rationale behind their classification predictions on histopathological images. Using a Vision Transformer trained on the publicly available CAMELYON16 dataset, comprising 399 whole slide images of lymph node metastases from patients with breast cancer, we conducted a comparative analysis of a diverse range of state-of-the-art techniques for generating explanations through heatmaps, including Attention Rollout, Integrated Gradients, RISE, and ViT-Shapley. Our findings reveal that Attention Rollout and Integrated Gradients are prone to artifacts, while RISE and particularly ViT-Shapley generate more reliable and interpretable heatmaps. ViT-Shapley also demonstrated a faster runtime and superior performance on insertion and deletion metrics. These results suggest that integrating ViT-Shapley-based heatmaps into pathology reports could enhance trust and scalability in clinical workflows, facilitating the adoption of explainable artificial intelligence in pathology.

Bibliographic Details
Main Authors: Jens Rahnfeld, Mehdi Naouar, Gabriel Kalweit, Joschka Boedecker, Estelle Dubruc, Maria Kalweit
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-04-01
Series: PLOS Digital Health
Online Access: https://doi.org/10.1371/journal.pdig.0000792
author Jens Rahnfeld
Mehdi Naouar
Gabriel Kalweit
Joschka Boedecker
Estelle Dubruc
Maria Kalweit
collection DOAJ
description Recent advancements in deep learning have shown promise in enhancing the performance of medical image analysis. In pathology, automated whole slide imaging has transformed clinical workflows by streamlining routine tasks and supporting diagnosis and prognosis. However, the lack of transparency of deep learning models, often described as black boxes, poses a significant barrier to their clinical adoption. This study evaluates various explainability methods for Vision Transformers, assessing their effectiveness in explaining the rationale behind their classification predictions on histopathological images. Using a Vision Transformer trained on the publicly available CAMELYON16 dataset, comprising 399 whole slide images of lymph node metastases from patients with breast cancer, we conducted a comparative analysis of a diverse range of state-of-the-art techniques for generating explanations through heatmaps, including Attention Rollout, Integrated Gradients, RISE, and ViT-Shapley. Our findings reveal that Attention Rollout and Integrated Gradients are prone to artifacts, while RISE and particularly ViT-Shapley generate more reliable and interpretable heatmaps. ViT-Shapley also demonstrated a faster runtime and superior performance on insertion and deletion metrics. These results suggest that integrating ViT-Shapley-based heatmaps into pathology reports could enhance trust and scalability in clinical workflows, facilitating the adoption of explainable artificial intelligence in pathology.
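Of the heatmap methods compared in the abstract, Attention Rollout is the simplest to state: it aggregates a Vision Transformer's per-layer attention maps into one relevance map by multiplying them through the network, adding an identity term at each layer to account for residual connections. A minimal NumPy sketch of that idea follows; this is not the authors' implementation, and the assumed input format (one `(heads, tokens, tokens)` array per layer) is an assumption:

```python
import numpy as np

def attention_rollout(attentions):
    """Attention Rollout: propagate attention through a ViT's layers by
    matrix-multiplying per-layer attention maps (Abnar & Zuidema, 2020).

    attentions: list of per-layer arrays, each shaped (heads, tokens, tokens),
                with rows summing to 1 (assumed format, e.g. from a ViT run
                with attention outputs enabled).
    Returns a (tokens, tokens) matrix; the [CLS] row (index 0) scores how
    much each input patch contributes to the classification token, which is
    what gets reshaped into a heatmap over the image.
    """
    rollout = np.eye(attentions[0].shape[-1])
    for layer_attn in attentions:
        attn = layer_attn.mean(axis=0)                   # average over heads
        attn = attn + np.eye(attn.shape[0])              # residual connection
        attn = attn / attn.sum(axis=-1, keepdims=True)   # renormalize rows
        rollout = attn @ rollout                         # propagate one layer
    return rollout
```

Because every factor is row-stochastic, the rollout matrix stays row-stochastic, so the [CLS] row can be read directly as a distribution of relevance over patches.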
format Article
id doaj-art-34b6bae49c32495eb7dfe08e68afb375
institution OA Journals
issn 2767-3170
language English
publishDate 2025-04-01
publisher Public Library of Science (PLoS)
record_format Article
series PLOS Digital Health
title A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers.
url https://doi.org/10.1371/journal.pdig.0000792