Interpretable Deep Learning for Pneumonia Detection Using Chest X-Ray Images


Bibliographic Details
Main Authors: Jovito Colin, Nico Surantha
Format: Article
Language:English
Published: MDPI AG 2025-01-01
Series:Information
Subjects: pneumonia detection; interpretable deep learning; Layer-wise Relevance Propagation; Adversarial Training; Class Activation Maps; Attention Mechanisms
Online Access:https://www.mdpi.com/2078-2489/16/1/53
collection DOAJ
description Pneumonia remains a global health issue, creating the need for accurate detection methods for effective treatment. Deep learning models such as ResNet50 show promise in detecting pneumonia from chest X-rays; however, their black-box nature limits their transparency, which falls short of the level needed for clinical trust. This study aims to improve model interpretability by comparing four interpretability techniques, namely Layer-wise Relevance Propagation (LRP), Adversarial Training, Class Activation Maps (CAMs), and the Spatial Attention Mechanism, and by determining which best fits the model, enhancing its transparency with minimal impact on performance. Each technique was evaluated for its impact on accuracy, sensitivity, specificity, AUC-ROC, Mean Relevance Score (MRS), and a calculated trade-off score that balances interpretability and performance. The results indicate that LRP was the most effective at enhancing interpretability, achieving high scores across all metrics without sacrificing diagnostic accuracy. The model achieved 0.91 accuracy and 0.85 interpretability (MRS), demonstrating its potential for clinical integration. In contrast, Adversarial Training, CAMs, and the Spatial Attention Mechanism showed trade-offs between interpretability and performance, each highlighting unique image features but with some impact on specificity and accuracy.
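The record does not include the authors' code, but as background on one of the four compared techniques: Class Activation Maps are conventionally computed as a class-weighted sum of the final convolutional feature maps of a network that ends in global average pooling. A minimal NumPy sketch of that standard formulation follows (array shapes, the function name, and the toy data are illustrative, not taken from the paper):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a Class Activation Map (CAM) for one class.

    feature_maps: (K, H, W) activations of the last conv layer
    fc_weights:   (num_classes, K) weights of the final linear layer
                  that follows global average pooling
    class_idx:    index of the class to explain
    """
    # CAM_c(x, y) = sum_k w_{c,k} * f_k(x, y)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    # Normalize to [0, 1] so the map can be overlaid as a heatmap
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 3 feature maps of size 4x4, 2 classes
rng = np.random.default_rng(0)
fmaps = rng.random((3, 4, 4))
weights = rng.random((2, 3))
heatmap = class_activation_map(fmaps, weights, class_idx=1)  # shape (4, 4)
```

In practice the heatmap is upsampled to the input X-ray's resolution and overlaid on the image, which is how the region driving a pneumonia prediction is visualized.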
id doaj-art-35ce4a1f57ef4d16a4d72e7b0509f9a2
institution Kabale University
issn 2078-2489
doi 10.3390/info16010053
affiliations Jovito Colin; Nico Surantha: Computer Science Department, BINUS Graduate Program—Master of Computer Science, Bina Nusantara University, Jakarta 11480, Indonesia