The shallowest transparent and interpretable deep neural network for image recognition

Abstract Trusting the decisions of deep learning models requires transparency of their reasoning process, especially for high-risk decisions. In this paper, a fully transparent deep learning model (Shallow-ProtoPNet) is introduced. The model consists of a transparent prototype layer followed by an indispensable fully connected layer that connects prototypes and logits; interpretable models are usually not fully transparent because they use some black-box part as their baseline. This is the key difference between Shallow-ProtoPNet and the prototypical part network (ProtoPNet): the proposed Shallow-ProtoPNet does not use any black-box part as a baseline, whereas ProtoPNet uses the convolutional layers of black-box models as its baseline. On a dataset of X-ray images, the performance of the model is comparable to that of other interpretable models that are not completely transparent. Since Shallow-ProtoPNet has only one (transparent) convolutional layer and one fully connected layer, it is the shallowest transparent deep neural network, with only two layers between the input and output layers. Consequently, the model is much smaller than its counterparts, making it suitable for use in embedded systems.
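Based only on the abstract's description, the forward pass of such a model (a prototype layer whose patch-to-prototype similarities are max-pooled, followed by a fully connected layer mapping prototype scores to logits) might be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the log-ratio similarity is borrowed from the ProtoPNet literature, and the function name and all shapes are assumptions.

```python
import numpy as np

def shallow_protopnet_forward(x_patches, prototypes, fc_weights, eps=1e-4):
    """Hypothetical forward pass of a Shallow-ProtoPNet-style model.

    x_patches:  (n_patches, d) feature patches from the single transparent conv layer
    prototypes: (m, d) learned prototype vectors
    fc_weights: (num_classes, m) fully connected layer connecting prototypes to logits
    """
    # Squared L2 distance between every patch and every prototype -> (n_patches, m)
    d2 = ((x_patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # ProtoPNet-style similarity: large when a patch is close to a prototype
    sim = np.log((d2 + 1.0) / (d2 + eps))
    # Max-pool over patches: keep the strongest evidence for each prototype
    proto_scores = sim.max(axis=0)  # (m,)
    # Fully connected layer maps prototype activations to class logits
    return fc_weights @ proto_scores  # (num_classes,)
```

The max-pooling step is what makes the reasoning inspectable in prototype-based models: each prototype's score can be traced back to the single image patch that activated it most strongly.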

Bibliographic Details
Main Authors: Gurmail Singh, Stefano Frizzo Stefenon, Kin-Choong Yow
Format: Article
Language: English
Published: Nature Portfolio, 2025-04-01
Series: Scientific Reports
ISSN: 2045-2322
Subjects: Deep learning; Image classification; Interpretable models; Prototypical part network
Online Access: https://doi.org/10.1038/s41598-025-92945-2
Author affiliations:
Gurmail Singh: Department of Computer Sciences, University of Wisconsin-Madison
Stefano Frizzo Stefenon: Faculty of Engineering and Applied Sciences, University of Regina
Kin-Choong Yow: Faculty of Engineering and Applied Sciences, University of Regina