Class Activation Map Guided Backpropagation for Discriminative Explanations
Main Authors: | Yongjie Liu, Wei Guo, Xudong Lu, Lanju Kong, Zhongmin Yan |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Applied Sciences |
Subjects: | interpretability; gradient-based feature attribution; class activation map |
Online Access: | https://www.mdpi.com/2076-3417/15/1/379 |
_version_ | 1841549368538169344 |
author | Yongjie Liu; Wei Guo; Xudong Lu; Lanju Kong; Zhongmin Yan |
author_facet | Yongjie Liu; Wei Guo; Xudong Lu; Lanju Kong; Zhongmin Yan |
author_sort | Yongjie Liu |
collection | DOAJ |
description | The interpretability of neural networks has garnered significant attention. In the domain of computer vision, gradient-based feature attribution techniques like RectGrad have been proposed to utilize saliency maps to demonstrate feature contributions to predictions. Despite advancements, RectGrad falls short in category discrimination, producing similar saliency maps across categories. This paper pinpoints the ineffectiveness of threshold-based strategies in RectGrad for distinguishing feature gradients and introduces Class activation map Guided BackPropagation (CGBP) to tackle the issue. CGBP leverages class activation maps during backpropagation to enhance gradient selection, achieving consistent improvements across four models (VGG16, VGG19, ResNet50, and ResNet101) on ImageNet’s validation set. Notably, on VGG16, CGBP improves SIC, AIC, and IS scores by 10.3%, 11.5%, and 4.5%, respectively, compared to RectGrad while maintaining competitive DS performance. Moreover, CGBP demonstrates greater sensitivity to model parameter changes than RectGrad, as confirmed by a sanity check. The proposed method has broad applicability in scenarios like model debugging, where it identifies causes of misclassification, and medical image diagnosis, where it enhances user trust by aligning visual explanations with clinical insights. |
format | Article |
id | doaj-art-a71e0149751945889949cfcaa352ec03 |
institution | Kabale University |
issn | 2076-3417 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj-art-a71e0149751945889949cfcaa352ec03; 2025-01-10T13:15:21Z; eng; MDPI AG; Applied Sciences; ISSN 2076-3417; 2025-01-01; vol. 15, iss. 1, art. 379; DOI 10.3390/app15010379; Class Activation Map Guided Backpropagation for Discriminative Explanations; Yongjie Liu, Wei Guo, Xudong Lu, Lanju Kong, Zhongmin Yan (all: School of Software, Shandong University, Jinan 250000, China); abstract as in the description field above; https://www.mdpi.com/2076-3417/15/1/379; interpretability; gradient-based feature attribution; class activation map |
spellingShingle | Yongjie Liu; Wei Guo; Xudong Lu; Lanju Kong; Zhongmin Yan; Class Activation Map Guided Backpropagation for Discriminative Explanations; Applied Sciences; interpretability; gradient-based feature attribution; class activation map |
title | Class Activation Map Guided Backpropagation for Discriminative Explanations |
title_full | Class Activation Map Guided Backpropagation for Discriminative Explanations |
title_fullStr | Class Activation Map Guided Backpropagation for Discriminative Explanations |
title_full_unstemmed | Class Activation Map Guided Backpropagation for Discriminative Explanations |
title_short | Class Activation Map Guided Backpropagation for Discriminative Explanations |
title_sort | class activation map guided backpropagation for discriminative explanations |
topic | interpretability; gradient-based feature attribution; class activation map |
url | https://www.mdpi.com/2076-3417/15/1/379 |
work_keys_str_mv | AT yongjieliu classactivationmapguidedbackpropagationfordiscriminativeexplanations AT weiguo classactivationmapguidedbackpropagationfordiscriminativeexplanations AT xudonglu classactivationmapguidedbackpropagationfordiscriminativeexplanations AT lanjukong classactivationmapguidedbackpropagationfordiscriminativeexplanations AT zhongminyan classactivationmapguidedbackpropagationfordiscriminativeexplanations |
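The mechanism the abstract describes, replacing RectGrad's fixed gradient threshold with gating by a class activation map, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy arrays, the percentile simplification of RectGrad, and the min-max normalization are all assumptions made for illustration.

```python
import numpy as np

def rectgrad_mask(grads, q=80):
    """RectGrad-style gating (simplified): zero every gradient below the
    q-th percentile. The threshold depends only on gradient magnitudes,
    so the same mask results regardless of the target class."""
    tau = np.percentile(grads, q)
    return np.where(grads > tau, grads, 0.0)

def cgbp_mask(grads, cam):
    """CGBP-style gating (sketch): weight backpropagated gradients by a
    min-max-normalized class activation map, so only class-relevant
    locations keep propagating."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return grads * cam

# Toy 2x2 feature-map gradients and two hypothetical per-class CAMs.
grads = np.array([[1.0, 2.0], [3.0, 4.0]])
cam_cat = np.array([[0.0, 0.0], [1.0, 1.0]])  # hypothetical CAM, class A
cam_dog = np.array([[1.0, 1.0], [0.0, 0.0]])  # hypothetical CAM, class B
```

With the same gradients, the threshold rule yields one mask for every class, while the CAM-guided rule yields class-specific attributions, which is the category-discrimination property the abstract reports.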