Building consistency in explanations: Harmonizing CNN attributions for satellite-based land cover classification

Explainable machine learning has gained substantial attention for its role in enhancing transparency and trust in computer vision applications. Attribution methods like Grad-CAM and occlusion sensitivity analysis are frequently used to identify how features contribute to the predictions of neural networks. However, a key challenge is that different attribution methods often produce different outcomes, undermining trust in their results. Furthermore, the unique characteristics of remote sensing imagery pose additional challenges for attribution interpretation: it primarily comprises continuous “stuff” classes rather than objects, exhibits fine-grained spatial variability, contains mixed pixels, is often multispectral, and is spatially heterogeneous. To tackle this challenge, we present a novel methodology that harmonizes attributions, resulting in: (1) greater consistency across different attribution methods; (2) more meaningful explanations when validated against known segmentation ground truth; and (3) enhanced transparency and traceability. This is achieved by coherently linking feature representations to attributions derived from analyzing the training data, enabling direct attribution assignment to features in (unseen) images. We evaluate our methodology using two satellite-based land cover classification datasets, three convolutional neural network architectures, and nine attribution methods. Harmonizing attributions increases the Pearson correlation coefficient between different attribution methods by an average of 0.18 across all datasets, models, and methods, and improves the micro F1-score, a measure of accuracy, by 12%. We demonstrate that Grad-CAM attributions are inherently well aligned with the features, whereas other gradient-based attribution methods exhibit significant noise, which harmonization mitigates. Harmonization further enhances the resolution of occlusion-based attribution maps and corrects misleading explanations.
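The consistency gain reported above is measured with the Pearson correlation coefficient between attribution maps produced by different methods. As a minimal sketch of that metric (the function and variable names here are hypothetical illustrations, not the paper's implementation), two methods' maps for the same image can be scored as:

```python
import numpy as np

def attribution_consistency(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Pearson correlation between two flattened attribution maps.

    A value near 1 means the two attribution methods highlight the
    same image regions; values near 0 or below indicate disagreement.
    """
    a = map_a.ravel().astype(float)
    b = map_b.ravel().astype(float)
    a -= a.mean()  # center both maps so only spatial pattern matters
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # a constant map carries no spatial signal
    return float(np.dot(a, b) / denom)

# Toy example with a 2x2 "attribution map": an identical map
# correlates near 1, a sign-flipped map correlates near -1.
gradcam_map = np.array([[0.1, 0.9], [0.2, 0.8]])
same = attribution_consistency(gradcam_map, gradcam_map)    # close to 1.0
flipped = attribution_consistency(gradcam_map, -gradcam_map)  # close to -1.0
```

Averaging this score over many image pairs and method pairs gives the kind of consistency statistic the abstract quantifies (the reported +0.18 average improvement).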

Bibliographic Details
Main Authors: Timo T. Stomberg, Lennart A. Reißner, Martin G. Schultz, Ribana Roscher
Affiliations: Timo T. Stomberg (corresponding author) and Lennart A. Reißner: Institute of Geodesy and Geoinformation, University of Bonn, Niebuhrstraße 1a, 53113 Bonn, Germany. Martin G. Schultz: Jülich Supercomputing Centre, Forschungszentrum Jülich, Wilhelm-Johnen-Straße, 52428 Jülich, Germany, and Department of Computer Science, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany. Ribana Roscher: Institute of Geodesy and Geoinformation, University of Bonn, and Institute of Bio- and Geosciences, Forschungszentrum Jülich.
Format: Article
Language: English
Published: Elsevier, 2025-06-01
Series: Machine Learning with Applications
ISSN: 2666-8270
DOI: 10.1016/j.mlwa.2025.100653
Collection: DOAJ (record id: doaj-art-d810e8773a1f48ec847fbc73d2cb86d8)
Subjects: Explainable machine learning; Attribution methods; Feature representations; Feature space; Land cover; Satellite imagery
Online Access: http://www.sciencedirect.com/science/article/pii/S2666827025000362