Building consistency in explanations: Harmonizing CNN attributions for satellite-based land cover classification
Explainable machine learning has gained substantial attention for its role in enhancing transparency and trust in computer vision applications. Attribution methods like Grad-CAM and occlusion sensitivity analysis are frequently used to identify how features contribute to the predictions of neural networks...
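The abstract names occlusion sensitivity analysis as one of the attribution methods studied. As a rough illustration of the general idea (not the authors' implementation), occlusion sensitivity masks image regions one at a time and measures how much the model's score drops; the `occlusion_sensitivity` helper and `toy_model` below are purely hypothetical stand-ins:

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=8, baseline=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score drops; larger drops mean the region mattered more."""
    h, w = image.shape[:2]
    base_score = model(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = base_score - model(occluded)
    return heatmap

# Toy "model": scores an image by the mean brightness of its top-left
# quadrant, so the attribution map should highlight exactly that region.
def toy_model(img):
    return img[:16, :16].mean()

img = np.ones((32, 32))
hm = occlusion_sensitivity(toy_model, img, patch=16)  # 2x2 attribution map
```

With this toy setup, only occluding the top-left quadrant changes the score, so the map is nonzero only in that cell, which is the behavior attribution methods like this are meant to expose.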
| Main Authors: | Timo T. Stomberg, Lennart A. Reißner, Martin G. Schultz, Ribana Roscher |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-06-01 |
| Series: | Machine Learning with Applications |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2666827025000362 |
Similar Items
- Explainability of Subfield Level Crop Yield Prediction Using Remote Sensing
  by: Hiba Najjar, et al.
  Published: (2025-01-01)
- From Prediction to Explanation: Using Explainable AI to Understand Satellite-Based Riot Forecasting Models
  by: Scott Warnke, et al.
  Published: (2025-01-01)
- Enhancing Neural Network Interpretability Through Deep Prior-Guided Expected Gradients
  by: Su-Ying Guo, et al.
  Published: (2025-06-01)
- Attribute Relevance Score: A Novel Measure for Identifying Attribute Importance
  by: Pablo Neirz, et al.
  Published: (2024-11-01)
- Class Activation Map Guided Backpropagation for Discriminative Explanations
  by: Yongjie Liu, et al.
  Published: (2025-01-01)