Seg-Eigen-CAM: Eigen-Value-Based Visual Explanations for Semantic Segmentation Models
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Applied Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3417/15/13/7562 |
| Summary: | In recent years, most Explainable Artificial Intelligence methods have primarily focused on image classification. Although research on interpretability in image segmentation has been increasing, it remains relatively limited. As an extension of Grad-CAM, several methods have been proposed and applied to image segmentation with the aim of enhancing existing techniques and adapting their properties. However, in this study, we highlight a common issue with gradient-based methods when generating visual explanations—these methods tend to emphasize background information, resulting in significant noise, especially when dealing with image segmentation tasks involving complex or cluttered backgrounds. Inspired by the widely used Eigen-CAM method, this study proposes a novel explainability approach tailored for semantic segmentation. By integrating gradient information and introducing a sign correction strategy, our method enhances spatial localization and reduces background noise, particularly in complex scenes. Through empirical studies, we compare our method with several representative methods, employing multiple evaluation metrics to quantify explainability and validate the advantages of our method. Overall, this study advances explainability methods for convolutional neural networks in semantic segmentation. Our approach not only preserves localized attention but also offers a simpler and more intuitive CAM, which has the potential to play a crucial role in sensitive application scenarios, fostering the development of trustworthy AI models. |
|---|---|
| ISSN: | 2076-3417 |
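The summary above describes an Eigen-CAM-inspired explanation method that additionally integrates gradient information and applies a sign correction to improve localization for semantic segmentation. The record does not reproduce the paper's exact formulation, so the following is only a minimal illustrative sketch of that general idea, assuming NumPy arrays of layer activations and gradients; the function name, the particular gradient weighting, and the sign-correction heuristic are hypothetical and are not the authors' published algorithm.

```python
import numpy as np

def eigen_cam_sketch(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Illustrative Eigen-CAM-style saliency map with gradient weighting
    and sign correction (a sketch, not the paper's method).

    activations, gradients: arrays of shape (C, H, W) taken from a target
    convolutional layer. Returns a map of shape (H, W) scaled to [0, 1].
    """
    C, H, W = activations.shape

    # One plausible way to "integrate gradient information": weight each
    # channel's activation by its gradient before the eigen-decomposition.
    weighted = activations * gradients

    # Flatten spatial dimensions: rows are pixels, columns are channels.
    X = weighted.reshape(C, H * W).T          # shape (H*W, C)
    X = X - X.mean(axis=0, keepdims=True)

    # Leading principal direction over channels via SVD (as in Eigen-CAM).
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v1 = vt[0]                                # shape (C,)

    # Project each pixel onto the leading direction to obtain the raw map.
    cam = (X @ v1).reshape(H, W)

    # Sign correction: SVD leaves the sign of v1 arbitrary, so flip the map
    # if it is anti-correlated with the mean gradient-weighted activation.
    reference = weighted.mean(axis=0)
    if np.sum(cam * reference) < 0:
        cam = -cam

    # Keep positive evidence and normalize to [0, 1] for visualization.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In a typical setup one would obtain `activations` from a forward hook on the chosen convolutional layer, obtain `gradients` by backpropagating the segmentation logits for the class or region of interest, and then upsample the returned map to the input resolution for overlay.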