Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability
Deep neural networks (DNNs) have shown strong performance in synthetic aperture radar (SAR) image classification. However, their “black-box” nature limits interpretability and poses challenges for robustness, which is critical for sensitive applications such as disaster assessment, environmental monitoring, and agricultural insurance. This study systematically evaluates the adversarial robustness of five representative DNNs (VGG11/16, ResNet18/101, and A-ConvNet) under a variety of attack and defense settings. Using eXplainable AI (XAI) techniques and attribution-based visualizations, we analyze how adversarial perturbations and adversarial training affect model behavior and decision logic. Our results reveal significant robustness differences across architectures, highlight interpretability limitations, and suggest practical guidelines for building more robust SAR classification systems. We also discuss challenges associated with large-scale, multi-class land use and land cover (LULC) classification under adversarial conditions.
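For readers unfamiliar with the attack family the abstract refers to, the idea behind gradient-based adversarial perturbations can be illustrated with a minimal FGSM-style sketch. This is an assumption for illustration only: the toy logistic model below is not the authors' SAR pipeline, and `fgsm_perturb` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One fast-gradient-sign step on a binary logistic classifier.

    x: input vector; w, b: model weights; y: true label in {0, 1};
    eps: L-infinity perturbation budget.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid probability of class 1
    grad_x = (p - y) * w               # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)   # step that increases the loss

# Toy data: a random linear model and one input with true label 1.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# Every coordinate moves by exactly eps, so the L-inf budget holds by construction,
# and the logit for the true class decreases.
```

The same sign-of-gradient step, applied to a deep network's input via backpropagation, is what makes visually negligible perturbations flip a classifier's decision, which is the phenomenon the study probes.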
| Main Authors: | Tianrui Chen, Limeng Zhang, Weiwei Guo, Zenghui Zhang, Mihai Datcu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Remote Sensing |
| Subjects: | synthetic aperture radar (SAR); image classification; deep learning; adversarial example; explainable artificial intelligence |
| Online Access: | https://www.mdpi.com/2072-4292/17/11/1943 |
| _version_ | 1849721863485259776 |
|---|---|
| author | Tianrui Chen Limeng Zhang Weiwei Guo Zenghui Zhang Mihai Datcu |
| author_facet | Tianrui Chen Limeng Zhang Weiwei Guo Zenghui Zhang Mihai Datcu |
| author_sort | Tianrui Chen |
| collection | DOAJ |
| description | Deep neural networks (DNNs) have shown strong performance in synthetic aperture radar (SAR) image classification. However, their “black-box” nature limits interpretability and poses challenges for robustness, which is critical for sensitive applications such as disaster assessment, environmental monitoring, and agricultural insurance. This study systematically evaluates the adversarial robustness of five representative DNNs (VGG11/16, ResNet18/101, and A-ConvNet) under a variety of attack and defense settings. Using eXplainable AI (XAI) techniques and attribution-based visualizations, we analyze how adversarial perturbations and adversarial training affect model behavior and decision logic. Our results reveal significant robustness differences across architectures, highlight interpretability limitations, and suggest practical guidelines for building more robust SAR classification systems. We also discuss challenges associated with large-scale, multi-class land use and land cover (LULC) classification under adversarial conditions. |
| format | Article |
| id | doaj-art-16ee39dee6054916aa9b734af2bbbd7f |
| institution | DOAJ |
| issn | 2072-4292 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Remote Sensing |
| spelling | doaj-art-16ee39dee6054916aa9b734af2bbbd7f; 2025-08-20T03:11:32Z; eng; MDPI AG; Remote Sensing; 2072-4292; 2025-06-01; vol. 17, iss. 11, art. 1943; 10.3390/rs17111943; Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability; Tianrui Chen, Limeng Zhang, Weiwei Guo, Zenghui Zhang, Mihai Datcu; Shanghai Key Laboratory of Intelligent Sensing and Recognition, Shanghai Jiao Tong University, Shanghai 200240, China (Chen, L. Zhang, Z. Zhang); Center of Digital Innovation, Tongji University, Shanghai 200092, China (Guo); Research Center for Spatial Information (CEOSpaceTech), POLITEHNICA Bucharest, Bucharest 011061, Romania (Datcu); https://www.mdpi.com/2072-4292/17/11/1943; synthetic aperture radar (SAR); image classification; deep learning; adversarial example; explainable artificial intelligence |
| spellingShingle | Tianrui Chen Limeng Zhang Weiwei Guo Zenghui Zhang Mihai Datcu Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability Remote Sensing synthetic aperture radar (SAR) image classification deep learning adversarial example explainable artificial intelligence |
| title | Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability |
| title_full | Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability |
| title_fullStr | Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability |
| title_full_unstemmed | Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability |
| title_short | Analyzing the Adversarial Robustness and Interpretability of Deep SAR Classification Models: A Comprehensive Examination of Their Reliability |
| title_sort | analyzing the adversarial robustness and interpretability of deep sar classification models a comprehensive examination of their reliability |
| topic | synthetic aperture radar (SAR) image classification deep learning adversarial example explainable artificial intelligence |
| url | https://www.mdpi.com/2072-4292/17/11/1943 |
| work_keys_str_mv | AT tianruichen analyzingtheadversarialrobustnessandinterpretabilityofdeepsarclassificationmodelsacomprehensiveexaminationoftheirreliability AT limengzhang analyzingtheadversarialrobustnessandinterpretabilityofdeepsarclassificationmodelsacomprehensiveexaminationoftheirreliability AT weiweiguo analyzingtheadversarialrobustnessandinterpretabilityofdeepsarclassificationmodelsacomprehensiveexaminationoftheirreliability AT zenghuizhang analyzingtheadversarialrobustnessandinterpretabilityofdeepsarclassificationmodelsacomprehensiveexaminationoftheirreliability AT mihaidatcu analyzingtheadversarialrobustnessandinterpretabilityofdeepsarclassificationmodelsacomprehensiveexaminationoftheirreliability |