Transferable Targeted Adversarial Attack on Synthetic Aperture Radar (SAR) Image Recognition
Main Authors:
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/17/1/146
Summary: Deep learning models have been widely applied to synthetic aperture radar (SAR) target recognition, offering end-to-end feature extraction that significantly enhances recognition performance. However, recent studies show that optical image recognition models are widely vulnerable to adversarial examples, which fool the models by adding imperceptible perturbations to the input. Although the targeted adversarial attack (TAA) has been realized in the white-box setup with full access to the SAR model’s knowledge, it is less practical in real-world scenarios where white-box access to the target model is not allowed. To the best of our knowledge, our work is the first to explore transferable TAA on SAR models. Since contrastive learning (CL) is commonly applied to enhance a model’s generalization, we utilize it to improve the generalization of adversarial examples generated on a source model to unseen target models in the black-box scenario. Thus, we propose the contrastive learning-based targeted adversarial attack, termed CL-TAA. Extensive experiments demonstrated that our proposed CL-TAA can significantly improve the transferability of adversarial examples to fool SAR models in the black-box scenario.
ISSN: 2072-4292
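
The summary describes CL-TAA only at a high level, and the paper’s exact loss and optimization procedure are not reproduced in this record. As a rough illustration of the general idea it points to (a targeted attack crafted on a white-box source model whose objective adds a contrastive-style feature term that pulls the adversarial example toward target-class features, a common way to improve black-box transferability), the following PyTorch sketch may help. The `feature_fn` interface, the `target_exemplars` input, and all hyperparameters are assumptions made for this sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def targeted_attack_contrastive(model, feature_fn, x, target_class, target_exemplars,
                                eps=8 / 255, alpha=1 / 255, steps=50, lam=1.0):
    """Illustrative PGD-style targeted attack on a white-box source model.

    A contrastive-style term pulls the adversarial example's feature embedding
    toward embeddings of target-class exemplars, one generic way to make
    targeted perturbations transfer better to unseen models. This is NOT the
    paper's CL-TAA algorithm, only a sketch of the general idea.

    model            -- surrogate classifier returning logits
    feature_fn       -- callable returning an embedding for an input batch
                        (hypothetical; e.g. a penultimate-layer hook)
    x                -- clean images in [0, 1], shape (N, C, H, W)
    target_class     -- labels the attack should force, shape (N,)
    target_exemplars -- images of the target class used as feature anchors
    """
    with torch.no_grad():
        tgt_feat = F.normalize(feature_fn(target_exemplars), dim=1)

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)

        logits = model(x_adv)
        feat = F.normalize(feature_fn(x_adv), dim=1)

        # Targeted cross-entropy: drive the prediction toward target_class.
        ce = F.cross_entropy(logits, target_class)
        # Contrastive-style term: minimizing (1 - mean cosine similarity)
        # increases similarity to the target-class exemplar features.
        contrast = 1.0 - (feat * tgt_feat).sum(dim=1).mean()

        loss = ce + lam * contrast
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Gradient descent on the targeted loss, projected into the
        # L-infinity ball of radius eps around the clean input.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

In a transfer evaluation, the returned `x_adv` would be fed to black-box target models that share no weights with the surrogate, and targeted success would be measured by how often those models predict `target_class`.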