OD-SHIELD: Convolutional Autoencoder-Based Defense Against Adversarial Patch Attacks in Object Detection
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11021559/ |
| Summary: | In the evolving landscape of deep neural network security, adversarial patch attacks present a serious challenge for object detection systems. We introduce OD-Shield, a novel defense approach that employs a convolutional autoencoder framework to detect and remove anomalous regions in patched images, subsequently reconstructing these regions to mitigate adversarial effects. The reconstructed images are then processed by the object detector, thereby restoring reliable performance under diverse attack scenarios. Distinctly model-agnostic, OD-Shield operates as a pre-processing step and can be applied to a wide range of tasks, including image classification and object detection, without compromising the fidelity of the original image. Experiments on benchmark datasets (COCO, VisDrone, and Argoverse) reveal that OD-Shield outperforms existing defenses by 13%–47% on COCO, highlighting its effectiveness in addressing critical security vulnerabilities. This work not only tackles the immediate threat of adversarial patches but also lays the foundation for future research into adaptive, resilient defense mechanisms that keep pace with evolving adversarial tactics. |
| ISSN: | 2169-3536 |
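The summary describes a detect-reconstruct-replace pre-processing pipeline: score regions by how poorly an autoencoder reconstructs them, then overwrite high-error (likely patched) regions with the reconstruction before detection. The sketch below illustrates only that control flow; it is not the paper's implementation. A simple box blur stands in for the trained convolutional autoencoder, and the block size and error threshold are hypothetical.

```python
import numpy as np

def box_blur(img, k=5):
    """Cheap stand-in for the autoencoder's reconstruction (assumption:
    the real OD-Shield model is a trained convolutional autoencoder)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def shield(img, recon_fn, block=8, thresh=0.15):
    """Replace blocks whose reconstruction error is anomalously high
    with the reconstruction, leaving benign regions untouched."""
    recon = recon_fn(img)
    err = np.abs(img - recon)
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            if err[y:y + block, x:x + block].mean() > thresh:
                out[y:y + block, x:x + block] = recon[y:y + block, x:x + block]
    return out

# Toy grayscale image with a high-frequency "patch" in one corner region.
rng = np.random.default_rng(0)
img = np.full((32, 32), 0.5)
img[8:16, 8:16] = rng.random((8, 8))  # noisy adversarial-style patch
cleaned = shield(img, box_blur)       # patch block is overwritten; smooth
                                      # background blocks pass through unchanged
```

In the paper's setting the cleaned image would then be fed to any off-the-shelf object detector, which is what makes the defense model-agnostic.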