Adversarial detection based on feature invariant in license plate recognition systems
Main Authors:
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD, 2024-12-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
Subjects:
Online Access: http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2024080
Summary: Deep neural networks have become an integral part of people's daily lives. However, researchers have observed that these networks are susceptible to adversarial samples, which induce abnormal behaviors such as misclassification by the network model. The presence of adversarial samples poses a significant threat to the application of deep neural networks, especially in security-sensitive scenarios such as license plate recognition systems. Most existing defense and detection technologies against adversarial samples show promising results for specific types of adversarial attacks, but they often lack generality across all attack types. To counter adversarial sample attacks on real-world license plate recognition systems, an unsupervised adversarial sample detection system named FIAD was proposed. FIAD was built on the invariants inherent to neural networks trained on clean samples and on the local intrinsic dimensionality of clean samples, combining neural network invariants and local intrinsic dimensionality invariants for effective detection. The detection system was deployed in the widely used open-source license plate recognition systems HyperLPR and EasyPR, and extensive experiments were conducted on the real-world dataset CCPD. Results across 11 types of attacks indicate that, compared with 4 other state-of-the-art detection methods, FIAD can detect all of these attacks at a lower false positive rate, with accuracy consistently reaching 99%. FIAD therefore exhibits good generality against various types of adversarial attacks.
ISSN: 2096-109X
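The abstract describes FIAD as combining neural network invariants with local intrinsic dimensionality (LID) invariants estimated from clean samples. As a rough illustration of the LID side of such a detector, the sketch below implements the standard maximum-likelihood LID estimator over k-nearest-neighbor distances together with a simple percentile-based decision rule; the function names, the choice of k, the Euclidean distance metric, and the thresholding scheme are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a maximum-likelihood estimator of local intrinsic
# dimensionality (LID) over a layer's activations, the kind of statistic the
# abstract says FIAD uses alongside neural network invariants. All names and
# parameter choices here are assumptions, not details from the paper.
import numpy as np


def lid_mle(query: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """Estimate the LID of `query` against a batch of clean-sample activations.

    Uses the MLE form LID = -( (1/k) * sum_i log(r_i / r_k) )^{-1}, where
    r_1 <= ... <= r_k are distances from `query` to its k nearest neighbors
    in `reference`.
    """
    dists = np.linalg.norm(reference - query, axis=1)
    dists = np.sort(dists)[:k]
    r_k = max(dists[-1], 1e-12)
    dists = np.clip(dists, 1e-12, None)      # avoid log(0) on exact matches
    denom = min(np.mean(np.log(dists / r_k)), -1e-12)
    return -1.0 / denom


def looks_adversarial(query_act: np.ndarray,
                      clean_acts: np.ndarray,
                      lid_threshold: float,
                      k: int = 20) -> bool:
    """Flag a sample whose LID exceeds a threshold calibrated on clean data."""
    return lid_mle(query_act, clean_acts, k=k) > lid_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(500, 128))   # stand-in for clean-sample activations
    probe = rng.normal(size=128)          # stand-in for a test sample's activations

    # Calibrate the threshold as a high percentile of leave-one-out clean LIDs.
    clean_lids = np.array([
        lid_mle(c, np.delete(clean, i, axis=0))
        for i, c in enumerate(clean[:100])
    ])
    thresh = np.quantile(clean_lids, 0.95)
    print("probe LID:", lid_mle(probe, clean), "threshold:", thresh)
```

In this reading, adversarial inputs are expected to land in locally higher-dimensional regions of the representation space than clean inputs, so a calibrated upper-percentile threshold on clean-sample LID scores serves as a simple one-sided detector; a full system like the one described would additionally exploit invariants of the trained network itself.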