Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks

Face anti-spoofing (FAS) is significant for the security of face recognition systems. Neural networks (NNs), including convolutional neural networks (CNNs) and vision transformers (ViTs), have been dominating the field of FAS. However, NN-based methods are vulnerable to adversarial attacks. Attacker...


Bibliographic Details
Main Authors: Rizhao Cai, Liepiao Zhang, Changsheng Chen, Yongjian Hu, Alex Kot
Format: Article
Language:English
Published: AIMS Press 2024-10-01
Series:Electronic Research Archive
Subjects:
Online Access:https://www.aimspress.com/article/doi/10.3934/era.2024259
_version_ 1832590730682433536
author Rizhao Cai
Liepiao Zhang
Changsheng Chen
Yongjian Hu
Alex Kot
author_facet Rizhao Cai
Liepiao Zhang
Changsheng Chen
Yongjian Hu
Alex Kot
author_sort Rizhao Cai
collection DOAJ
description Face anti-spoofing (FAS) is significant for the security of face recognition systems. Neural networks (NNs), including convolutional neural networks (CNNs) and vision transformers (ViTs), have been dominating the field of FAS. However, NN-based methods are vulnerable to adversarial attacks: attackers can insert adversarial noise into spoofing examples to circumvent an NN-based face-liveness detector. Our experiments show that CNN and ViT models can suffer an equal error rate (EER) increment of at least 8% when encountering adversarial examples. Thus, developing methods other than NNs is worth exploring to improve security at the system level. In this paper, we propose a novel solution for FAS against adversarial attacks, leveraging a deep forest model. Our approach introduces a multi-scale texture representation based on local binary patterns (LBP) as the model input, replacing the grained-scanning mechanism (GSM) used in the traditional deep forest model. Unlike GSM, which scans raw pixels and lacks discriminative power, our LBP-based scheme is specifically designed to capture texture features relevant to spoofing detection. Additionally, transforming the input from the RGB space to the LBP space enhances robustness against adversarial noise. Our method achieved competitive results: when tested with adversarial examples, the EER increment was less than 3%, more robust than the CNN and ViT, and on the benchmark database IDIAP REPLAY-ATTACK, a 0% EER was achieved. This work provides a competitive option in a fusion scheme for improving system-level security and offers important ideas to those who want to explore methods besides CNNs. To the best of our knowledge, this is the first attempt at exploiting the deep forest model for the problem of FAS with consideration of adversarial attacks.
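The description above centres on transforming the input into the LBP space before it reaches the deep forest. As a minimal, hypothetical sketch (not the authors' implementation, which uses a multi-scale representation), the basic 8-neighbour LBP operator and the histogram descriptor built from it can be written as:

```python
# Illustrative sketch of the local binary pattern (LBP) transform the
# abstract refers to. Each pixel is encoded by comparing its 8 neighbours
# to the centre value, yielding an 8-bit code; an image region is then
# described by a histogram of these codes rather than by raw RGB pixels.

def lbp_code(img, y, x):
    """8-bit LBP code for interior pixel (y, x) of a 2-D grayscale image
    given as a list of lists."""
    center = img[y][x]
    # Clockwise neighbour offsets, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels; this is
    the kind of texture feature vector a forest model would consume."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

Because the code depends only on sign comparisons between neighbouring pixels, small additive perturbations often leave it unchanged, which is consistent with the robustness argument made in the abstract.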
format Article
id doaj-art-28fcd4da44044fa5ae69ffab1cc93697
institution Kabale University
issn 2688-1594
language English
publishDate 2024-10-01
publisher AIMS Press
record_format Article
series Electronic Research Archive
spelling doaj-art-28fcd4da44044fa5ae69ffab1cc936972025-01-23T07:52:52ZengAIMS PressElectronic Research Archive2688-15942024-10-0132105592561410.3934/era.2024259Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacksRizhao Cai0Liepiao Zhang1Changsheng Chen2Yongjian Hu3Alex Kot4School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, SingaporeGRGTally-vision I.T. Co., Ltd., Guangzhou 510663, ChinaCollege of Electronics and Information Engineering, Shenzhen University, Shenzhen 518061, ChinaSchool of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, ChinaSchool of Electrical and Electronic Engineering, Nanyang Technological University, 639798, SingaporeFace anti-spoofing (FAS) is significant for the security of face recognition systems. Neural networks (NNs), including convolutional neural networks (CNNs) and vision transformers (ViTs), have been dominating the field of FAS. However, NN-based methods are vulnerable to adversarial attacks: attackers can insert adversarial noise into spoofing examples to circumvent an NN-based face-liveness detector. Our experiments show that CNN and ViT models can suffer an equal error rate (EER) increment of at least 8% when encountering adversarial examples. Thus, developing methods other than NNs is worth exploring to improve security at the system level. In this paper, we propose a novel solution for FAS against adversarial attacks, leveraging a deep forest model. Our approach introduces a multi-scale texture representation based on local binary patterns (LBP) as the model input, replacing the grained-scanning mechanism (GSM) used in the traditional deep forest model. Unlike GSM, which scans raw pixels and lacks discriminative power, our LBP-based scheme is specifically designed to capture texture features relevant to spoofing detection. 
Additionally, transforming the input from the RGB space to the LBP space enhances robustness against adversarial noise. Our method achieved competitive results: when tested with adversarial examples, the EER increment was less than 3%, more robust than the CNN and ViT, and on the benchmark database IDIAP REPLAY-ATTACK, a 0% EER was achieved. This work provides a competitive option in a fusion scheme for improving system-level security and offers important ideas to those who want to explore methods besides CNNs. To the best of our knowledge, this is the first attempt at exploiting the deep forest model for the problem of FAS with consideration of adversarial attacks.https://www.aimspress.com/article/doi/10.3934/era.2024259deep forestadversarial attacksface anti-spoofing
spellingShingle Rizhao Cai
Liepiao Zhang
Changsheng Chen
Yongjian Hu
Alex Kot
Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks
Electronic Research Archive
deep forest
adversarial attacks
face anti-spoofing
title Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks
title_full Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks
title_fullStr Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks
title_full_unstemmed Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks
title_short Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks
title_sort learning deep forest for face anti spoofing an alternative to the neural network against adversarial attacks
topic deep forest
adversarial attacks
face anti-spoofing
url https://www.aimspress.com/article/doi/10.3934/era.2024259
work_keys_str_mv AT rizhaocai learningdeepforestforfaceantispoofinganalternativetotheneuralnetworkagainstadversarialattacks
AT liepiaozhang learningdeepforestforfaceantispoofinganalternativetotheneuralnetworkagainstadversarialattacks
AT changshengchen learningdeepforestforfaceantispoofinganalternativetotheneuralnetworkagainstadversarialattacks
AT yongjianhu learningdeepforestforfaceantispoofinganalternativetotheneuralnetworkagainstadversarialattacks
AT alexkot learningdeepforestforfaceantispoofinganalternativetotheneuralnetworkagainstadversarialattacks