Meta-Learning With Relation Embedding for Few-Shot Deepfake Detection


Bibliographic Details
Main Authors: Xiaoyong Liu, Pengcheng Song, Pei Lu, Yanjun Wang
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10754643/
Description
Summary: The generation of facial images via generative models has gained significant popularity, while the task of discriminating between authentic and synthetic faces has proven increasingly challenging. This challenge is exacerbated when novel generative models emerge, as it is difficult to obtain a substantial number of images from these new models, and the limited number of samples can undermine training accuracy. To tackle these issues, we introduce a few-shot deepfake detection approach based on meta-learning with relation embedding. Initially, we employ an embedding function to generate feature representations of the images. Subsequently, we convert the basic representations of feature maps into their corresponding self-correlation tensors, enabling us to learn the structural patterns inherent in these tensors. Finally, we utilize a learnable metric to classify the self-correlation tensors. Our model is trained using an initialization-parameter meta-learning strategy, extracting generalizable knowledge through training on multiple interrelated tasks, thereby enhancing model performance. The effectiveness of our approach has been validated through experiments on the miniImageNet, Stanford-Dogs, and CUB-200-2011 datasets. Additionally, we conducted tests on a self-constructed deepfake face dataset, and the results indicate that the proposed method performs strongly compared with other methods.
ISSN:2169-3536
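
The abstract's middle step, converting a feature map into a self-correlation tensor, can be sketched concretely. The following minimal Python illustration is an assumption about what such a tensor looks like, not the authors' implementation: for a C x H x W feature map, each spatial position is compared (here via cosine similarity, a common but assumed choice) against its neighbours in a local window, yielding an H x W x (2w+1) x (2w+1) tensor of structural patterns. The function name `self_correlation` and the window-based layout are hypothetical.

```python
import math


def self_correlation(feat, window=1):
    """Hypothetical sketch of a self-correlation tensor.

    feat   : nested lists of shape C x H x W (a feature map).
    window : half-width w of the local neighbourhood.
    Returns R of shape H x W x (2w+1) x (2w+1), where
    R[h][w][du + window][dv + window] is the cosine similarity between
    the feature vector at (h, w) and its neighbour at (h+du, w+dv).
    Out-of-bounds neighbours are left at 0.0.
    """
    C = len(feat)
    H = len(feat[0])
    W = len(feat[0][0])
    k = 2 * window + 1

    def vec(h, w):
        # Channel vector at spatial position (h, w).
        return [feat[c][h][w] for c in range(C)]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na > 0 and nb > 0 else 0.0

    R = [[[[0.0] * k for _ in range(k)] for _ in range(W)] for _ in range(H)]
    for h in range(H):
        for w in range(W):
            v0 = vec(h, w)
            for du in range(-window, window + 1):
                for dv in range(-window, window + 1):
                    hh, ww = h + du, w + dv
                    if 0 <= hh < H and 0 <= ww < W:
                        R[h][w][du + window][dv + window] = cosine(v0, vec(hh, ww))
    return R


# Tiny 2-channel 2x2 feature map as a usage example.
feat = [[[1, 0], [0, 1]],
        [[0, 1], [1, 0]]]
R = self_correlation(feat, window=1)
# The window centre compares each position with itself, giving 1.0.
```

In the paper's pipeline, a learnable metric would then classify these tensors; in a few-shot setting the model parameters are meta-learned across related tasks so that this representation generalizes to images from an unseen generative model.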