Human Performance in Deepfake Detection: A Systematic Review

Bibliographic Details
Main Authors: Klaire Somoray, Dan J. Miller, Mary Holmes
Format: Article
Language: English
Published: Wiley 2025-01-01
Series: Human Behavior and Emerging Technologies
Online Access: http://dx.doi.org/10.1155/hbe2/1833228
Description
Summary: Deepfakes are computer-generated synthetic media in which a person’s appearance or likeness is altered to resemble that of another. This systematic review provides an overview of existing research on people’s ability to detect deepfakes. Five databases (IEEE, ProQuest, PubMed, Web of Science, and Scopus) were searched up to December 2023. Studies were included if they (1) were original studies; (2) were reported in English; (3) examined people’s detection of deepfakes; (4) examined the influence of an intervention, strategy, or variable on deepfake detection; and (5) reported the data needed to evaluate detection accuracy. Forty independent studies from 30 unique records were included in the review. Results were narratively summarized, with key findings organized around the review’s research questions. Studies used different performance measures, making it difficult to compare results across the literature. Detection accuracy varies widely, with some studies showing humans outperforming AI models and others indicating the opposite. Detection performance is also influenced by person-level factors (e.g., cognitive ability, analytical thinking) and stimulus-level factors (e.g., deepfake quality, familiarity with the depicted person). Interventions to improve people’s deepfake detection yielded mixed results. Humans and AI-based detection models attend to different cues when detecting deepfakes, suggesting potential for human–AI collaboration. The findings highlight the complex interplay of factors influencing human deepfake detection and the need for further research to develop effective detection strategies.
ISSN: 2578-1863