Human Performance in Deepfake Detection: A Systematic Review
Deepfakes refer to a wide range of computer-generated synthetic media, in which a person’s appearance or likeness is altered to resemble that of another. This systematic review is aimed at providing an overview of the existing research into people’s ability to detect deepfakes. Five databases (IEEE,...
| Main Authors: | Klaire Somoray, Dan J. Miller, Mary Holmes |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2025-01-01 |
| Series: | Human Behavior and Emerging Technologies |
| Online Access: | http://dx.doi.org/10.1155/hbe2/1833228 |
| _version_ | 1850032744712634368 |
|---|---|
| author | Klaire Somoray; Dan J. Miller; Mary Holmes |
| author_facet | Klaire Somoray; Dan J. Miller; Mary Holmes |
| author_sort | Klaire Somoray |
| collection | DOAJ |
| description | Deepfakes refer to a wide range of computer-generated synthetic media, in which a person’s appearance or likeness is altered to resemble that of another. This systematic review is aimed at providing an overview of the existing research into people’s ability to detect deepfakes. Five databases (IEEE, ProQuest, PubMed, Web of Science, and Scopus) were searched up to December 2023. Studies were included if they (1) were original studies; (2) were reported in English; (3) examined people’s detection of deepfakes; (4) examined the influence of an intervention, strategy, or variable on deepfake detection; and (5) reported the data needed to evaluate detection accuracy. Forty independent studies from 30 unique records were included in the review. Results were narratively summarized, with key findings organized according to the review’s research questions. Studies used different performance measures, making it difficult to compare results across the literature. Detection accuracy varied widely, with some studies showing humans outperforming AI models and others indicating the opposite. Detection performance was also influenced by person-level factors (e.g., cognitive ability, analytical thinking) and stimulus-level factors (e.g., quality of the deepfake, familiarity with the subject). Interventions to improve people’s deepfake detection yielded mixed results. Humans and AI-based detection models focus on different aspects when detecting deepfakes, suggesting a potential for human–AI collaboration. The findings highlight the complex interplay of factors influencing human deepfake detection and the need for further research to develop effective detection strategies. |
| format | Article |
| id | doaj-art-4dac2c82aaf54b88bf3d50ba9520a877 |
| institution | DOAJ |
| issn | 2578-1863 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | Wiley |
| record_format | Article |
| series | Human Behavior and Emerging Technologies |
| spelling | Klaire Somoray, Dan J. Miller, Mary Holmes (James Cook University College of Healthcare Sciences). Wiley, Human Behavior and Emerging Technologies, ISSN 2578-1863, 2025-01-01, doi:10.1155/hbe2/1833228. http://dx.doi.org/10.1155/hbe2/1833228 |
| spellingShingle | Klaire Somoray; Dan J. Miller; Mary Holmes; Human Performance in Deepfake Detection: A Systematic Review; Human Behavior and Emerging Technologies |
| title | Human Performance in Deepfake Detection: A Systematic Review |
| title_full | Human Performance in Deepfake Detection: A Systematic Review |
| title_fullStr | Human Performance in Deepfake Detection: A Systematic Review |
| title_full_unstemmed | Human Performance in Deepfake Detection: A Systematic Review |
| title_short | Human Performance in Deepfake Detection: A Systematic Review |
| title_sort | human performance in deepfake detection a systematic review |
| url | http://dx.doi.org/10.1155/hbe2/1833228 |
| work_keys_str_mv | AT klairesomoray humanperformanceindeepfakedetectionasystematicreview AT danjmiller humanperformanceindeepfakedetectionasystematicreview AT maryholmes humanperformanceindeepfakedetectionasystematicreview |