Visual Deepfake Detection: Review of Techniques, Tools, Limitations, and Future Prospects

Bibliographic Details
Main Authors: Naveed Ur Rehman Ahmed, Afzal Badshah, Hanan Adeel, Ayesha Tajammul, Ali Daud, Tariq Alsahfi
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10816641/
Description
Summary: In recent years, rapid advances in deepfake technology (built on Artificial Intelligence (AI), machine learning, and deep learning) have produced increasingly capable tools and techniques for manipulating multimedia. Although the technology has primarily been used for beneficial purposes such as education and entertainment, it is also exploited for malicious or unethical ends, from spreading disinformation and damaging reputations to harassing and blackmailing victims. Deepfakes are high-quality, realistic multimedia content that has been digitally manipulated or synthetically generated. We conducted a systematic literature review of deepfake detection to offer an updated overview of existing research, beginning with the widely accessible deepfake generation tools, their classifications, and the detection process. We then highlight recent visual deepfake detection techniques organized by feature representation into four domains: spatial, temporal, frequency, and spatio-temporal, summarizing their key features and limitations, describing the existing datasets, and discussing the potential of deepfakes and future research directions. This study aims to serve as an updated repository of technological change in deepfakes that can help researchers develop robust deepfake detection models.
ISSN: 2169-3536
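
The abstract groups detection techniques by feature representation into spatial, temporal, frequency, and spatio-temporal domains. As a concrete illustration of what a frequency-domain feature representation can look like, the following Python sketch computes the azimuthally averaged log power spectrum of a face crop, a feature commonly used to expose spectral artifacts of generative models. This is not the authors' method; the function name, bin count, and dummy input are illustrative assumptions.

```python
# Minimal sketch of a frequency-domain feature for deepfake detection:
# the azimuthally averaged log power spectrum of a face crop.
# Names, parameters, and the random stand-in input are assumptions.
import numpy as np


def radial_power_spectrum(gray: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Return the azimuthally averaged log power spectrum of a 2-D image."""
    # Centered 2-D FFT and log power spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.log1p(np.abs(spectrum) ** 2)

    # Distance of every frequency bin from the spectrum center.
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - cy, xx - cx)

    # Average the power over concentric rings (azimuthal average).
    bins = np.linspace(0.0, radius.max(), n_bins + 1)
    profile = np.empty(n_bins)
    for i in range(n_bins):
        mask = (radius >= bins[i]) & (radius < bins[i + 1])
        profile[i] = power[mask].mean() if mask.any() else 0.0
    return profile


if __name__ == "__main__":
    # Stand-in for a real face crop; in practice this would come from a
    # face detector applied to a video frame.
    rng = np.random.default_rng(0)
    face_crop = rng.random((256, 256))

    features = radial_power_spectrum(face_crop)
    # A real detector would feed `features` to a trained classifier
    # (e.g. logistic regression or an MLP) rather than inspecting it directly.
    print(features.shape, features[:5])
```

In a full pipeline of the kind the survey covers, such frequency-domain profiles would be extracted per frame and combined with spatial or temporal cues before classification.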