Compression-Aware Hybrid Framework for Deep Fake Detection in Low-Quality Video


Bibliographic Details
Main Authors: Lagsoun Abdel Motalib, Oujaoura Mustapha, Hedabou Mustapha
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11095666/
Description
Summary: Deep fakes pose a growing threat to digital media integrity by generating highly realistic fake videos that are difficult to detect, especially under the high compression levels commonly used on social media platforms. These compression artifacts often degrade the performance of deep fake detectors, making reliable detection even more challenging. In this paper, we propose a handcrafted deep fake detection framework that integrates wavelet transforms and Conv3D-based spatiotemporal descriptors for feature extraction, followed by a lightweight ResNet-inspired classifier. Unlike end-to-end deep neural networks, our method emphasizes interpretability and computational efficiency while maintaining high detection accuracy under diverse real-world conditions. We evaluated four configurations based on input modality and attention mechanism: RGB with attention, RGB without attention, grayscale with attention, and grayscale without attention. Experiments were conducted on the FaceForensics++ dataset (C23 and C40 compression levels) and Celeb-DF v2 (C0 and C40), across intra- and inter-compression settings as well as cross-dataset scenarios. Results show that RGB inputs without attention achieve the highest accuracy on FaceForensics++, while grayscale inputs without attention perform best in cross-dataset evaluations on Celeb-DF v2, attaining strong AUC scores. Despite its handcrafted nature, our approach matches or surpasses existing state-of-the-art (SOTA) methods. Grad-CAM visualizations further reveal both strengths and failure cases (e.g., occlusion and misalignment), offering valuable insights for refinement. These findings underscore the potential of our framework for efficient and effective deep fake detection in low-resource and real-time environments.
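The pipeline the abstract describes, wavelet subbands feeding a spatiotemporal Conv3D descriptor, can be sketched in miniature. Everything below is an illustrative assumption rather than the authors' implementation: a single-level Haar transform stands in for the paper's wavelet stage, a random toy clip replaces real face crops, and a naive "valid" 3-D convolution stands in for the learned Conv3D descriptors.

```python
import numpy as np

def haar_dwt2(frame):
    """Single-level 2-D Haar wavelet transform of one grayscale frame.

    Returns the four subbands (LL, LH, HL, HH); the high-frequency
    detail bands are where blending and compression artifacts tend
    to concentrate, which is why wavelet features suit this task.
    """
    a = frame[0::2, 0::2]
    b = frame[0::2, 1::2]
    c = frame[1::2, 0::2]
    d = frame[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # coarse approximation
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3-D cross-correlation over a (time, height, width)
    volume -- a minimal stand-in for a Conv3D spatiotemporal descriptor."""
    t, h, w = kernel.shape
    T, H, W = volume.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Toy clip: 8 frames of 16x16 grayscale "video".
rng = np.random.default_rng(0)
clip = rng.standard_normal((8, 16, 16))

# Stack the diagonal-detail (HH) subband over time, then apply one
# 3x3x3 spatiotemporal filter to obtain a compact descriptor volume.
hh_stack = np.stack([haar_dwt2(f)[3] for f in clip])   # shape (8, 8, 8)
kernel = rng.standard_normal((3, 3, 3))
descriptor = conv3d_valid(hh_stack, kernel)            # shape (6, 6, 6)
print(hh_stack.shape, descriptor.shape)
```

In the actual framework a bank of such filters (with or without attention, over RGB or grayscale subbands) would feed the lightweight ResNet-inspired classifier; the sketch only shows why the feature volume has a temporal as well as spatial extent.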
ISSN:2169-3536