Unmasking the Fake: Machine Learning Approach for Deepfake Voice Detection

Bibliographic Details
Main Authors: Muhammad Usama Tanveer Gujjar, Kashif Munir, Madiha Amjad, Atiq Ur Rehman, Amine Bermak
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access:https://ieeexplore.ieee.org/document/10811921/
Description
Summary: Deepfake voice refers to artificially generated or manipulated audio that mimics a person’s voice, often created using advanced AI techniques. These synthetic voices can convincingly imitate someone, making them nearly indistinguishable from genuine recordings. We present an advanced method for deepfake voice detection, leveraging a custom model named MFCC-GNB XtractNet. Mel-Frequency Cepstral Coefficients (MFCC) are extracted from audio samples and serve as the foundational features for identifying genuine and fake voices. These MFCC features are then enhanced through a transformation process that employs a Gaussian Naive Bayes (GNB) model in conjunction with Non-Negative Factorization, creating a more discriminative feature set for subsequent analysis. The enhanced features are fed to our developed model, MFCC-GNB XtractNet, to identify deepfake voices. To rigorously evaluate the effectiveness of our approach, we deployed a range of machine learning models, including Random Forest (RF), K-Nearest Neighbors Classifier (KNC), Logistic Regression (LR), and Gaussian Naive Bayes (GNB). Each model’s performance is assessed through k-fold cross-validation, ensuring a robust evaluation across multiple data splits. Additionally, we performed a computational cost analysis to measure the efficiency of the models in terms of training time and resource usage. The results of our experiments were highly promising, with our MFCC-GNB XtractNet + GNB model achieving an impressive accuracy score of 99.93%. This exceptional performance underscores the model’s ability to effectively distinguish between real and deepfake voices, setting a new benchmark in the field of voice authentication.
ISSN: 2169-3536
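
The abstract outlines a pipeline of MFCC extraction, a GNB-plus-non-negative-factorization feature transform, and k-fold evaluation of baseline classifiers. The snippet below is a minimal, illustrative sketch of that pipeline using librosa and scikit-learn; the MFCC-GNB XtractNet model itself is not described here, so the 13-coefficient MFCC setting, the NMF component count, the feature-stacking step, and the placeholder real/ and fake/ data directories are assumptions rather than the authors' implementation.

```python
# Minimal sketch only: standard librosa / scikit-learn stand-ins for the
# pipeline described in the abstract, not the authors' MFCC-GNB XtractNet.
import glob

import numpy as np
import librosa
from sklearn.decomposition import NMF
from sklearn.preprocessing import MinMaxScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score


def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Return a time-averaged MFCC vector for one audio clip."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)


def enhance_features(X, y, n_components=8, random_state=42):
    """Assumed reading of the 'GNB + non-negative factorization' step:
    NMF components of the rescaled MFCCs are stacked with GNB class
    probabilities to form a more discriminative feature set."""
    # NMF requires non-negative input, so rescale the signed MFCCs to [0, 1].
    X_pos = MinMaxScaler().fit_transform(X)
    parts = NMF(n_components=n_components, init="nndsvda",
                max_iter=500, random_state=random_state).fit_transform(X_pos)
    probs = GaussianNB().fit(X, y).predict_proba(X)
    return np.hstack([parts, probs])


# Placeholder data layout: real/ and fake/ directories of labelled .wav clips.
real_paths = sorted(glob.glob("real/*.wav"))
fake_paths = sorted(glob.glob("fake/*.wav"))
paths = real_paths + fake_paths
labels = np.array([1] * len(real_paths) + [0] * len(fake_paths))

X = np.vstack([extract_mfcc(p) for p in paths])
# For a leak-free evaluation this transform should be refit inside each
# cross-validation fold; it is applied once here only to keep the sketch short.
X_enh = enhance_features(X, labels)

# Baseline classifiers from the abstract, scored with k-fold cross-validation.
models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
    "KNC": KNeighborsClassifier(n_neighbors=5),
    "LR": LogisticRegression(max_iter=1000),
    "GNB": GaussianNB(),
}
cv = KFold(n_splits=5, shuffle=True, random_state=42)
for name, clf in models.items():
    scores = cross_val_score(clf, X_enh, labels, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```

The per-model accuracies printed at the end correspond to the k-fold comparison the abstract describes; the reported 99.93% figure belongs to the authors' full MFCC-GNB XtractNet + GNB model and should not be expected from this simplified stand-in.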