Video-Based Facial Emotion Recognition using YOLO and Vision Transformer

Bibliographic Details
Main Authors: Sareen Vidhi, Seeja K.R.
Format: Article
Language: English
Published: EDP Sciences 2025-01-01
Series: EPJ Web of Conferences
Online Access:https://www.epj-conferences.org/articles/epjconf/pdf/2025/13/epjconf_icetsf2025_01040.pdf
Description
Summary: This paper presents a video-based FER approach that combines the YOLOv8 model for face detection with a pre-trained Vision Transformer (ViT) for emotion classification. Our methodology involves extracting the middle frame of each video in the RAVDESS dataset, which is then passed to the YOLOv8 algorithm for face detection. The detected facial region is then processed by the ViT model to classify emotions into seven categories: Neutral, Happy, Sad, Angry, Fearful, Disgust, and Surprised. To enhance model robustness and generalization, data augmentation techniques such as horizontal flipping, brightness adjustment, and Gaussian noise injection were applied during preprocessing. The combination of precise face localization by YOLOv8 and the powerful feature extraction of the ViT contributes to the system’s performance. The proposed FER framework achieved an accuracy of 90.81%, surpassing several existing state-of-the-art FER systems. This research shows the strength of combining advanced face detection with a transformer-based architecture for accurate emotion recognition from facial expressions in videos.
ISSN: 2100-014X
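
Code sketch: a minimal Python outline of the pipeline described in the summary (middle-frame extraction, YOLOv8 face detection, ViT emotion classification, and the named augmentations), assuming the Ultralytics YOLOv8 API and a Hugging Face ViT image-classification model. The face-detection weights file, the ViT checkpoint name, and the example video path are illustrative placeholders, not the authors' exact configuration.

# Sketch of the described pipeline: middle frame -> YOLOv8 face detection
# -> ViT emotion classification. Weight files, checkpoint, and paths below
# are placeholders, not the authors' configuration.
import cv2
import numpy as np
from PIL import Image
from ultralytics import YOLO
from transformers import ViTImageProcessor, ViTForImageClassification

EMOTIONS = ["Neutral", "Happy", "Sad", "Angry", "Fearful", "Disgust", "Surprised"]

detector = YOLO("yolov8n-face.pt")                # assumed face-detection weights
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
vit = ViTForImageClassification.from_pretrained(  # assumes a ViT fine-tuned on 7 emotion classes
    "google/vit-base-patch16-224",
    num_labels=len(EMOTIONS),
    ignore_mismatched_sizes=True,
)

def middle_frame(video_path: str) -> np.ndarray:
    """Grab the middle frame of a video clip, as the summary describes for RAVDESS."""
    cap = cv2.VideoCapture(video_path)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, n_frames // 2)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read middle frame of {video_path}")
    return frame  # BGR image

def augment(img: np.ndarray) -> list[np.ndarray]:
    """Training-time augmentations named in the summary: flip, brightness, Gaussian noise."""
    flipped = cv2.flip(img, 1)
    brighter = cv2.convertScaleAbs(img, alpha=1.0, beta=30)
    noisy = np.clip(img.astype(np.float32) + np.random.normal(0, 10, img.shape),
                    0, 255).astype(np.uint8)
    return [img, flipped, brighter, noisy]

def predict_emotion(video_path: str) -> str:
    frame = middle_frame(video_path)
    boxes = detector(frame)[0].boxes.xyxy         # (N, 4) tensor of face bounding boxes
    if len(boxes) == 0:
        raise RuntimeError("No face detected")
    x1, y1, x2, y2 = map(int, boxes[0].tolist())  # take the first detected face
    face = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
    inputs = processor(images=Image.fromarray(face), return_tensors="pt")
    logits = vit(**inputs).logits
    return EMOTIONS[int(logits.argmax(-1))]

# Hypothetical usage on a RAVDESS-style clip path:
# print(predict_emotion("Actor_01/01-01-03-01-01-01-01.mp4"))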