Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions
Despite advances in emotion recognition, the field still faces two main limitations: the reliance on deep models for increasingly complex computations and the identification of emotions across diverse data types. This study aims to advance knowledge of multimodal emotion recogn...
Main Authors: | Songul Erdem Guler; Fatma Patlar Akbulut |
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Human computer interaction; emotion recognition; deep learning; EEG signals; facial expressions |
Online Access: | https://ieeexplore.ieee.org/document/10870204/ |
_version_ | 1823859624700280832 |
author | Songul Erdem Guler; Fatma Patlar Akbulut |
author_facet | Songul Erdem Guler; Fatma Patlar Akbulut |
author_sort | Songul Erdem Guler |
collection | DOAJ |
description | Despite advances in emotion recognition, the field still faces two main limitations: the reliance on deep models for increasingly complex computations and the identification of emotions across diverse data types. This study aims to advance knowledge of multimodal emotion recognition by combining electroencephalography (EEG) signals with facial expressions, using advanced models such as the Transformer, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). The results validate the effectiveness of this approach: the GRU model achieved an average classification accuracy of 91.8% on unimodal (EEG-only) data and 97.8% on multimodal (EEG and facial expressions) data across the multi-class emotion categories. The findings emphasize that, within a multi-class classification framework, multimodal approaches offer significant improvements over traditional unimodal techniques. This work presents a framework that captures complex neural dynamics and visible emotional cues, enhancing the robustness and accuracy of emotion recognition systems. These results have important practical implications, showing how integrating diverse data sources with advanced models can overcome the limitations of single-modality systems. |
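The abstract describes recurrent models (GRU among them) applied to EEG and facial-expression streams and combined for multi-class emotion classification. The sketch below is a minimal illustration of that idea only, not the authors' implementation: the feature dimensions, sequence lengths, number of emotion classes, layer sizes, and the simple late-fusion strategy are all assumptions chosen for the example.

```python
# Minimal sketch (assumed architecture, not the paper's code): one GRU encodes
# windowed EEG features, another encodes per-frame facial-expression features,
# and the final hidden states are concatenated for multi-class emotion prediction.
import torch
import torch.nn as nn

class MultimodalGRUClassifier(nn.Module):
    def __init__(self, eeg_dim=32, face_dim=128, hidden=64, num_classes=4):
        super().__init__()
        self.eeg_gru = nn.GRU(eeg_dim, hidden, batch_first=True)    # EEG branch
        self.face_gru = nn.GRU(face_dim, hidden, batch_first=True)  # facial-expression branch
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, eeg_seq, face_seq):
        # eeg_seq:  (batch, time, eeg_dim)  -- e.g. per-window EEG channel features
        # face_seq: (batch, time, face_dim) -- e.g. per-frame facial-expression embeddings
        _, eeg_h = self.eeg_gru(eeg_seq)     # final hidden state of each modality
        _, face_h = self.face_gru(face_seq)
        fused = torch.cat([eeg_h[-1], face_h[-1]], dim=-1)  # simple late fusion
        return self.head(fused)              # logits over emotion classes

# Example forward pass with dummy tensors
model = MultimodalGRUClassifier()
logits = model(torch.randn(8, 100, 32), torch.randn(8, 100, 128))
print(logits.shape)  # torch.Size([8, 4])
```

Concatenating the two final hidden states is only one plausible fusion choice; the paper's reported gains from the multimodal setup could equally be obtained with other fusion schemes (e.g. attention-based or feature-level fusion), which the record itself does not specify.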
format | Article |
id | doaj-art-8faef79cdaeb4fa5a1a8191f9eac1713 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-8faef79cdaeb4fa5a1a8191f9eac1713; 2025-02-11T00:01:33Z; eng; IEEE; IEEE Access; 2169-3536; 2025-01-01; vol. 13, pp. 24587-24603; doi:10.1109/ACCESS.2025.3538642; article 10870204; Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions; Songul Erdem Guler (https://orcid.org/0009-0004-3458-4355); Fatma Patlar Akbulut (https://orcid.org/0000-0002-9689-7486); Department of Computer Engineering, Istanbul Kültür University, İstanbul, Türkiye (both authors); abstract as in description above; https://ieeexplore.ieee.org/document/10870204/; Human computer interaction, emotion recognition, deep learning, EEG signals, facial expressions |
spellingShingle | Songul Erdem Guler; Fatma Patlar Akbulut; Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions; IEEE Access; Human computer interaction; emotion recognition; deep learning; EEG signals; facial expressions |
title | Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions |
title_full | Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions |
title_fullStr | Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions |
title_full_unstemmed | Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions |
title_short | Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions |
title_sort | multimodal emotion recognition emotion classification through the integration of eeg and facial expressions |
topic | Human computer interaction; emotion recognition; deep learning; EEG signals; facial expressions |
url | https://ieeexplore.ieee.org/document/10870204/ |
work_keys_str_mv | AT songulerdemguler multimodalemotionrecognitionemotionclassificationthroughtheintegrationofeegandfacialexpressions AT fatmapatlarakbulut multimodalemotionrecognitionemotionclassificationthroughtheintegrationofeegandfacialexpressions |