Shuffling Augmented Decoupled Features for Multimodal Emotion Recognition
Multimodal emotion recognition (MER) aims to identify human emotions using data from multiple modalities. Despite promising advances in previous MER methods, their performance remains limited due to the small size of available datasets, a result of the challenges in collecting multimodal data. While data augmentation can address this issue, generating augmented multimodal data without altering the underlying emotional meaning remains particularly challenging.
| Main Author: | Sunyoung Cho |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Feature augmentation; multimodal emotion recognition; multimodal learning |
| Online Access: | https://ieeexplore.ieee.org/document/11014057/ |
| _version_ | 1850175107949920256 |
|---|---|
| author | Sunyoung Cho |
| author_facet | Sunyoung Cho |
| author_sort | Sunyoung Cho |
| collection | DOAJ |
| description | Multimodal emotion recognition (MER) aims to identify human emotions using data from multiple modalities. Despite promising advances in previous MER methods, their performance remains limited due to the small size of available datasets, a result of the challenges in collecting multimodal data. While data augmentation can address this issue, generating augmented multimodal data without altering the underlying emotional meaning remains particularly challenging. To tackle this problem, we introduce a decoupled feature augmentation method that automatically learns multimodal feature variations in a decoupled feature space for MER. Specifically, we decompose multimodal features into modality-invariant and modality-specific components and then augment each component within the decoupled feature space across multiple modalities. Unlike existing unimodal augmentation approaches, our method preserves cross-modal semantic consistency by jointly augmenting the decoupled components. To enhance model generalization and stability, we propose a learning strategy that gradually incorporates more diverse information by using a combined set of original and augmented decoupled features. Comprehensive experiments on two MER benchmarks demonstrate that our method outperforms or is comparable to several baseline methods. |
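The augmentation described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's actual implementation: the additive invariant/specific decomposition, the function name `shuffle_augment`, and the feature dimensions are all hypothetical. The key idea it demonstrates is that one shared permutation recombines modality-specific components across samples for every modality at once, which is what preserves cross-modal consistency.

```python
import numpy as np


def shuffle_augment(invariant, specific, rng):
    """Jointly shuffle decoupled components across a batch (illustrative sketch).

    invariant, specific: dicts mapping modality name -> (batch, dim) arrays.
    A single permutation is shared by every modality, so each augmented
    sample mixes one sample's invariant parts with another single sample's
    modality-specific parts across all modalities.
    """
    batch = next(iter(invariant.values())).shape[0]
    perm = rng.permutation(batch)  # one permutation reused for all modalities
    augmented = {}
    for m in invariant:
        # Recombine each sample's modality-invariant component with another
        # sample's modality-specific component (additive decomposition assumed).
        augmented[m] = invariant[m] + specific[m][perm]
    return augmented


# Toy decoupled features for two modalities (hypothetical dimensions).
rng = np.random.default_rng(0)
inv = {"audio": rng.normal(size=(4, 8)), "text": rng.normal(size=(4, 8))}
spec = {"audio": rng.normal(size=(4, 8)), "text": rng.normal(size=(4, 8))}
augmented = shuffle_augment(inv, spec, rng)
```

In a training loop following the abstract's strategy, the original and augmented decoupled features would then be combined, with the augmented share introduced gradually.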
| format | Article |
| id | doaj-art-e2e9ea5cb3ac4a69b9bda6161ca429b9 |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| doi | 10.1109/ACCESS.2025.3572925 |
| article_number | 11014057 |
| volume | 13 |
| pages | 91290–91300 |
| author_orcid | https://orcid.org/0000-0002-6925-6077 |
| author_affiliation | Division of Software, Sookmyung Women’s University, Seoul, Republic of Korea |
| title | Shuffling Augmented Decoupled Features for Multimodal Emotion Recognition |
| title_sort | shuffling augmented decoupled features for multimodal emotion recognition |
| topic | Feature augmentation; multimodal emotion recognition; multimodal learning |
| url | https://ieeexplore.ieee.org/document/11014057/ |
| work_keys_str_mv | AT sunyoungcho shufflingaugmenteddecoupledfeaturesformultimodalemotionrecognition |