Research on Emotion Classification Based on Multi-modal Fusion
Nowadays, people's expression on the Internet is no longer limited to text; with the rise of the short-video boom, large amounts of multi-modal data such as text, pictures, audio, and video have emerged. Compared with single-modal data, multi-modal data always contains ma...
| Main Authors: | Zhihua Xiang, Nor Haizan Mohamed Radzi, Haslina Hashim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | University of Baghdad, College of Science for Women, 2024-02-01 |
| Series: | Baghdad Science Journal (مجلة بغداد للعلوم) |
| Subjects: | |
| Online Access: | https://bsj.uobaghdad.edu.iq/index.php/BSJ/article/view/9454 |
Similar Items
- MemoCMT: multimodal emotion recognition using cross-modal transformer-based feature fusion
  by: Mustaqeem Khan, et al.
  Published: (2025-02-01)
- Hybrid Multi-Attention Network for Audio–Visual Emotion Recognition Through Multimodal Feature Fusion
  by: Sathishkumar Moorthy, et al.
  Published: (2025-03-01)
- Multi-modal feature fusion with multi-head self-attention for epileptic EEG signals
  by: Ning Huang, et al.
  Published: (2024-08-01)
- Mifu-ER: Modality Quality Index-Based Incremental Fusion for Emotion Recognition
  by: Sun-Hee Kim
  Published: (2025-01-01)
- Complementarity-Oriented Feature Fusion for Face-Phone Trajectory Matching
  by: Changfeng Cao, et al.
  Published: (2025-01-01)