Research on Emotion Classification Based on Multi-modal Fusion
Nowadays, people's expression on the Internet is no longer limited to text; with the rise of the short-video boom in particular, large amounts of multi-modal data such as text, pictures, audio, and video have emerged. Compared to single-modal data, multi-modal data always contains ma...
Saved in:
| Main Authors: | Zhihua Xiang, Nor Haizan Mohamed Radzi, Haslina Hashim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | University of Baghdad, College of Science for Women, 2024-02-01 |
| Series: | Baghdad Science Journal (مجلة بغداد للعلوم) |
| Subjects: | |
| Online Access: | https://bsj.uobaghdad.edu.iq/index.php/BSJ/article/view/9454 |
Similar Items
- A robust and accurate feature matching method for multi-modal geographic images spatial registration
  by: Kai Ren, et al.
  Published: (2025-05-01)
- Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features
  by: Samuel, Kakuba, et al.
  Published: (2023)
- An adaptive feature fusion strategy using dual-layer attention and multi-modal deep reinforcement learning for all-media similarity search
  by: Jin Yue, et al.
  Published: (2025-05-01)
- MSM: a scaling-based feature matching algorithm for images with large-scale differences
  by: Qifeng Ge, et al.
  Published: (2025-08-01)
- Cross-modal gated feature enhancement for multimodal emotion recognition in conversations
  by: Shiyun Zhao, et al.
  Published: (2025-08-01)