Multimodal sentiment analysis based on multi-layer feature fusion and multi-task learning
Abstract: Multimodal sentiment analysis (MSA) aims to use a variety of sensors to obtain and process information in order to predict the intensity and polarity of human emotions. The main challenges facing current multimodal sentiment analysis include how the model extracts emotional information in a sing...
Main Authors: Yujian Cai, Xingguang Li, Yingyu Zhang, Jinsong Li, Fazheng Zhu, Lin Rao
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-85859-6
Similar Items
- TMFN: a text-based multimodal fusion network with multi-scale feature extraction and unsupervised contrastive learning for multimodal sentiment analysis
  by: Junsong Fu, et al. Published: (2025-01-01)
- Multi-task aquatic toxicity prediction model based on multi-level features fusion
  by: Xin Yang, et al. Published: (2025-02-01)
- MM-HiFuse: multi-modal multi-task hierarchical feature fusion for esophagus cancer staging and differentiation classification
  by: Xiangzuo Huo, et al. Published: (2025-01-01)
- Multi-Attention Fusion Modeling for Sentiment Analysis of Educational Big Data
  by: Guanlin Zhai, et al. Published: (2020-12-01)
- Multi-modal feature fusion with multi-head self-attention for epileptic EEG signals
  by: Ning Huang, et al. Published: (2024-08-01)