TMFN: a text-based multimodal fusion network with multi-scale feature extraction and unsupervised contrastive learning for multimodal sentiment analysis
Abstract: Multimodal sentiment analysis (MSA) is crucial in human-computer interaction. Current methods use simple sub-models for feature extraction, neglecting multi-scale features and the complexity of emotions. Text, visual, and audio each have unique characteristics in MSA, with text often provid...
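The record carries no implementation details, but as a rough illustration of the multi-scale feature extraction named in the title, a minimal PyTorch sketch might look like the following. The module name, dimensions, and kernel sizes are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class MultiScaleTextEncoder(nn.Module):
    """Hypothetical sketch: parallel 1D convolutions with different kernel
    sizes extract features at several temporal scales, then concatenate.
    Not the TMFN authors' implementation; all sizes are assumptions."""

    def __init__(self, in_dim=768, out_dim=128, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # One branch per scale; odd kernels with k // 2 padding preserve seq_len.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_dim, out_dim, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # x: (batch, seq_len, in_dim), e.g. BERT token embeddings
        x = x.transpose(1, 2)  # Conv1d expects (batch, channels, seq_len)
        feats = [branch(x) for branch in self.branches]
        # -> (batch, seq_len, len(kernel_sizes) * out_dim)
        return torch.cat(feats, dim=1).transpose(1, 2)

encoder = MultiScaleTextEncoder()
tokens = torch.randn(2, 50, 768)
print(encoder(tokens).shape)  # torch.Size([2, 50, 384])
```

Concatenating the branches lets downstream fusion layers see short- and long-range textual cues at once, which is the usual motivation for multi-scale extraction.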
Main Authors: Junsong Fu, Youjia Fu, Huixia Xue, Zihao Xu
Format: Article
Language: English
Published: Springer, 2025-01-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-024-01724-5
Similar Items
- Design of an Integrated Model for Video Summarization Using Multimodal Fusion and YOLO for Crime Scene Analysis
  by: Sai Babu Veesam, et al. Published: (2025-01-01)
- Instance-level semantic segmentation of nuclei based on multimodal structure encoding
  by: Bo Guan, et al. Published: (2025-02-01)
- Copula-Driven Learning Techniques for Physical Layer Authentication Using Multimodal Data
  by: Sahana Srikanth, et al. Published: (2025-01-01)
- An Enhanced Multimodal Biometric System Based on Convolutional Neural Network
  by: Lawrence Omotosho, et al. Published: (2021-10-01)
- MDCKE: Multimodal deep-context knowledge extractor that integrates contextual information
  by: Hyojin Ko, et al. Published: (2025-04-01)