Federated Learning for Multimodal Sentiment Analysis: Advancing Global Models With an Enhanced LinkNet Architecture
Analyzing sentiment with single-modal approaches, such as text or image analysis alone, frequently encounters significant limitations. These drawbacks include inadequate feature representation, an inability to capture the full complexity of emotional expressions, and challenges in handling diverse...
| Main Authors: | P. Vasanthi, V. Madhu Viswanatham |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10758628/ |
Similar Items
- Adaptive multimodal transformer based on exchanging for multimodal sentiment analysis
  by: Gulanbaier Tuerhong, et al.
  Published: (2025-07-01)
- Emoji multimodal microblog sentiment analysis based on mutual attention mechanism
  by: Yinxia Lou, et al.
  Published: (2024-11-01)
- CMDAF: Cross-Modality Dual-Attention Fusion Network for Multimodal Sentiment Analysis
  by: Wang Guo, et al.
  Published: (2024-12-01)
- Review on Key Techniques of Video Multimodal Sentiment Analysis
  by: DUAN Zongtao, HUANG Junchen, ZHU Xiaole
  Published: (2025-03-01)
- Multimodal Sentiment Analysis Based on Expert Mixing of Subtask Representations
  by: Ling Lei, et al.
  Published: (2025-01-01)