DialogueMLLM: Transforming Multimodal Emotion Recognition in Conversation Through Instruction-Tuned MLLM
Multimodal Emotion Recognition in Conversation (MERC) is an advanced research area that integrates cross-modal understanding and contextual reasoning through text-speech-visual fusion, with applications spanning diverse scenarios including student emotion monitoring in high school classroom interact...
| Main Authors: | Yuanyuan Sun, Ting Zhou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11088104/ |
Similar Items
- A Novel Adaptive Fine-Tuning Algorithm for Multimodal Models: Self-Optimizing Classification and Selection of High-Quality Datasets in Remote Sensing
  by: Yi Ren, et al.
  Published: (2025-05-01)
- Dynamic Tuning and Multi-Task Learning-Based Model for Multimodal Sentiment Analysis
  by: Yi Liang, et al.
  Published: (2025-06-01)
- Multi-HM: A Chinese Multimodal Dataset and Fusion Framework for Emotion Recognition in Human–Machine Dialogue Systems
  by: Yao Fu, et al.
  Published: (2025-04-01)
- LLaVA-docent: Instruction tuning with multimodal large language model to support art appreciation education
  by: Unggi Lee, et al.
  Published: (2024-12-01)
- Multimodal Pragmatic Markers of Feedback in Dialogue
  by: Ludivine Crible, et al.
  Published: (2025-05-01)