Multi-HM: A Chinese Multimodal Dataset and Fusion Framework for Emotion Recognition in Human–Machine Dialogue Systems

Bibliographic Details
Main Authors: Yao Fu, Qiong Liu, Qing Song, Pengzhou Zhang, Gongdong Liao
Format: Article
Language: English
Published: MDPI AG, 2025-04-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/8/4509
Description
Summary: Sentiment analysis is pivotal in advancing human–computer interaction (HCI) systems as it enables emotionally intelligent responses. While existing models show potential for HCI applications, current conversational datasets exhibit critical limitations in real-world deployment, particularly in capturing domain-specific emotional dynamics and context-sensitive behavioral patterns—constraints that hinder semantic comprehension and adaptive capabilities in task-driven HCI scenarios. To address these gaps, we present Multi-HM, the first multimodal emotion recognition dataset explicitly designed for human–machine consultation systems. It contains 2000 professionally annotated dialogues across 10 major HCI domains. Our dataset employs a five-dimensional annotation framework that systematically integrates textual, vocal, and visual modalities while simulating authentic HCI workflows to encode pragmatic behavioral cues and mission-critical emotional trajectories. Experiments demonstrate that Multi-HM-trained models achieve state-of-the-art performance in recognizing task-oriented affective states. This resource establishes a crucial foundation for developing human-centric AI systems that dynamically adapt to users’ evolving emotional needs.
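Note: The record does not include implementation details, but the abstract describes a fusion framework that integrates textual, vocal, and visual modalities for emotion recognition. As a rough illustration of one common way such fusion is set up, here is a minimal late-fusion sketch in Python/PyTorch. All class names, feature dimensions, and the concatenation-based fusion strategy are illustrative assumptions, not the paper's actual architecture.

# Hypothetical sketch of a late-fusion emotion classifier over text, audio,
# and video features, in the spirit of the multimodal fusion the abstract
# describes. Dimensions, layer sizes, and the fusion strategy are assumptions.
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, video_dim=512,
                 hidden_dim=256, num_emotions=7):
        super().__init__()
        # One small projection head per modality.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        # Classify over the concatenated modality representations.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(3 * hidden_dim, num_emotions),
        )

    def forward(self, text_feat, audio_feat, video_feat):
        fused = torch.cat([
            self.text_proj(text_feat),
            self.audio_proj(audio_feat),
            self.video_proj(video_feat),
        ], dim=-1)
        return self.classifier(fused)  # unnormalized emotion logits

# Usage with random tensors standing in for real encoder outputs.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 7])

A concatenation-then-classify design like this is only the simplest baseline; the paper's framework may well use a different fusion mechanism (e.g., attention-based), which this sketch does not attempt to reproduce.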
ISSN: 2076-3417