Intelligent emotion recognition for drivers using model-level multimodal fusion
Unstable emotions are considered an important factor contributing to traffic accidents. The probability of accidents can be reduced if drivers' emotional anomalies are identified quickly and intervened upon. In this paper, we present a multimodal emotion recognition model, MHLT, which performs...
| Main Authors: | Xing Luan, Quan Wen, Bo Hang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-07-01 |
| Series: | Frontiers in Physics |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fphy.2025.1599428/full |
Similar Items
- Dual-Branch Multimodal Fusion Network for Driver Facial Emotion Recognition
  by: Le Wang, et al. Published: (2024-10-01)
- MSDSANet: Multimodal Emotion Recognition Based on Multi-Stream Network and Dual-Scale Attention Network Feature Representation
  by: Weitong Sun, et al. Published: (2025-03-01)
- Multi-HM: A Chinese Multimodal Dataset and Fusion Framework for Emotion Recognition in Human–Machine Dialogue Systems
  by: Yao Fu, et al. Published: (2025-04-01)
- MultiModal Emotional Recognition by Artificial Intelligence and its Application in Psychology
  by: Seyed Sadegh Hosseini, et al. Published: (2024-10-01)
- Shuffling Augmented Decoupled Features for Multimodal Emotion Recognition
  by: Sunyoung Cho Published: (2025-01-01)