Multi-Modal Fused-Attention Network for Depression Level Recognition Based on Enhanced Audiovisual Cues
In recent years, substantial research has focused on automated systems for assessing depression levels using different types of data, such as audio and visual inputs. However, signals recorded from individuals with depression can be influenced by external factors, such as the recording equipment and...
| Main Authors: | Yihan Zhou, Xiaokang Yu, Zixi Huang, Feierdun Palati, Zeyu Zhao, Zihan He, Yuan Feng, Yuxi Luo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10904116/ |
Similar Items
- The modulation of selective attention and divided attention on cross-modal congruence
  by: Honghui Xu, et al.
  Published: (2025-04-01)
- A Cross-Modal Emergency Recognition Method Integrating Attentional Collaboration and Contrastive Learning
  by: HUANG Shaonian, et al.
  Published: (2025-01-01)
- DCLMA: Deep correlation learning with multi-modal attention for visual-audio retrieval
  by: Jiwei Zhang, et al.
  Published: (2025-09-01)
- Multi-Query Cross-Modal Attention Fusion for Cognitive Impairment Recognition
  by: Minghui Zhao, et al.
  Published: (2025-01-01)
- Hybrid Multi-Attention Network for Audio–Visual Emotion Recognition Through Multimodal Feature Fusion
  by: Sathishkumar Moorthy, et al.
  Published: (2025-03-01)