Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features
The detection and classification of emotional states in speech involves analyzing both audio signals and text transcriptions. The features extracted at different time intervals exhibit complex relationships that must be analyzed to infer the emotions in speech. These relationships...
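The abstract describes fusing concurrent acoustic and textual features at multiple levels. A minimal sketch of one such scheme, assuming attention pooling within each modality followed by cross-modal concatenation (all names, dimensions, and the pooling choice are illustrative assumptions, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame acoustic features and per-token text embeddings
# (shapes and values are illustrative, not from the paper).
audio_feats = rng.standard_normal((120, 64))  # 120 frames x 64 dims
text_feats = rng.standard_normal((20, 64))    # 20 tokens x 64 dims

def attention_pool(feats: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Pool a feature sequence into one vector via dot-product attention."""
    scores = feats @ query / np.sqrt(feats.shape[1])  # scaled similarity
    weights = np.exp(scores - scores.max())           # stable softmax
    weights /= weights.sum()
    return weights @ feats                            # weighted average

# Level 1: pool within each modality; level 2: concatenate across
# modalities, yielding a joint vector a classifier head would score.
query = rng.standard_normal(64)
audio_vec = attention_pool(audio_feats, query)
text_vec = attention_pool(text_feats, query)
fused = np.concatenate([audio_vec, text_vec])  # shape (128,)
```

In a trained model the pooling query and classifier would be learned parameters; here a fixed random query just demonstrates the fusion shape.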
| Main Authors: | Kakuba, Samuel; Poulose, Alwin; Han, Dong Seog (Senior Member, IEEE) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023 |
| Online Access: | http://hdl.handle.net/20.500.12493/921 |
Similar Items
- Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
  by: Samuel, Kakuba, et al.
  Published: (2023)
- A fine-grained human facial key feature extraction and fusion method for emotion recognition
  by: Shiwei Li, et al.
  Published: (2025-02-01)
- Deep Fusion of Skeleton Spatial–Temporal and Dynamic Information for Action Recognition
  by: Song Gao, et al.
  Published: (2024-11-01)
- Exploration of Complementary Features for Speech Emotion Recognition Based on Kernel Extreme Learning Machine
  by: Lili Guo, et al.
  Published: (2019-01-01)
- Hierarchical Multi-Task Learning Based on Interactive Multi-Head Attention Feature Fusion for Speech Depression Recognition
  by: Yujuan Xing, et al.
  Published: (2025-01-01)