Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features
The detection and classification of emotional states in speech involve the analysis of audio signals and text transcriptions. Complex relationships exist between the features extracted at different time intervals, and these must be analyzed to infer the emotions in speech. These relationships...
Main Authors: Samuel Kakuba; Alwin Poulose; Dong Seog Han, Senior Member, IEEE
Format: Article
Language: English
Published: IEEE, 2023
Online Access: http://hdl.handle.net/20.500.12493/921
Similar Items
- Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
  by: Samuel Kakuba, et al. Published: (2023)
- Attention-based interactive multi-level feature fusion for named entity recognition
  by: Yiwu Xu, et al. Published: (2025-01-01)
- Fusion of MHSA and Boruta for key feature selection in power system transient angle stability
  by: WANG Man, et al. Published: (2025-01-01)
- Multiscale Adaptively Spatial Feature Fusion Network for Spacecraft Component Recognition
  by: Wuxia Zhang, et al. Published: (2025-01-01)
- Classification of Speech Emotion State Based on Feature Map Fusion of TCN and Pretrained CNN Model From Korean Speech Emotion Data
  by: A-Hyeon Jo, et al. Published: (2025-01-01)