Knowledge enhancement for speech emotion recognition via multi-level acoustic feature

Bibliographic Details
Main Authors: Huan Zhao, Nianxin Huang, Haijiao Chen
Format: Article
Language: English
Published: Taylor & Francis Group, 2024-12-01
Series: Connection Science
Online Access: https://www.tandfonline.com/doi/10.1080/09540091.2024.2312103
Description
Summary: Speech emotion recognition (SER) has become an increasingly attractive machine learning task for domain applications. It aims to improve the discriminative capacity of speech emotion models utilising a single type of feature (e.g. MFCC, spectrograms, Wav2vec2) or a combination of multiple feature types. However, the potential of acoustic-related deep features is frequently overlooked by existing approaches that rely solely on one feature type or employ only a basic combination of several types. To address this challenge, a multi-level acoustic feature cross-fusion approach is proposed that compensates for information missing from individual features. The cross-fusion mechanism enhances SER performance by integrating knowledge from different feature types. Moreover, multi-task learning is utilised to share useful information via an auxiliary gender-recognition task, which also yields multiple common representations in a fine-grained space. Experimental results show that the fusion approach captures the inner connections between multi-level acoustic features to refine this knowledge, and state-of-the-art (SOTA) results were obtained under the same experimental conditions.
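
The abstract describes cross-fusion of multi-level acoustic features combined with multi-task learning over an auxiliary gender-recognition task. Below is a minimal, hypothetical PyTorch sketch of that general idea; the feature dimensions, the cross-attention-based fusion, the pooling, and the loss weighting are illustrative assumptions and do not reproduce the paper's exact architecture.

    # Hypothetical sketch: cross-fusion of MFCC, spectrogram and Wav2vec2 features
    # with an auxiliary gender-recognition head (multi-task learning).
    import torch
    import torch.nn as nn

    class CrossFusionSER(nn.Module):
        def __init__(self, d_mfcc=40, d_spec=128, d_w2v=768, d_model=256,
                     n_emotions=4, n_genders=2):
            super().__init__()
            # Project each feature type into a shared space.
            self.proj_mfcc = nn.Linear(d_mfcc, d_model)
            self.proj_spec = nn.Linear(d_spec, d_model)
            self.proj_w2v = nn.Linear(d_w2v, d_model)
            # Cross-attention lets each stream attend to the others, so
            # information missing from one feature can be drawn from the rest.
            self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                                    batch_first=True)
            # Task-specific heads share the fused representation.
            self.emotion_head = nn.Linear(d_model, n_emotions)
            self.gender_head = nn.Linear(d_model, n_genders)

        def forward(self, mfcc, spec, w2v):
            # Each input: (batch, time, feature_dim); time axes assumed aligned.
            streams = [self.proj_mfcc(mfcc), self.proj_spec(spec),
                       self.proj_w2v(w2v)]
            fused = []
            for i, q in enumerate(streams):
                # Query one stream against the concatenation of the other two.
                kv = torch.cat([s for j, s in enumerate(streams) if j != i], dim=1)
                out, _ = self.cross_attn(q, kv, kv)
                fused.append(out.mean(dim=1))            # temporal pooling
            shared = torch.stack(fused, dim=0).mean(dim=0)  # merge the three views
            return self.emotion_head(shared), self.gender_head(shared)

    if __name__ == "__main__":
        model = CrossFusionSER()
        mfcc = torch.randn(2, 100, 40)
        spec = torch.randn(2, 100, 128)
        w2v = torch.randn(2, 100, 768)
        emo_logits, gen_logits = model(mfcc, spec, w2v)
        # Multi-task loss: emotion is the main task, gender is auxiliary
        # (the 0.3 weight is an arbitrary illustrative choice).
        loss = (nn.functional.cross_entropy(emo_logits, torch.tensor([0, 1]))
                + 0.3 * nn.functional.cross_entropy(gen_logits, torch.tensor([1, 0])))
        print(emo_logits.shape, gen_logits.shape, loss.item())

In this sketch each feature stream queries the other two via cross-attention, and the pooled, shared representation feeds both the emotion and the gender heads, mirroring the cross-fusion and multi-task setup described in the abstract.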
ISSN: 0954-0091, 1360-0494