Low-resource MobileBERT for emotion recognition in imbalanced text datasets mitigating challenges with limited resources.

Bibliographic Details
Main Authors: Muhammad Hussain, Caikou Chen, Sami S Albouq, Khlood Shinan, Fatmah Alanazi, Muhammad Waseem Iqbal, M Usman Ashraf
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access:https://doi.org/10.1371/journal.pone.0312867
Description
Summary: Modern dialogue systems rely on emotion recognition in conversation (ERC) as a core element enabling empathetic, human-like interactions. However, the weak correlation between emotions and semantics poses significant challenges for emotion recognition in dialogue: semantically similar utterances can express different emotions depending on the context or the speaker. To tackle this challenge, our paper proposes a novel loss, Focal Weighted Loss (FWL), combined with adversarial training and the compact language model MobileBERT. The proposed loss handles imbalanced emotion classification without requiring large batch sizes or additional computational resources. Our approach is evaluated on four text emotion recognition benchmark datasets (MELD, EmoryNLP, DailyDialog, and IEMOCAP), where extensive experiments validate the effectiveness of FWL with adversarial training and demonstrate competitive performance. This enables more human-like interactions on digital platforms and shows the potential of our approach to deliver performance comparable to large language models under limited resource constraints.
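The abstract does not give the exact formulation of the Focal Weighted Loss, but it presumably combines per-class weighting with the focusing term of the standard focal loss (Lin et al., 2017). The sketch below is a minimal PyTorch illustration under that assumption; the function name focal_weighted_loss, the gamma default, and the inverse-frequency weighting are illustrative choices, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def focal_weighted_loss(logits, targets, class_weights, gamma=2.0):
    """Class-weighted focal loss (illustrative sketch, not the paper's exact FWL).

    logits:        (batch, num_classes) raw classifier outputs
    targets:       (batch,) integer emotion labels
    class_weights: (num_classes,) weights, e.g. inverse class frequencies
    gamma:         focusing parameter; gamma = 0 recovers a weighted cross-entropy
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log p_t and p_t for the true class of each example
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # per-example class weight counters label imbalance
    weights = class_weights[targets]
    # the (1 - p_t)^gamma factor down-weights easy, well-classified examples
    return (-weights * (1.0 - pt) ** gamma * log_pt).mean()
```

In the setting the abstract describes, such a loss would sit on top of MobileBERT's classification head in place of plain cross-entropy, with adversarial perturbations (for example, FGM-style perturbations of the embedding layer) applied during training; the abstract does not specify the exact adversarial scheme.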
ISSN: 1932-6203