EEG-SKDNet: A Self-Knowledge Distillation Model With Scaled Weights for Emotion Recognition From EEG Signals

Bibliographic Details
Main Authors: Thuong Duong Thi Mai, Duc-Quang Vu, Huy Nguyen Phuong, Trung-Nghia Phung
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11106442/
Description
Summary: Electroencephalogram-based emotion recognition has garnered increasing attention due to its potential in human–computer interaction and affective computing. While recent deep learning methods have achieved remarkable performance on this task, most approaches emphasize accuracy at the expense of computational efficiency, making them impractical for real-time applications or deployment on resource-constrained devices. This paper addresses the critical challenge of achieving high-performance electroencephalography-based emotion recognition at low computational cost by introducing a lightweight yet robust learning strategy. We propose a novel self-knowledge distillation framework that requires no teacher model. Unlike conventional knowledge distillation approaches that rely on large pre-trained teacher networks, our method passes two different augmented views of the electroencephalography input through a single student model to generate diverse predictions, which are then used to transfer knowledge internally within the model. To enhance this self-distillation process, we introduce a scaled-weights mechanism that dynamically adjusts the contribution of each soft label based on the original input, allowing the model to focus on electroencephalography segments with more informative or higher-intensity signal regions. Experimental results show that our proposed framework consistently outperforms both the baseline and state-of-the-art deep models, achieving a superior trade-off among performance, model size, computational cost, and inference time. This makes our framework highly suitable for deployment in real-time and edge-based emotion recognition applications.
ISSN: 2169-3536
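
The summary above describes a teacher-free self-distillation scheme: two augmented views of each EEG segment pass through the same student network, each view's softened prediction serves as a soft label for the other, and an input-dependent scaled weight modulates the distillation term. The following is a minimal PyTorch sketch of that idea, not the paper's actual EEG-SKDNet implementation: the tiny network, the noise-based augmentation, the amplitude-based weighting, the temperature T, the mixing coefficient alpha, and the tensor shapes are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEEGNet(nn.Module):
    """Tiny 1-D CNN standing in for the lightweight student model (assumed)."""
    def __init__(self, n_channels=62, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)   # (batch, 32)
        return self.classifier(h)          # logits: (batch, n_classes)

def augment_view(x):
    """Cheap stochastic augmentation (additive Gaussian noise); a placeholder
    for whatever view-generating augmentations the paper uses."""
    return x + 0.05 * torch.randn_like(x)

def signal_intensity_weights(x):
    """Per-sample scaled weights derived from the ORIGINAL input.
    Here: mean absolute amplitude of each EEG segment, normalised over the
    batch, as a stand-in for the paper's scaled-weights mechanism."""
    intensity = x.abs().mean(dim=(1, 2))            # (batch,)
    return intensity / (intensity.mean() + 1e-8)    # ~1.0 on average

def self_distillation_loss(model, x, y, T=4.0, alpha=0.5):
    """Two augmented views of x pass through the SAME student; each view's
    softened prediction acts as a soft label for the other, weighted by the
    intensity of the original segment."""
    logits_a = model(augment_view(x))
    logits_b = model(augment_view(x))

    # Supervised cross-entropy on both views.
    ce = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)

    # Symmetric KL between the two views' temperature-softened distributions;
    # each side's target is detached so it behaves like a fixed soft label.
    p_a = F.log_softmax(logits_a / T, dim=1)
    p_b = F.log_softmax(logits_b / T, dim=1)
    kl_ab = F.kl_div(p_a, p_b.exp().detach(), reduction="none").sum(dim=1)
    kl_ba = F.kl_div(p_b, p_a.exp().detach(), reduction="none").sum(dim=1)

    # Scaled weights modulate the per-sample distillation term.
    w = signal_intensity_weights(x)
    kd = (w * (kl_ab + kl_ba)).mean() * (T * T)

    return (1 - alpha) * ce + alpha * kd

# Usage on random data, just to show the loss runs end to end.
model = SmallEEGNet()
x = torch.randn(8, 62, 200)        # 8 segments, 62 channels, 200 time samples
y = torch.randint(0, 3, (8,))
loss = self_distillation_loss(model, x, y)
loss.backward()
print(float(loss))

Because both views share one set of weights, no separate teacher network is stored or run, which is the source of the computational savings the abstract claims; the weighting function above merely illustrates how a per-segment scalar derived from the raw input could emphasise high-intensity segments during distillation.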