A Portable and Affordable Four-Channel EEG System for Emotion Recognition with Self-Supervised Feature Learning

Bibliographic Details
Main Authors: Hao Luo, Haobo Li, Wei Tao, Yi Yang, Chio-In Ieong, Feng Wan
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/13/10/1608
Description
Summary: Emotions play a pivotal role in shaping human decision-making, behavior, and physiological well-being. Electroencephalography (EEG)-based emotion recognition offers promising avenues for real-time self-monitoring and affective computing applications. However, existing commercial solutions are often hindered by high costs, complicated deployment processes, and limited reliability in practical settings. To address these challenges, we propose a low-cost, self-adaptive wearable EEG system for emotion recognition through a hardware–algorithm co-design approach. The proposed system is a four-channel wireless EEG acquisition device supporting both dry and wet electrodes, with a component cost below USD 35. It offers over 7 h of continuous operation, plug-and-play functionality, and modular expandability. At the algorithmic level, we introduce a self-supervised feature extraction framework that combines contrastive learning and masked prediction tasks, enabling robust emotional feature learning from a limited number of EEG channels with constrained signal quality. Our approach achieves a peak performance of 60.2% accuracy and a 59.4% Macro-F1 score on the proposed platform. Compared to conventional feature-based approaches, it demonstrates an accuracy improvement of up to 20.4% when using a multilayer perceptron classifier in our experiments.
ISSN:2227-7390