Adaptive GCN and Bi-GRU-Based Dual Branch for Motor Imagery EEG Decoding

Bibliographic Details
Main Authors: Yelan Wu, Pugang Cao, Meng Xu, Yue Zhang, Xiaoqin Lian, Chongchong Yu
Format: Article
Language: English
Published: MDPI AG 2025-02-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/4/1147
Description
Summary: Decoding motor imagery electroencephalography (MI-EEG) signals presents significant challenges due to the difficulty of capturing both the complex functional connectivity between channels and the temporal dependencies of EEG signals across different periods. These challenges are exacerbated by the low spatial resolution and high signal redundancy inherent in EEG, which traditional linear models struggle to address. To overcome these issues, we propose a novel dual-branch framework that integrates an adaptive graph convolutional network (Adaptive GCN) and bidirectional gated recurrent units (Bi-GRUs) to enhance MI-EEG decoding performance by effectively modeling both channel correlations and temporal dependencies. A Chebyshev Type II filter decomposes the signal into multiple sub-bands, giving the model frequency-domain insight. The Adaptive GCN, designed specifically for the MI-EEG context, captures functional connectivity between channels more effectively than conventional GCN models, enabling accurate spatial–spectral feature extraction. In parallel, combining a Bi-GRU with Multi-Head Attention (MHA) captures the temporal dependencies across different time segments to extract deep time–spectral features. Finally, feature fusion generates the prediction results. Experimental results demonstrate that our method achieves an average classification accuracy of 80.38% on BCI-IV Dataset 2a and 87.49% on BCI-III Dataset 3a, outperforming other state-of-the-art decoding approaches. This approach lays the foundation for future exploration of personalized and adaptive brain–computer interface (BCI) systems.
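The Chebyshev Type II sub-band decomposition step described in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the band edges, filter order, and 30 dB stop-band attenuation are assumptions (the record does not specify them), and `subband_decompose` is a hypothetical helper name. The 250 Hz sampling rate matches the BCI-IV 2a recordings.

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt

# Illustrative sub-band edges (Hz) covering the mu/beta range;
# the paper's exact band layout is not given in this record.
SUB_BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]
FS = 250  # sampling rate (Hz) of BCI-IV Dataset 2a recordings

def subband_decompose(eeg, fs=FS, bands=SUB_BANDS, order=4, rs=30):
    """Split a (channels, samples) EEG array into band-passed copies.

    Each band uses a zero-phase Chebyshev Type II band-pass filter
    (order 4 and 30 dB stop-band attenuation are illustrative choices).
    Returns an array of shape (n_bands, channels, samples).
    """
    out = []
    for lo, hi in bands:
        sos = cheby2(order, rs, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out.append(sosfiltfilt(sos, eeg, axis=-1))
    return np.stack(out)

# Example: 22 channels (as in BCI-IV 2a), 4 s of synthetic data at 250 Hz
x = np.random.randn(22, 4 * FS)
banded = subband_decompose(x)
print(banded.shape)  # (7, 22, 1000)
```

Each of the resulting sub-band signals would then feed the two branches (Adaptive GCN for spatial–spectral features, Bi-GRU with MHA for time–spectral features) before fusion.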
ISSN: 1424-8220