TransNN-MHA: A Transformer-Based Model to Distinguish Real and Imaginary Motor Intent for Assistive Robotics
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10990212/ |
| Summary: | Accurately distinguishing between real and imagined motor intent is a fundamental challenge in assistive robotics, as it directly affects the ability of human-machine interfaces (HMIs) to interpret user intent. This distinction is particularly critical for individuals with disabilities who cannot perform actual physical movements and must instead rely on imagined motor intent. In such cases, differentiating between real and imaginary motor actions allows HMIs to respond appropriately, improving the precision and usability of assistive devices. Many modern approaches lack the precision required to differentiate these motor actions from electroencephalogram (EEG) signals, yet real-time applications demand high precision to guarantee smooth interaction and control, especially for users who depend on imagined movements. In this article, we utilize the EEG Motor Movement/Imagery Dataset, consisting of 4087 training samples and 818 test samples, to develop TransNN-MHA, a new Transformer-based neural network that incorporates Multi-Head Attention (MHA) mechanisms for classifying real and imaginary motor actions. The proposed model employs a minimalist architecture that omits decoders and positional encodings to optimize EEG classification (a minimal sketch of such an architecture follows this record). We believe this is the first study focused on classifying real and imaginary motor actions using EEG data. We compare TransNN-MHA with deep learning (CNN, GRU) and hybrid (CNN-Transformer, GRU-Transformer) models. TransNN-MHA achieves 92% accuracy, outperforming CNN-Transformer (86%) and GRU-Transformer (91%), as well as attention-based models such as Self-Attention and Spatial-Temporal Transformers. Our novel use of Transformers with MHA enhances the classification of EEG signals by capturing long-range dependencies, making the model suitable for real-time intent detection in assistive robotics. TransNN-MHA shows strong results across motor tasks, demonstrating its potential for real-world applications where precision in human-machine interaction is critical. Implementation details and code are available at http://github.com/madibabaiasl/EEGIntentPaper |
| ISSN: | 2169-3536 |
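
The abstract describes TransNN-MHA as an encoder-only Transformer with Multi-Head Attention that omits both the decoder and positional encodings. The PyTorch sketch below illustrates one way such an architecture could be wired up; it is not taken from the linked repository, and the channel count, model width, head count, layer depth, and pooling choice are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an encoder-only Transformer with
# Multi-Head Attention, no decoder, and no positional encoding, as the
# abstract describes for TransNN-MHA. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class TransNNMHASketch(nn.Module):
    def __init__(self, n_channels=64, d_model=128, n_heads=8,
                 n_layers=2, n_classes=2, dropout=0.1):
        super().__init__()
        # Project each EEG time step (one value per electrode) to d_model.
        self.input_proj = nn.Linear(n_channels, d_model)
        # Encoder-only stack; no positional encoding is added, per the abstract.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Pool over time, then classify real vs. imagined motor intent.
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, time, channels) windows of EEG.
        h = self.input_proj(x)   # (batch, time, d_model)
        h = self.encoder(h)      # multi-head self-attention over time steps
        h = h.mean(dim=1)        # temporal average pooling
        return self.head(h)      # (batch, n_classes) logits

# Example: a batch of 8 two-second windows from 64 electrodes at 160 Hz
# (the EEG Motor Movement/Imagery Dataset uses 64-channel, 160 Hz recordings).
model = TransNNMHASketch()
logits = model(torch.randn(8, 320, 64))
print(logits.shape)  # torch.Size([8, 2])
```

Note that with no positional encoding and mean pooling, this sketch treats time steps as an unordered set; the paper's actual model may recover temporal order through other design choices not detailed in the abstract.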