Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model
Abstract Gesture recognition plays a vital role in computer vision, especially for interpreting sign language and enabling human–computer interaction. Many existing methods struggle with challenges like heavy computational demands, difficulty in understanding long-range relationships, sensitivity to background noise, and poor performance in varied environments. While CNNs excel at capturing local details, they often miss the bigger picture. Vision Transformers, on the other hand, are better at modeling global context but usually require significantly more computational resources, limiting their use in real-time systems. To tackle these issues, we propose a Hybrid Transformer-CNN model that combines the strengths of both architectures. Our approach begins with CNN layers that extract detailed local features from both the overall hand and specific hand regions. These CNN features are then refined by a Vision Transformer module, which captures long-range dependencies and global contextual information within the gesture. This integration allows the model to effectively recognize subtle hand movements while maintaining computational efficiency. Tested on the ASL Alphabet dataset, our model achieves a high accuracy of 99.97%, runs at 110 frames per second, and requires only 5.0 GFLOPs—much less than traditional Vision Transformer models, which need over twice the computational power. Central to this success is our feature fusion strategy using element-wise multiplication, which helps the model focus on important gesture details while suppressing background noise. Additionally, we employ advanced data augmentation techniques and a training approach incorporating contrastive learning and domain adaptation to boost robustness. Overall, this work offers a practical and powerful solution for gesture recognition, striking an optimal balance between accuracy, speed, and efficiency—an important step toward real-world applications.
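The record's core idea — local CNN-style features fused with global transformer-style features by element-wise multiplication, so that activations strong in both branches are kept and background responses are suppressed — can be sketched as follows. This is an illustrative toy in NumPy, not the authors' implementation; the branch shapes, the single-head attention, and all variable names are assumptions.

```python
# Toy sketch of the Hybrid Transformer-CNN fusion idea (illustrative only,
# NOT the published model): a local "CNN" branch and a global "attention"
# branch are combined by element-wise multiplication.
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(x, kernel):
    """Toy CNN branch: one valid-mode 2-D cross-correlation (local detail)."""
    kh, kw = kernel.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def attention_branch(feat):
    """Toy transformer branch: single-head self-attention over flattened tokens."""
    tokens = feat.reshape(-1, 1)                 # (N, 1) token "embeddings"
    scores = tokens @ tokens.T                   # (N, N) pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return (attn @ tokens).reshape(feat.shape)   # globally mixed features

x = rng.standard_normal((8, 8))                  # stand-in for a hand image
kernel = rng.standard_normal((3, 3))

local_feat = conv_branch(x, kernel)              # (6, 6) local features
global_feat = attention_branch(local_feat)       # (6, 6) global context

# Element-wise multiplicative fusion: amplifies locations that both branches
# agree are informative, damping one-branch (e.g. background) responses.
fused = local_feat * global_feat
print(fused.shape)                               # prints (6, 6)
```

The multiplicative gate is what distinguishes this fusion from the more common concatenation or addition: a near-zero response in either branch zeroes the fused value at that location.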
| Main Authors: | Mohammed Aly, Islam S. Fathi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | Scientific Reports |
| Subjects: | Gesture recognition, Sign language recognition, Hybrid transformer-CNN, Deep learning, Real-time inference |
| Online Access: | https://doi.org/10.1038/s41598-025-06344-8 |
| _version_ | 1849433113838485504 |
|---|---|
| author | Mohammed Aly; Islam S. Fathi |
| author_facet | Mohammed Aly; Islam S. Fathi |
| author_sort | Mohammed Aly |
| collection | DOAJ |
| description | Abstract Gesture recognition plays a vital role in computer vision, especially for interpreting sign language and enabling human–computer interaction. Many existing methods struggle with challenges like heavy computational demands, difficulty in understanding long-range relationships, sensitivity to background noise, and poor performance in varied environments. While CNNs excel at capturing local details, they often miss the bigger picture. Vision Transformers, on the other hand, are better at modeling global context but usually require significantly more computational resources, limiting their use in real-time systems. To tackle these issues, we propose a Hybrid Transformer-CNN model that combines the strengths of both architectures. Our approach begins with CNN layers that extract detailed local features from both the overall hand and specific hand regions. These CNN features are then refined by a Vision Transformer module, which captures long-range dependencies and global contextual information within the gesture. This integration allows the model to effectively recognize subtle hand movements while maintaining computational efficiency. Tested on the ASL Alphabet dataset, our model achieves a high accuracy of 99.97%, runs at 110 frames per second, and requires only 5.0 GFLOPs—much less than traditional Vision Transformer models, which need over twice the computational power. Central to this success is our feature fusion strategy using element-wise multiplication, which helps the model focus on important gesture details while suppressing background noise. Additionally, we employ advanced data augmentation techniques and a training approach incorporating contrastive learning and domain adaptation to boost robustness. Overall, this work offers a practical and powerful solution for gesture recognition, striking an optimal balance between accuracy, speed, and efficiency—an important step toward real-world applications. |
| format | Article |
| id | doaj-art-caccf491ef1b4e53a3bb2789e2f0887a |
| institution | Kabale University |
| issn | 2045-2322 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| spelling | doaj-art-caccf491ef1b4e53a3bb2789e2f0887a; 2025-08-20T03:27:10Z; eng; Nature Portfolio; Scientific Reports; 2045-2322; 2025-06-01; 15; 1; 1; 27; 10.1038/s41598-025-06344-8; Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model; Mohammed Aly (Department of Artificial Intelligence, Faculty of Artificial Intelligence, Egyptian Russian University); Islam S. Fathi (Department of Computer Science, Faculty of Information Technology, Ajloun National University); https://doi.org/10.1038/s41598-025-06344-8; Gesture recognition; Sign language recognition; Hybrid transformer-CNN; Deep learning; Real-time inference |
| spellingShingle | Mohammed Aly; Islam S. Fathi; Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model; Scientific Reports; Gesture recognition; Sign language recognition; Hybrid transformer-CNN; Deep learning; Real-time inference |
| title | Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model |
| title_full | Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model |
| title_fullStr | Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model |
| title_full_unstemmed | Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model |
| title_short | Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model |
| title_sort | recognizing american sign language gestures efficiently and accurately using a hybrid transformer model |
| topic | Gesture recognition; Sign language recognition; Hybrid transformer-CNN; Deep learning; Real-time inference |
| url | https://doi.org/10.1038/s41598-025-06344-8 |
| work_keys_str_mv | AT mohammedaly recognizingamericansignlanguagegesturesefficientlyandaccuratelyusingahybridtransformermodel AT islamsfathi recognizingamericansignlanguagegesturesefficientlyandaccuratelyusingahybridtransformermodel |