Online Hand Gesture Recognition Using Semantically Interpretable Attention Mechanism

Bibliographic Details
Main Authors: Moon Ju Chae, Sang Hoon Han, Hyeok Nam, Jae Hyeon Park, Min Hee Cha, Sung In Cho
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10879500/
Description
Summary: Hand gesture recognition (HGR) is a field of action recognition widely used in domains such as robotics, virtual reality (VR), and augmented reality (AR). In this paper, we propose a semantically interpretable attention technique based on the compression and exchange of local and global information for real-time dynamic hand gesture recognition. We focus on data comprising hand landmark coordinates and on online recognition of multiple gestures within a single sequence. Specifically, our approach uses two paths to learn intraframe and interframe information separately. The learned information is compressed from the local and global perspectives, and the compressed information is exchanged through cross-attention. This yields the importance of each hand landmark and frame, which can be interpreted semantically, and this information drives the attention applied to the intraframe and interframe information. Finally, the attended intraframe and interframe information is integrated, enabling comprehensive feature extraction of both local and global information. Experimental results demonstrated that the proposed method enabled concise and rapid hand-gesture recognition. It achieved 95% accuracy in real-time hand-gesture recognition on the SHREC'22 dataset and accurately estimated the end point of a given gesture. Additionally, at approximately 294 frames per second (FPS), the model is well suited to real-time systems, offering users an immersive experience. This demonstrates its potential for effective application in real-world environments.
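The abstract describes exchanging compressed intraframe (per-landmark) and interframe (per-frame) representations via cross-attention, with the resulting attention weights read as landmark and frame importance. A minimal sketch of that exchange follows; it is not the authors' implementation, and all dimensions, variable names, and the single-head scaled dot-product form are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    Returns the attended features and the attention-weight matrix,
    whose rows sum to 1 and can be read as importance scores.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)      # (Q, K) similarity
    weights = softmax(scores, axis=-1)          # row-normalized importance
    return weights @ values, weights

# Toy dimensions (hypothetical): T frames, L hand landmarks, feature dim d.
rng = np.random.default_rng(0)
T, L, d = 8, 21, 16
local_feats = rng.normal(size=(L, d))   # compressed intraframe (landmark-level) info
global_feats = rng.normal(size=(T, d))  # compressed interframe (frame-level) info

# Exchange: local queries attend over global keys/values, and vice versa.
local_out, frame_importance = cross_attention(local_feats, global_feats, global_feats)
global_out, landmark_importance = cross_attention(global_feats, local_feats, local_feats)
```

Here `frame_importance` (shape L x T) tells each landmark which frames matter, and `landmark_importance` (shape T x L) tells each frame which landmarks matter; in the paper's scheme such weights would then modulate the intraframe and interframe features before the two paths are integrated.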
ISSN: 2169-3536