Detailed Information

Cited 0 times in Web of Science; cited 1 time in Scopus

Online Hand Gesture Recognition Using Semantically Interpretable Attention Mechanism (Open Access)

Authors
Chae, Moon Ju; Han, Sang Hoon; Nam, Hyeok; Park, Jae Hyeon; Cha, Min Hee; Cho, Sung In
Issue Date
2025
Publisher
IEEE
Keywords
cross-attention; Hand gesture recognition; intraframe and interframe information; online recognition
Citation
IEEE Access, v.13, pp. 32329–32340
Pages
12
Indexed
SCIE
SCOPUS
Journal Title
IEEE Access
Volume
13
Start Page
32329
End Page
32340
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/57923
DOI
10.1109/ACCESS.2025.3540721
ISSN
2169-3536
Abstract
Hand gesture recognition (HGR) is a field of action recognition widely used in various domains such as robotics, virtual reality (VR), and augmented reality (AR). In this paper, we propose a semantically interpretable attention technique based on the compression and exchange of local and global information for real-time dynamic hand gesture recognition. In this research, we focus on data comprising hand landmark coordinates and on online recognition of multiple gestures within a single sequence. Specifically, our approach has two paths to learn intraframe and interframe information separately. The learned information is compressed from the local and global perspectives, and the compressed information is exchanged through cross-attention. With this approach, the importance of each hand landmark and frame, which can be interpreted semantically, can be extracted, and this information is used in the attention process on the intraframe and interframe information. Finally, the intraframe and interframe information to which attention has been applied is integrated, which effectively enables comprehensive feature extraction of both local and global information. Experimental results demonstrated that the proposed method enabled concise and rapid hand-gesture recognition. It provided 95% accuracy in real-time hand-gesture recognition on the SHREC’22 dataset and accurately estimated the conclusion of a given gesture. Additionally, with a speed of approximately 294 frames per second (FPS), our model is well-suited for real-time systems, offering users an immersive experience. This demonstrates its potential for effective application in real-world environments. © 2025 The Authors.
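The cross-attention exchange the abstract describes, where intraframe (per-landmark) and interframe (per-frame) streams each attend over the other's features, might be sketched as follows. All dimensions, the softmax attention form, and the final fusion step are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, d):
    # one stream queries the other: Q from `query`, K/V from `context`
    scores = query @ context.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ context

rng = np.random.default_rng(0)
T, J, d = 8, 21, 16                    # frames, hand landmarks, feature dim (assumed)
intra = rng.standard_normal((J, d))    # intraframe path: per-landmark features
inter = rng.standard_normal((T, d))    # interframe path: per-frame features

# exchange compressed information between the two paths
intra_out = cross_attention(intra, inter, d)   # landmarks attend over frames
inter_out = cross_attention(inter, intra, d)   # frames attend over landmarks

# integrate both attended streams into one feature vector (hypothetical fusion)
fused = np.concatenate([intra_out.mean(axis=0), inter_out.mean(axis=0)])
```

The attention weights in each direction can be read as per-landmark and per-frame importance scores, which is the source of the semantic interpretability the paper claims.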
Files in This Item
There are no files associated with this item.
Appears in
Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
