Real-Time Static Gesture Recognition for Upper Extremity Rehabilitation Using the Leap Motion

Author(s):  
Shawn N. Gieser ◽  
Angie Boisselle ◽  
Fillia Makedon


Sensors ◽
2020 ◽  
Vol 20 (16) ◽  
pp. 4566
Author(s):  
Chanhwi Lee ◽  
Jaehan Kim ◽  
Seoungbae Cho ◽  
Jinwoong Kim ◽  
Jisang Yoo ◽  
...  

Using human gestures to interact with devices such as computers or smartphones has presented several problems. This form of interaction relies on gesture-sensing technology such as the Leap Motion controller from Leap Motion, Inc., which enables humans to use hand gestures to interact with a computer. The device has excellent hand-detection performance and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph by simply folding and opening the palm. Research on gesture-based interaction with other devices is in progress, and studies on creating holographic displays of objects that actually exist are also underway. We propose a hand gesture recognition system that can control a tabletop holographic display of a real object. A depth image captured by the Azure Kinect, a recent Time-of-Flight depth camera, is processed by the deep-learning model CrossInfoNet to obtain information about the hand and hand joints. Using this information, we developed a real-time system that defines and recognizes gestures for basic left, right, up, and down rotation, zoom in, zoom out, and continuous rotation to the left and right.
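
As a rough illustration of how joint estimates from a model like CrossInfoNet could drive such gestures, the following minimal sketch classifies an open palm and a left/right swipe from 21 estimated 3D hand joints. The joint layout, index constants, thresholds, and millimetre units are assumptions for illustration, not the paper's implementation.

# Minimal sketch: rule-based gesture cues from estimated hand joints.
# Assumes a 21-joint hand layout (wrist + 4 joints per finger), as produced
# by CrossInfoNet-style pose estimators; indices and thresholds are illustrative.
import numpy as np

WRIST = 0
FINGERTIPS = [4, 8, 12, 16, 20]  # thumb, index, middle, ring, pinky tips

def is_palm_open(joints: np.ndarray, thresh_mm: float = 80.0) -> bool:
    """joints: (21, 3) array of 3D joint positions in millimetres."""
    dists = np.linalg.norm(joints[FINGERTIPS] - joints[WRIST], axis=1)
    return bool(np.mean(dists) > thresh_mm)

def swipe_direction(prev_joints, cur_joints, dt, speed_mm_s=300.0):
    """Return 'left', 'right', or None from palm-centre velocity between frames."""
    v = (cur_joints.mean(axis=0) - prev_joints.mean(axis=0)) / dt
    if abs(v[0]) < speed_mm_s:
        return None
    return 'right' if v[0] > 0 else 'left'

A real system would debounce these cues over several frames before issuing a rotation or zoom command to the display.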


2018 ◽  
Vol 15 (02) ◽  
pp. 1750022 ◽  
Author(s):  
Jing Li ◽  
Jianxin Wang ◽  
Zhaojie Ju

Gesture recognition plays an important role in human–computer interaction. However, most existing methods are complex and time-consuming, which limits their use in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, the hand digit gestures 0–9 can be recognized accurately and efficiently. The experimental results show that the proposed gesture recognition system is effective and robust: it is invariant to complex backgrounds, illumination changes, reversal, structural distortion, rotation, etc. We tested the system both online and offline, and the results show that it meets real-time requirements and can therefore be applied to gesture recognition in real-world human–computer interaction systems.
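
A minimal sketch of the feature-fusion idea is given below, assuming a segmented hand depth patch and estimated skeleton joints as inputs. The toy descriptors and the SVM classifier are illustrative stand-ins; the paper's exact features and classifier are not reproduced here.

# Minimal sketch: fusing depth and skeleton features for static digit gestures (0-9).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def depth_features(depth_patch: np.ndarray) -> np.ndarray:
    """Toy descriptor: 8x8 occupancy grid over the segmented hand depth patch."""
    h, w = depth_patch.shape
    grid = depth_patch[: h // 8 * 8, : w // 8 * 8].reshape(8, h // 8, 8, w // 8)
    return (grid > 0).mean(axis=(1, 3)).ravel()  # 64-dim

def skeleton_features(joints: np.ndarray) -> np.ndarray:
    """Toy descriptor: joint positions normalised to the wrist, flattened."""
    rel = joints - joints[0]
    scale = np.linalg.norm(rel, axis=1).max() or 1.0
    return (rel / scale).ravel()

def fused(depth_patch, joints):
    # Feature fusion: concatenate the two descriptors into one vector.
    return np.concatenate([depth_features(depth_patch), skeleton_features(joints)])

# X = np.stack([fused(d, j) for d, j in samples]); y = digit labels in {0..9}
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
# clf.fit(X, y); predictions = clf.predict(X_test)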


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4548
Author(s):  
Subok Kim ◽  
Seoho Park ◽  
Onseok Lee

An inexperienced therapist may lack the skill to analyze a patient's movement. In addition, the patient receives no objective feedback when assessment rests on the therapist's subjective visual judgment. The aim of this study is to provide a guide for in-depth rehabilitation therapy in virtual space by continuously tracking the user's wrist joint during Leap Motion Controller (LMC) activities, and to present baseline data for confirming therapy results in real time. The conventional Box and Block Test (BBT), commonly used in upper extremity rehabilitation therapy, was modeled in proportion to its actual size, with the 3D modeling performed in Autodesk Inventor. The resulting 3D object was implemented in C# through Unity 5.6.2p4 on top of the LMC. After obtaining wrist joint motion values, the motion was analyzed with 3D graphs. Healthy subjects (23 males and 25 females, n = 48) were enrolled in this study. There was no statistically significant difference in block counts between the conventional BBT and the system BBT, which indicates the possibility of effective diagnosis and evaluation of hemiplegic patients post-stroke. The system can track wrist joints, provide continuous real-time feedback in the implemented virtual space, and supply baseline data for an LMC-based quantitative rehabilitation therapy guide.
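
For context, a minimal sketch of how logged wrist-joint samples could be summarized into objective motion metrics is shown below. The (time, x, y, z) logging format and millimetre units are hypothetical; the study's actual Unity/LMC pipeline is not specified here.

# Minimal sketch: post-hoc wrist-trajectory metrics for a virtual BBT session.
import numpy as np

def trajectory_metrics(log: np.ndarray) -> dict:
    """log: (N, 4) array of [time_s, x_mm, y_mm, z_mm] wrist samples per frame."""
    t, xyz = log[:, 0], log[:, 1:]
    path_len = np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum()  # total travel, mm
    rom = xyz.max(axis=0) - xyz.min(axis=0)                        # per-axis range, mm
    mean_speed = path_len / (t[-1] - t[0])                         # mm/s over the session
    return {'path_mm': path_len, 'range_mm': rom, 'mean_speed_mm_s': mean_speed}

Metrics like total path length and per-axis range of motion give the therapist a quantitative counterpart to the block counts reported in the study.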


Author(s):  
Mohit Panwar ◽  
Rohit Pandey ◽  
Rohan Singla ◽  
Kavita Saxena

Every day we encounter many people who are deaf or hard of hearing. There are few technologies that help them interact with others, so they face difficulty communicating. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans sign gesture acquisition through text/speech generation. Sign gestures can be classified as static or dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both recognition systems are important to the human community. This survey describes the steps of an American Sign Language (ASL) recognition system. Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted by other people. Earlier glove-based methods required the person to wear a hardware glove while the hand movements were captured, which is uncomfortable for practical use. Here we use a vision-based method. Convolutional neural networks and a mobile SSD model are employed in this paper to recognize sign language gestures. Preprocessing was performed on the images, which then served as the cleaned input, and TensorFlow was used for training. The result is a system that serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural networks (CNNs), classification, real time, TensorFlow
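
A minimal TensorFlow/Keras sketch of a static-gesture CNN of this kind follows. The 64x64 grayscale input, 26-class output, and layer sizes are assumptions for illustration, not the paper's reported architecture.

# Minimal sketch: a small CNN for static ASL letter images in TensorFlow/Keras.
import tensorflow as tf

def build_model(num_classes: int = 26) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),        # grayscale hand crops
        tf.keras.layers.Rescaling(1.0 / 255),            # normalise pixel values
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.5),                    # regularisation
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])

model = build_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

In a detection pipeline such as the one described, an SSD-style detector would first localize the hand in the frame, and a classifier like this would then label the cropped gesture.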

