Real Time Static Gesture Recognition using Time of Flight Camera

Hand gesture recognition is a challenging task in machine vision due to the similarity between inter-class samples and the high variation within intra-class samples. Gesture recognition that is independent of light intensity and color has drawn attention because such systems must also perform at night. This paper provides an insight into hand gesture recognition using depth data and images collected from a time-of-flight (ToF) camera, and it provides a user interface for tracking natural gestures. The area of interest and the hand region are first segmented using adaptive thresholding and region labeling, under the assumption that the hand is the closest object to the camera. A novel algorithm is proposed to segment the hand region only, and preprocessing algorithms eliminate the measurement noise of the ToF camera. We propose two algorithms for extracting hand gesture features. The first computes the region distance between the fingers: for every row and then for every column, the number of transitions between independent regions is counted, and these counts across rows and columns form the feature vector. This solution easily deals with both static and dynamic gestures. The second algorithm computes a shape descriptor of the gesture boundary in a radial fashion: the distance between the gesture centroid and the shape boundary is measured at angles from 0 to 360 degrees, and these distances form the feature vector. A comparison of results shows that this method is very effective in extracting shape features and is competitive in terms of accuracy and speed.
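As a rough illustration of the first feature described above (not the authors' implementation), counting region transitions per row and per column of a binary hand mask can be sketched as:

```python
def region_transitions(mask):
    """Count background-to-foreground transitions along each row and
    each column of a binary mask; the concatenated counts form a
    simple shape feature vector (illustrative sketch only)."""
    rows = len(mask)
    cols = len(mask[0]) if rows else 0
    feature = []
    for r in range(rows):  # transitions across each row
        feature.append(sum(1 for c in range(1, cols)
                           if mask[r][c] and not mask[r][c - 1]))
    for c in range(cols):  # transitions down each column
        feature.append(sum(1 for r in range(1, rows)
                           if mask[r][c] and not mask[r - 1][c]))
    return feature
```

A row crossing two separated fingers yields a count of 2, while a row through the palm yields 1, which is what makes the per-row/per-column counts discriminative between gestures.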
The gesture recognition algorithms presented in this paper can be used in automotive infotainment systems and consumer electronics, where hardware needs to be cost-effective and the system response must be fast.
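The radial shape descriptor from the abstract, centroid-to-boundary distances sampled over 0-360 degrees, can be sketched as follows (an illustrative binning scheme, not the paper's exact formulation):

```python
import math

def radial_descriptor(boundary, centroid, n_angles=36):
    """For each angular bin around the centroid, record the distance
    to the farthest boundary point falling in that bin; the bins form
    the feature vector (illustrative sketch only)."""
    cx, cy = centroid
    bin_width = 360.0 / n_angles
    bins = [0.0] * n_angles
    for x, y in boundary:
        ang = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
        k = int(ang // bin_width)
        bins[k] = max(bins[k], math.hypot(x - cx, y - cy))
    return bins
```

Normalising the distances by their maximum would additionally make the descriptor scale-invariant.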

2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, accuracy may degrade in practical applications for various reasons, such as electrode positioning bias and differences between subjects. Beyond these, the change in electromyogram signals caused by different arm postures, even for identical hand gestures, is also an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses the accelerometer and electromyogram signals simultaneously to recognize hand gestures correctly across arm postures. For recognition, the electromyogram signals are statistically modeled with the arm posture taken into account. In the experiments, we compared cases that took arm posture into account with cases that disregarded it. When varied arm postures were disregarded, the recognition accuracy was 54.1%, whereas the proposed method achieved an average recognition accuracy of 85.7%, an improvement of 31.6 percentage points. Using the accelerometer and electromyogram signals together compensates for the effect of arm posture on the electromyogram signals and therefore improves the recognition accuracy of hand gestures.
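The core idea, conditioning the EMG model on the posture reported by the accelerometer, can be caricatured with a nearest-template classifier (the paper uses a statistical model; the template dictionary and distance here are hypothetical simplifications):

```python
def classify(emg_feat, acc_posture, templates):
    """Pick the gesture whose stored EMG template for the arm posture
    indicated by the accelerometer is closest (Euclidean distance) to
    the observed EMG feature vector. templates maps
    gesture -> {posture -> feature vector}. Illustrative sketch only."""
    best, best_d = None, float("inf")
    for gesture, per_posture in templates.items():
        ref = per_posture[acc_posture]  # posture-specific template
        d = sum((a - b) ** 2 for a, b in zip(emg_feat, ref)) ** 0.5
        if d < best_d:
            best, best_d = gesture, d
    return best
```

A posture-agnostic classifier would use a single template per gesture; storing one template per (gesture, posture) pair is what absorbs the posture-induced shift in the EMG signal.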


2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Peng Liu ◽  
Xiangxiang Li ◽  
Haiting Cui ◽  
Shanshan Li ◽  
Yafei Yuan

Hand gesture recognition is an intuitive and effective way for humans to interact with a computer, given its high processing speed and recognition accuracy. This paper proposes a novel approach to identifying hand gestures in complex scenes using the Single-Shot Multibox Detector (SSD) deep learning algorithm with a 19-layer neural network. A benchmark gesture database is used, and common hand gestures in complex scenes are chosen as the processing objects. A real-time hand gesture recognition system based on the SSD algorithm is constructed and tested. The experimental results show that the algorithm quickly identifies human hands and accurately distinguishes different types of gestures. Furthermore, the maximum accuracy is 99.2%, which is significant for human-computer interaction applications.
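SSD-style detectors emit many overlapping candidate boxes per hand, and the standard final step for keeping one box per detection is non-maximum suppression. A minimal generic sketch (not taken from this paper's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(dets, thr=0.5):
    """dets: list of (score, box). Greedily keep the highest-scoring
    boxes whose overlap with every already-kept box is below thr."""
    kept = []
    for score, box in sorted(dets, reverse=True):
        if all(iou(box, k) < thr for _, k in kept):
            kept.append((score, box))
    return kept
```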


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is the technology's numerous applications in various domains (e.g., robotics, games, medicine, automotive). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, time of flight) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD), CamCube), this has sparked recent interest in the domain. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we propose a deep neural network solution employing the PointNet architecture for hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset and propose a multistage hand segmentation comprising filtering, clustering, locating the hand in the volume of interest, and hand-forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both obtained from the same stream. Beyond the inherent advantages of 3D technology, the 3D method using PointNet is shown to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
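A first, crude stage of the kind of depth-based hand segmentation described above is a depth slab around the nearest point, assuming the hand is the object closest to the ToF sensor. This sketch stands in for the paper's full filtering/clustering pipeline (the `slab` width is a hypothetical parameter):

```python
def segment_hand(points, slab=0.15):
    """Keep the 3D points whose depth (z, in metres) lies within
    `slab` of the nearest point, assuming the hand is the closest
    object to the sensor. points: iterable of (x, y, z).
    A crude stand-in for a multistage segmentation pipeline."""
    z_min = min(p[2] for p in points)
    return [p for p in points if p[2] - z_min <= slab]
```

A real pipeline would follow this with clustering to reject other near objects and a wrist cut to separate hand from forearm, as the abstract describes.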


2016 ◽  
Vol 11 (1) ◽  
pp. 30-35
Author(s):  
Manoj Acharya ◽  
Dibakar Raj Pant

This paper proposes a method to recognize static hand gestures in an image or video of a person performing Nepali Sign Language (NSL) and to translate them into words and sentences. Classification is carried out using a neural network, with the contour of the hand as the feature. The work is verified successfully for NSL recognition using signer dependency analysis.


2020 ◽  
Vol 7 (2) ◽  
pp. 164
Author(s):  
Aditiya Anwar ◽  
Achmad Basuki ◽  
Riyanto Sigit

Hand gestures are a means of communication for deaf people and others, and each hand gesture has a different meaning. To communicate better, an automatic translator is needed that can recognize hand movements as words or sentences when communicating with deaf people. This paper proposes a system to recognize hand gestures based on the Indonesian Sign Language Standard. The system uses the Myo Armband as the hand gesture sensor; the Myo Armband has 21 sensors that capture the hand gesture data. The recognition process uses a Support Vector Machine (SVM) to classify hand gestures against a dataset of the Indonesian Sign Language Standard. The SVM achieves an accuracy of 86.59% in recognizing hand gestures as sign language.

Keywords: Hand Gesture Recognition, Feature Extraction, Indonesian Sign Language, Myo Armband, Moment Invariant
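Armband EMG classification pipelines of this kind typically reduce each window of multichannel samples to a compact per-channel feature vector before the SVM. A common, simple choice is mean absolute value (MAV); the paper's exact features include moment invariants, so this is only an illustrative stand-in:

```python
def mav_features(window):
    """Mean absolute value per sensor channel over a window of
    samples. window: list of per-sample channel readings, e.g.
    [[ch0, ch1, ...], ...]. Output length equals the channel count."""
    n = len(window)
    n_ch = len(window[0])
    return [sum(abs(sample[c]) for sample in window) / n
            for c in range(n_ch)]
```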


2019 ◽  
Vol 8 (4) ◽  
pp. 1027-1029

Gesture recognition technologies are still relatively new. Hand gesture recognition is commonly implemented using a glove-based technique. In our project, a vehicle is controlled by hand gestures: the gesture-controlled robot car is driven by hand signs rather than by the buttons used in older designs. The controller only needs to wear a small transmitter on the hand, which contains an accelerometer; the car receives the signals through an RF receiver. We build on ideas from previous projects and add extra features, using different sensors for better performance. Earlier designs suffered from transmission problems, so we modified the transmission stage and added an extra antenna to extend the transmission range. The microcontroller moves the robot in the same direction as the hand moves.
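On the receiving microcontroller, the accelerometer tilt is typically mapped to a drive command with simple thresholds. A minimal sketch of that mapping (the threshold value and command names are hypothetical, not from this project):

```python
def tilt_to_command(ax, ay, thr=0.3):
    """Map normalised accelerometer tilt components (ax, ay) from the
    hand-mounted transmitter to a drive command; below the threshold
    in both axes, the car stops. Illustrative sketch only."""
    if ay > thr:
        return "forward"
    if ay < -thr:
        return "backward"
    if ax > thr:
        return "right"
    if ax < -thr:
        return "left"
    return "stop"
```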


Author(s):  
Priyanka R. ◽  
Prahanya Sriram ◽  
Jayasree L. N. ◽  
Angelin Gladston

Gesture recognition is the most intuitive form of human-computer interface. Hand gestures provide a natural way for humans to interact with computers across a variety of applications. However, factors such as the complexity of hand gesture structures, differences in hand size, hand posture, and environmental illumination can influence the performance of hand gesture recognition algorithms. Considering these factors, this paper presents a real-time system for hand gesture recognition based on the detection of meaningful shape-based features, such as orientation, center of mass, the status of the fingers and thumb (raised or folded), and their respective locations in the image. The internet is growing at a very fast pace, and web browser use is growing with it; most users have two or three frequently visited websites. Thus, in this paper, the effectiveness of the gesture recognition and its ability to control the browser via the recognized hand gestures are tested experimentally and the results are analyzed.
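Of the shape-based features listed above, the center of mass is the simplest to compute from a segmented binary hand mask. A minimal sketch (illustrative only; the paper's feature set also covers orientation and finger status):

```python
def center_of_mass(mask):
    """Centroid (row, col) of the foreground pixels in a binary hand
    mask, i.e. the mean position of all pixels set to 1."""
    pts = [(r, c)
           for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)
```

The centroid then serves as the reference point for locating raised fingers relative to the palm.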


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3548 ◽  
Author(s):  
Piotr Kaczmarek ◽  
Tomasz Mańkowski ◽  
Jakub Tomczyński

In this paper, we present the putEMG dataset, intended for the evaluation of hand gesture recognition methods based on the sEMG signal. The dataset was acquired from 44 able-bodied subjects and includes 8 gestures (3 full-hand gestures, 4 pinches, and idle). It consists of uninterrupted recordings of 24 sEMG channels from the subject's forearm, an RGB video stream, and depth camera images used for hand motion tracking. Exemplary processing scripts are also published. The putEMG dataset is available under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0). The dataset was validated with respect to sEMG amplitudes and gesture recognition performance. Classification was performed using state-of-the-art classifiers and feature sets. An accuracy of 90% was achieved for an SVM classifier using the RMS feature and for an LDA classifier using Hudgins' and Du's feature sets. Analysis of per-gesture performance showed that the LDA/Du combination has significantly higher accuracy for full-hand gestures, while SVM/RMS performs better for pinch gestures. The presented dataset can be used as a benchmark for various classification methods, for the evaluation of electrode localisation concepts, or for the development of classification methods invariant to user-specific features or electrode displacement.
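The RMS feature reported with the SVM baseline is the root-mean-square of each sEMG channel over a sample window. A minimal sketch of that feature extraction (window shape is an assumption about how the recordings would be sliced):

```python
def rms_features(window):
    """Root-mean-square per sEMG channel over a window of samples.
    window: list of per-sample channel readings [[ch0, ch1, ...], ...];
    returns one RMS value per channel."""
    n = len(window)
    n_ch = len(window[0])
    return [(sum(sample[c] ** 2 for sample in window) / n) ** 0.5
            for c in range(n_ch)]
```

For the 24-channel putEMG recordings this yields a 24-dimensional feature vector per window, which is then fed to the classifier.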

