Real-Time 3D Hand Gesture Detection from Depth Images

2013 ◽  
Vol 756-759 ◽  
pp. 4138-4142 ◽  
Author(s):  
Lin Song ◽  
Rui Min Hu ◽  
Hua Zhang ◽  
Yu Lian Xiao ◽  
Li Yu Gong

In this paper, we describe a real-time algorithm for detecting 3D hand gestures from depth images. First, we detect moving regions by frame difference; then, the regions are refined by removing small regions and boundary regions; finally, the foremost region is selected and its trajectory is classified using an automatic state machine. Experiments on sequences captured with Microsoft Kinect for Xbox show the effectiveness and efficiency of our system.
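
As an illustration only (not the authors' code), the first two stages could look like the following OpenCV sketch; the thresholds, minimum area, and 8-bit depth-map assumption are my own.

```python
# Sketch of moving-region detection by frame difference, region
# refinement, and foremost-region selection; parameters are assumptions.
import cv2
import numpy as np

def moving_regions(prev_depth, curr_depth, diff_thresh=10, min_area=500):
    """Detect moving regions by frame difference, then drop small
    regions and regions touching the image boundary (8-bit depth maps)."""
    diff = cv2.absdiff(curr_depth, prev_depth)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = mask.shape
    kept = []
    for c in contours:
        x, y, cw, ch = cv2.boundingRect(c)
        touches_border = x == 0 or y == 0 or x + cw == w or y + ch == h
        if cv2.contourArea(c) >= min_area and not touches_border:
            kept.append(c)
    return kept

def foremost_region(depth, contours):
    """Select the region closest to the camera (smallest mean depth)."""
    best, best_depth = None, np.inf
    for c in contours:
        mask = np.zeros(depth.shape, np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)
        mean_depth = cv2.mean(depth, mask=mask)[0]
        if mean_depth < best_depth:
            best, best_depth = c, mean_depth
    return best
```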

2021 ◽  
Vol 39 (6) ◽  
pp. 1031-1040
Author(s):  
Azher A. Fahad ◽  
Hassan J. Hassan ◽  
Salma H. Abdullah

Hand gesture recognition is a form of communication that uses bodily behavior to transmit messages. This paper aims to detect hand gestures with a mobile device camera and create a custom dataset used to train a deep learning model for hand gesture recognition. A real-time approach was used for all these objectives: the first step is hand area detection; the second step stores the hand area in dataset form for future model training. A framework for human interaction was put in place by analyzing pictures recorded by the camera. Images are converted from the RGB color space to greyscale, and blurring is applied to remove object noise effectively. To highlight the edges and curves of the hand, thresholding is applied, and complex-background subtraction is used to detect moving objects from a static camera. The objectives of the paper were reliable and favorable, helping deaf and mute people interact with the environment through sign language by extracting hand movements. Python was used as the programming language to detect hand gestures. This work provides an efficient hand gesture detection process that addresses the problem of framing from real-time video.
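
A minimal sketch of the described preprocessing chain (greyscale conversion, blur, threshold, background subtraction) with OpenCV; the kernel size, Otsu thresholding, and MOG2 subtractor are assumptions, not necessarily the paper's exact choices.

```python
# Preprocessing chain: greyscale -> blur -> threshold -> background subtraction.
import cv2

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def preprocess(frame_bgr):
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # RGB -> greyscale
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)          # remove object noise
    _, edges = cv2.threshold(blurred, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # hand edges/curves
    fg_mask = bg_subtractor.apply(frame_bgr)             # drop the static background
    return cv2.bitwise_and(edges, fg_mask)
```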


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most other people, which creates a communication barrier. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect can capture 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian sign language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
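
A hedged sketch of the 80/20 split and a small CNN on depth images, using Keras; the architecture, input size (64x64 single-channel depth maps), and file names are assumptions for illustration, not the paper's model.

```python
# Train a small CNN on depth images with an 80/20 train/test split.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split

# Hypothetical arrays: (N, 64, 64, 1) depth maps and labels in [0, 36).
depth_images = np.load("depth_images.npy")
labels = np.load("labels.npy")

x_train, x_test, y_train, y_test = train_test_split(
    depth_images, labels, test_size=0.2, stratify=labels)

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(36, activation="softmax"),  # 26 letters + 10 digits
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```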


2017 ◽  
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network focused on the recognition and localization of hand gestures, in this case two types of gestures, open and closed hand, in order to recognize such gestures against dynamic backgrounds. The neural network is trained and validated, achieving 99.4% validation accuracy in gesture recognition and 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through the times taken for recognition and its behavior on trained and untrained gestures and complex backgrounds.
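
For illustration, a region-based detector for two gesture classes could be set up as below with torchvision's Faster R-CNN, a stand-in for the paper's network rather than its actual implementation; the class count includes the background class.

```python
# Fine-tuning a region-based CNN for open-hand / closed-hand detection.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + open hand + closed hand

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Inference on a dummy frame: each prediction has boxes, labels, scores.
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 480, 640)])
print(preds[0]["boxes"], preds[0]["labels"], preds[0]["scores"])
```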


2013 ◽  
Vol 765-767 ◽  
pp. 2826-2829 ◽  
Author(s):  
Song Lin ◽  
Rui Min Hu ◽  
Yu Lian Xiao ◽  
Li Yu Gong

In this paper, we propose a novel real-time 3D hand gesture recognition algorithm based on depth information. We segment the hand region from the depth image and convert it to a point cloud. Then, 3D moment invariant features are computed on the point cloud. Finally, a support vector machine (SVM) is employed to classify the shape of the hand into different categories. We collected a benchmark dataset using Microsoft Kinect for Xbox and tested the proposed algorithm on it. Experimental results demonstrate the robustness of our proposed algorithm.
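
An illustrative sketch of this pipeline: segment the hand by an assumed depth band, lift it to a point cloud, compute scale-normalized second-order central moments as simple shape features, and classify with an SVM. The depth band and moment set are my assumptions, not the paper's exact invariants.

```python
# Depth segmentation -> point cloud -> 3D moment features -> SVM.
import numpy as np
from sklearn.svm import SVC

def depth_to_cloud(depth, near=400, far=800):
    """Keep pixels in an assumed hand depth band; return (x, y, z) points."""
    ys, xs = np.nonzero((depth > near) & (depth < far))
    return np.column_stack([xs, ys, depth[ys, xs]]).astype(np.float64)

def central_moments_3d(cloud):
    """Scale-normalized second-order central moments of a point cloud."""
    centered = cloud - cloud.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    i, j = np.triu_indices(3)            # six independent covariance entries
    return cov[i, j] / np.trace(cov)     # normalize out overall scale

# With features X stacked over training clouds and gesture labels y:
# clf = SVC(kernel="rbf").fit(X, y); clf.predict(central_moments_3d(cloud)[None])
```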


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4566
Author(s):  
Chanhwi Lee ◽  
Jaehan Kim ◽  
Seoungbae Cho ◽  
Jinwoong Kim ◽  
Jisang Yoo ◽  
...  

The use of human gestures to interact with devices such as computers or smartphones has presented several problems. This form of interaction relies on gesture interaction technology such as Leap Motion from Leap Motion, Inc., which enables humans to use hand gestures to interact with a computer. The technology has excellent hand detection performance and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph by simply folding and opening the palm. Research on interaction with other devices via hand gestures is in progress, as are studies on creating hologram displays from objects that actually exist. We propose a hand gesture recognition system that can control a tabletop holographic display based on an actual object. The depth image obtained using the latest Time-of-Flight depth camera, Azure Kinect, is used to extract information about the hand and hand joints with the deep-learning model CrossInfoNet. Using this information, we developed a real-time system that defines and recognizes gestures for the basic rotations left, right, up, and down, as well as zoom in, zoom out, and continuous rotation to the left and right.
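
A hedged sketch of the final stage: turning per-frame hand-joint positions (such as those CrossInfoNet estimates from Azure Kinect depth frames) into display commands. The trajectory thresholds, spread ratio, and command names are assumptions for illustration.

```python
# Map palm trajectories and fingertip spread to rotation/zoom commands.
import numpy as np

SWIPE_THRESH = 80  # assumed minimum palm displacement in pixels

def classify_swipe(palm_track):
    """Map a short palm trajectory to a left/right/up/down rotation."""
    dx, dy = np.asarray(palm_track[-1]) - np.asarray(palm_track[0])
    if max(abs(dx), abs(dy)) < SWIPE_THRESH:
        return None
    if abs(dx) > abs(dy):
        return "rotate_right" if dx > 0 else "rotate_left"
    return "rotate_down" if dy > 0 else "rotate_up"

def classify_zoom(spread_start, spread_end, ratio=1.3):
    """Zoom in when the fingertips spread apart, out when they close."""
    if spread_end > ratio * spread_start:
        return "zoom_in"
    if spread_start > ratio * spread_end:
        return "zoom_out"
    return None
```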

