Online hand gesture recognition using enhanced $N recogniser based on a depth camera

Author(s): Kisang Kim, Hyung Il Choi
2015, Vol. 17 (1), pp. 29-39

Author(s): Chong Wang, Zhong Liu, Shing-Chow Chan

Author(s): Dina Satybaldina, Gulzia Kalymova

Hand gesture recognition has become a popular topic in deep learning and offers many applications for bridging the human-computer barrier, with a positive impact on daily life. The primary idea of our project is to acquire static gestures from a depth camera and to process the input images in order to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm and object classification. For pre-processing and image segmentation, computer vision methods from the OpenCV and Intel RealSense libraries are used. The subsystem for feature extraction and gesture classification is based on a modified VGG-16, implemented with the TensorFlow/Keras deep learning framework. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
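
To illustrate the pipeline described above, the sketch below shows how such a system could be assembled with pyrealsense2, OpenCV and TensorFlow/Keras: depth-based hand segmentation followed by a VGG-16 transfer-learning classifier. The depth cut-off, input resolution, number of gesture classes and classification head are assumed values for illustration only; the abstract does not specify them, and the authors' actual modification of VGG-16 may differ.

# Minimal sketch (not the authors' code): depth-based hand segmentation with
# pyrealsense2/OpenCV and an ImageNet-pre-trained VGG-16 classifier in Keras.
# Depth cut-off, input size, class count and head layers are assumptions.
import numpy as np
import cv2
import pyrealsense2 as rs
import tensorflow as tf

IMG_SIZE = 224          # assumed VGG-16 input resolution
NUM_CLASSES = 10        # assumed number of static gestures
MAX_DEPTH_MM = 600      # assumed cut-off: keep pixels closer than 60 cm

def capture_depth_and_color():
    """Grab one aligned depth/color frame pair from a RealSense D435."""
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)
    try:
        frames = rs.align(rs.stream.color).process(pipeline.wait_for_frames())
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        color = np.asanyarray(frames.get_color_frame().get_data())
    finally:
        pipeline.stop()
    return depth, color

def segment_hand(depth, color):
    """Keep only near pixels and crop the color image to the largest blob."""
    mask = ((depth > 0) & (depth < MAX_DEPTH_MM)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    hand = cv2.bitwise_and(color, color, mask=mask)[y:y + h, x:x + w]
    return cv2.resize(hand, (IMG_SIZE, IMG_SIZE))

def build_model():
    """VGG-16 pre-trained on ImageNet with a small classification head."""
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(IMG_SIZE, IMG_SIZE, 3))
    base.trainable = False  # freezing the backbone is an assumption
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    depth, color = capture_depth_and_color()
    hand = segment_hand(depth, color)
    model = build_model()          # would be trained on the gesture database
    if hand is not None:
        x = tf.keras.applications.vgg16.preprocess_input(
            hand[np.newaxis].astype(np.float32))
        print(model.predict(x).argmax(axis=1))  # untrained head: shape check only

In a full system, build_model() would be fitted on the labelled gesture images (e.g. the 2000-image database mentioned above) before the prediction step, and accuracy, precision and recall would be reported on a held-out test split.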

