A Novel Approach to Extract Hand Gesture Feature in Depth Images

Author(s):  
Honghai Liu ◽  
Zhaojie Ju ◽  
Xiaofei Ji ◽  
Chee Seng Chan ◽  
Mehdi Khoury
2015 ◽  
Vol 75 (19) ◽  
pp. 11929-11943 ◽  
Author(s):  
Zhaojie Ju ◽  
Dongxu Gao ◽  
Jiangtao Cao ◽  
Honghai Liu

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate. However, most hearing people do not know sign language, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures a 3D (depth) image, which makes classification more accurate. Result: The Kinect produces distinct images for the hand gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language; our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance of all the models. All these results were obtained on a PYNQ Z2 board.
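The 80/20 split over the 36 gesture classes described above can be sketched as follows; the file names and helper are illustrative assumptions, not the authors' code:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle and split a list of (image_path, label) pairs 80/20."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 36 classes: alphabets A-Z plus digits 0-9, as in the abstract.
labels = [chr(ord('A') + i) for i in range(26)] + [str(d) for d in range(10)]

# Hypothetical file names standing in for the 46,339 depth images.
samples = [(f"depth_{i}.png", labels[i % 36]) for i in range(46339)]
train, test = split_dataset(samples)
print(len(train), len(test))  # 37071 training images, 9268 test images
```

The same split would be applied to the RGB images so that the CNN trained on depth data can be compared against models trained on colour data.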


2019 ◽  
Vol 11 ◽  
pp. 175682931882232
Author(s):  
Navid Dorudian ◽  
Stanislao Lauria ◽  
Stephen Swift

A novel approach to detecting micro air vehicles in GPS-denied environments using an external RGB-D sensor is presented. The nonparametric background subtraction technique, incorporating several innovative mechanisms, allows the detection of fast-moving micro air vehicles by combining colour and depth information. The proposed method stores several colour and depth images as models and then compares each pixel of a frame with the stored models to classify the pixel as background or foreground. To adapt to scene changes, once a pixel is classified as background, the system updates the model by finding the stored pixel closest to the camera and substituting it with the current pixel. The presented background-model update uses different criteria from existing methods. Additionally, a blind update model is added to adapt to sudden background changes. The proposed architecture is compared with existing techniques using two different micro air vehicles and publicly available datasets. Results showing improvements over existing methods are discussed.
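A minimal per-pixel sketch of this kind of sample-based background subtraction, assuming illustrative thresholds and class names (this is not the authors' implementation):

```python
import numpy as np

class PixelModel:
    """Stores several colour/depth samples for one pixel, as in
    sample-based (nonparametric) background subtraction."""

    def __init__(self, colours, depths, colour_thr=30.0, depth_thr=40.0):
        self.colours = [np.asarray(c, float) for c in colours]  # stored RGB samples
        self.depths = list(depths)                              # stored depth samples (mm)
        self.colour_thr = colour_thr
        self.depth_thr = depth_thr

    def classify(self, colour, depth, min_matches=2):
        """Background if enough stored samples are close in colour AND depth."""
        matches = sum(
            1
            for c, d in zip(self.colours, self.depths)
            if np.linalg.norm(np.asarray(colour, float) - c) < self.colour_thr
            and abs(depth - d) < self.depth_thr
        )
        return "background" if matches >= min_matches else "foreground"

    def update(self, colour, depth):
        """On a background classification, replace the stored sample closest
        to the camera (smallest depth) with the current observation."""
        i = int(np.argmin(self.depths))
        self.colours[i] = np.asarray(colour, float)
        self.depths[i] = depth
```

Requiring agreement in both colour and depth is what lets the detector separate a drone from background of similar colour, since the depth samples disagree even when the colour samples match.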


Author(s):  
Rafael Radkowski ◽  
Christian Stritzke

This paper presents a comparison between 2D and 3D interaction techniques for Augmented Reality (AR) applications. The interaction techniques are based on hand gestures and a computer vision-based hand gesture recognition system. We compared 2D gestures and 3D gestures for interaction in an AR application. The 3D recognition system is based on a video camera that provides a depth image in addition to each 2D colour image, so spatial interactions become possible. Our central question during this work was: do depth images and 3D interaction techniques improve interaction with AR applications and with virtual 3D objects? To answer it, we tested and compared the hand gesture recognition systems. The results show two things. First, depth images facilitate more robust hand recognition and gesture identification. Second, the results strongly indicate that 3D hand gesture interaction techniques are more intuitive than 2D hand gesture interaction techniques. In summary, the results emphasize that depth images improve hand gesture interaction for AR applications.


2013 ◽  
Vol 756-759 ◽  
pp. 4138-4142 ◽  
Author(s):  
Lin Song ◽  
Rui Min Hu ◽  
Hua Zhang ◽  
Yu Lian Xiao ◽  
Li Yu Gong

In this paper, we describe a real-time algorithm to detect 3D hand gestures in depth images. First, we detect moving regions by frame differencing; then, the regions are refined by removing small regions and boundary regions; finally, the foremost region is selected and its trajectories are classified using an automatic state machine. Experiments on sequences captured with Microsoft Kinect for Xbox show the effectiveness and efficiency of our system.
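The first stages of this pipeline (frame difference, region refinement, foremost-point selection) can be sketched as below; function names, thresholds, and the simplified border-based refinement are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def moving_mask(prev_depth, curr_depth, thr=25):
    """Binary mask of pixels whose depth changed between consecutive frames."""
    return np.abs(curr_depth.astype(int) - prev_depth.astype(int)) > thr

def refine(mask, border=1, min_area=4):
    """Discard motion touching the image border and reject tiny noise regions."""
    m = mask.copy()
    m[:border, :] = False
    m[-border:, :] = False
    m[:, :border] = False
    m[:, -border:] = False
    return m if m.sum() >= min_area else np.zeros_like(m)

def foremost_point(mask, depth):
    """Pick the moving pixel closest to the camera (smallest depth) as the
    foremost point; its position over time forms the gesture trajectory."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    i = np.argmin(depth[ys, xs])
    return int(ys[i]), int(xs[i])
```

The sequence of foremost points across frames would then be fed to the state machine that classifies the trajectory into a gesture.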

