Hand Gesture Recognition Based on Computer Vision: A Review of Techniques

2020 ◽  
Vol 6 (8) ◽  
pp. 73 ◽  
Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research papers on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. Hand signs themselves can be classified under several headings, such as posture versus gesture, dynamic versus static, or a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques, and covers their similarities and differences, the hand segmentation technique used, classification algorithms and their drawbacks, the number and types of gestures, the dataset used, the detection range (distance), and the type of camera used. Overall, the paper provides a thorough general overview of hand gesture methods, with a brief discussion of some possible applications.

2016 ◽  
Vol 11 (1) ◽  
pp. 30-35
Author(s):  
Manoj Acharya ◽  
Dibakar Raj Pant

This paper proposes a method to recognize static hand gestures in an image or video in which a person is performing Nepali Sign Language (NSL), and to translate them into words and sentences. Classification is carried out using a neural network, with the contour of the hand as the feature. The work is verified successfully for NSL recognition using signer-dependency analysis.
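
As a rough illustration of this contour-feature pipeline, the sketch below extracts a Hu-moment descriptor from the largest contour of a binarised hand image and trains a small neural network on it. OpenCV and scikit-learn are assumed, Hu moments stand in for the paper's unspecified contour feature, and the layer sizes are likewise illustrative.

```python
# Sketch: contour-based features + neural network classifier for static
# gestures. Hu moments stand in for the paper's (unspecified) contour
# descriptor; the MLP layout is likewise an assumption.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def contour_features(gray_image):
    """Extract a 7-D Hu-moment descriptor from the largest contour."""
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)           # assume hand = biggest blob
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # log-scale for stability

def train(images, labels):
    """images: grayscale hand images; labels: their NSL sign labels."""
    X = np.array([contour_features(img) for img in images])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
    return clf.fit(X, labels)
```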


Author(s):  
Pranjali Manmode ◽  
Rupali Saha ◽  
Manisha N. Amnerkar

With the rapid development of computer vision, the demand for interaction between humans and machines is becoming more and more extensive. Since hand gestures can express rich information, hand gesture recognition is widely used in robot control, intelligent furniture, and other fields. This paper realizes hand gesture segmentation by building a skin color model and an AdaBoost classifier based on Haar features, exploiting the particular color of skin and the variability of hand gestures, with single frames of video extracted for analysis. In this way, the human hand is segmented from a complicated background. The CamShift algorithm then provides real-time hand gesture tracking. Finally, the hand region detected in real time is passed to a convolutional neural network to recognize 10 common digits. Experiments show 98.3% recognition accuracy.
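
The segmentation-plus-tracking stage described here can be sketched with OpenCV's built-in CamShift. The YCrCb skin thresholds below are common textbook values, not the paper's calibrated skin color model, and the sketch assumes the hand is visible in the first frame.

```python
# Sketch: YCrCb skin-colour segmentation plus CamShift tracking.
# The Cr/Cb thresholds are common textbook values, not the paper's.
import cv2
import numpy as np

LOWER = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
UPPER = np.array([255, 173, 127], dtype=np.uint8)

def skin_mask(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, LOWER, UPPER)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = cv2.boundingRect(skin_mask(frame))    # initial hand window
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = skin_mask(frame)
    # CamShift adapts the window to the skin-probability map each frame.
    box, (x, y, w, h) = cv2.CamShift(mask, (x, y, w, h), term)
    cv2.polylines(frame, [np.int32(cv2.boxPoints(box))], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break
```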


2014 ◽  
Vol 14 (01n02) ◽  
pp. 1450006 ◽  
Author(s):  
Mahmood Jasim ◽  
Tao Zhang ◽  
Md. Hasanuzzaman

This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. A Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods, and classified using the nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using novel text-based principal directional features (PDFs), which are generated from the segmented image sequences, and classified using the longest common subsequence (LCS) algorithm. For testing, a Chinese numeral gesture dataset containing static hand poses and a directional gesture dataset containing complex dynamic gestures were prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%, and that of LBP-based static recognition is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDFs on the directional gesture dataset is 94%.
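
The LCS classification step lends itself to a compact illustration. The sketch below assumes each dynamic gesture has already been reduced to a string of direction codes, which is roughly the role the paper's principal directional features play; the template strings and gesture names are invented for illustration.

```python
# Sketch: classifying a dynamic gesture by longest common subsequence
# against direction-code templates. The templates are invented examples;
# the paper's principal directional features (PDFs) play this role.

def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

TEMPLATES = {                    # hypothetical direction-string templates
    "circle": "RDDLLUUR",
    "swipe_right": "RRRRRR",
    "zigzag": "RDRDRD",
}

def classify(observed: str) -> str:
    # Normalise by template length so long templates are not favoured.
    return max(TEMPLATES, key=lambda g: lcs_length(observed, TEMPLATES[g])
                                        / len(TEMPLATES[g]))

print(classify("RRDRRR"))        # -> 'swipe_right'
```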


Author(s):  
Seema Rawat ◽  
Praveen Kumar ◽  
Ishita Singh ◽  
Shourya Banerjee ◽  
Shabana Urooj ◽  
...  

Human-Computer Interaction (HCI) interfaces need unambiguous instructions in the form of mouse clicks or keyboard taps from the user, which makes them cumbersome. To simplify this monotonous task, a real-time hand gesture recognition method using computer vision, image, and video processing techniques is proposed. Infection control has become a major concern in healthcare environments: input devices such as keyboards, mice, and touch screens can be breeding grounds for micro-pathogens and bacteria. Using the hands directly as an input device is an innovative way of providing natural HCI with minimal physical contact with devices, i.e., less transmission of bacteria, which can help prevent cross-infection. A convolutional neural network (CNN) is used for object detection and classification. A CNN architecture for 3D object recognition is proposed, consisting of two models: 1) a detector, a CNN for detecting gestures; and 2) a classifier, a CNN for classifying the detected gestures. By using dynamic hand gesture recognition to interact with the system, interaction can be made richer than with other input methods, thanks to the multidimensional nature of hand gestures. The dynamic hand gesture recognition method aims to replace the mouse for interaction with virtual objects. This work centres on implementing a method that employs computer vision algorithms and gesture recognition techniques to develop a low-cost interface for interacting with objects in a virtual environment, such as screens, using hand gestures.
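
The detector-plus-classifier pairing can be sketched as two small networks, one gating the other. A minimal PyTorch sketch follows; the layer sizes, input resolution, and class count are assumptions, since the abstract gives no architectural details.

```python
# Sketch: the two-model pattern the paper describes -- a light detector
# that fires on "a gesture is present", gating a heavier classifier.
# Layer sizes and names are assumptions; the abstract gives no details.
import torch
import torch.nn as nn

def small_cnn(out_classes: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, out_classes),   # for 64x64 grayscale input
    )

detector = small_cnn(2)      # gesture present / absent
classifier = small_cnn(10)   # e.g. 10 gesture classes (assumed count)

def recognise(frame: torch.Tensor) -> int | None:
    """frame: (1, 1, 64, 64) normalised tensor. Returns class id or None."""
    with torch.no_grad():
        if detector(frame).argmax(1).item() == 1:   # gesture detected
            return classifier(frame).argmax(1).item()
    return None
```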


Author(s):  
K M Bilvika ◽  
Sneha B K ◽  
Sahana K M ◽  
Tejaswini S M Patil

In human-computer interaction and sign language interpretation, hand gesture recognition and face detection have become predominant topics in computer vision research. The primary goal of the proposed system is to identify hand gestures and detect faces in order to convey information for controlling a media player. For people who are deaf and mute, sign language is a common, efficient, and alternative way of talking; with hand and facial gestures they can be easily understood. Here the hand and face are used directly as input to the device, so no intermediate medium is needed for gesture identification.
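
Once a gesture is recognised, driving the media player reduces to mapping gesture labels to key events. A minimal sketch using pyautogui's media keys follows; the gesture names and key bindings are illustrative assumptions, not the system's actual mapping.

```python
# Sketch: dispatching recognised gestures to media-player key events.
# Gesture names and bindings are illustrative assumptions; pyautogui
# sends the keystrokes to whichever player currently has focus.
import pyautogui

GESTURE_ACTIONS = {
    "open_palm": "playpause",
    "fist": "stop",
    "swipe_left": "prevtrack",
    "swipe_right": "nexttrack",
    "thumb_up": "volumeup",
    "thumb_down": "volumedown",
}

def dispatch(gesture: str) -> None:
    key = GESTURE_ACTIONS.get(gesture)
    if key:
        pyautogui.press(key)   # emit the corresponding media key
```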


Sign language recognition is important for natural and convenient communication between the deaf community and the hearing majority. Hand gestures are a form of nonverbal communication that makes up the bulk of communication among mute individuals, as sign language consists largely of hand gestures. Research works based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. Hand signs themselves can be classified under several headings, such as posture versus gesture, dynamic versus static, or a hybrid of the two. This paper focuses on a review of the literature on computer-based sign language recognition approaches, their motivations, techniques, observed limitations, and suggestions for improvement.


Author(s):  
DSS Varshika

In this project we try to control a media player using hand gestures with the help of OpenCV and Python. Computer applications require interaction between human and computer, and this interaction needs to be unrestricted, which is challenging with traditional input devices such as the keyboard, mouse, and pen. The hand gesture is an important component of body language in linguistics, and human-computer interaction becomes easier when the hand itself can be used as the device. Using hand gestures to operate machines makes interaction engaging, and gesture recognition has accordingly gained a lot of importance. Hand gestures are used to control various applications such as Windows Media Player, robot control, and gaming. Gesture-based interaction is easy and convenient and does not require any extra device. Vision and audio recognition can be used together, but audio commands may not work in noisy environments.
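
A common OpenCV route to this kind of control, though not necessarily this project's exact method, is to count extended fingers from convexity defects of the hand contour and map the count to player actions. A sketch, assuming a clean binary hand mask is already available:

```python
# Sketch: counting extended fingers from convexity defects -- a common
# OpenCV approach to gesture-based media control, not necessarily this
# project's exact method. Expects a clean binary hand mask.
import cv2
import numpy as np

def count_fingers(mask: np.ndarray) -> int:
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c))
        if angle < np.pi / 2 and depth > 2500:   # deep, acute valley =>
            fingers += 1                          # gap between two fingers
    return fingers + 1 if fingers else 0
```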


2020 ◽  
Author(s):  
Nirmala J S ◽  
Ajeet Kumar ◽  
Adith Jose E A ◽  
Kapil Kumar ◽  
Abhishek R Malvadkar

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate. But sign language is not known to most other people, which creates a communication barrier; this is the problem faced by speech-impaired people. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect can capture 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures for '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures for Indian Sign Language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ Z2 board.
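
The 80/20 split and the depth-image CNN can be illustrated briefly. The PyTorch sketch below uses random placeholder tensors in place of the Kinect data, and the layer sizes and input resolution are assumptions; the 36-class output and the 80/20 split follow the abstract.

```python
# Sketch: 80/20 split and a small CNN over single-channel depth images,
# as in the paper's best-performing configuration. Layer sizes are
# assumptions; 36 classes follows the abstract (A-Z plus 0-9).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def make_model(num_classes: int = 36) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 16 * 16, num_classes),  # 64x64 input
    )

# Placeholder tensors stand in for the 46,339-image Kinect dataset.
depth = torch.rand(1000, 1, 64, 64)
labels = torch.randint(0, 36, (1000,))
data = TensorDataset(depth, labels)
n_train = int(0.8 * len(data))                   # 80/20 split as in the paper
train_set, test_set = random_split(data, [n_train, len(data) - n_train])

model = make_model()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for xb, yb in DataLoader(train_set, batch_size=64, shuffle=True):
    opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    opt.step()
```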


Author(s):  
Jonas Robin ◽  
Mehul Rajesh Soni ◽  
Rishabh Rajkumar Dubey ◽  
Nimish Arvind Datkhile ◽  
Jyoti Kolap
