Computer Vision-Based Hand Gesture Recognition: A Survey

2019 ◽  
Vol 7 (5) ◽  
pp. 507-515
Author(s):  
Shaminder Singh ◽  
Anuj Kumar Gupta ◽  
Tejwant Singh
Author(s):  
Panagiotis Tsinganos ◽  
Bruno Cornelis ◽  
Jan Cornelis ◽  
Bart Jansen ◽  
Athanassios Skodras

Over the past few years, deep learning (DL) has revolutionized the field of data analysis. Not only have the algorithmic paradigms changed, but the performance in various classification and prediction tasks has also improved significantly with respect to the state of the art, especially in the area of computer vision. The progress made in computer vision has produced a spillover into many other domains, such as biomedical engineering. Some recent works are directed towards surface electromyography (sEMG) based hand gesture recognition, often addressed as an image classification problem and solved using tools such as Convolutional Neural Networks (CNN). This paper extends our previous work on the application of the Hilbert space-filling curve for the generation of image representations from multi-electrode sEMG signals by investigating how the Hilbert curve compares to the Peano and Z-order space-filling curves. The proposed space-filling mapping methods are evaluated on a variety of network architectures and in some cases yield a classification improvement of at least 3% when used to structure the inputs before feeding them into the original network architectures.
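To make the space-filling mapping concrete, below is a minimal Python sketch of how a one-dimensional sEMG window can be laid out along a Hilbert curve to form an image. The function names, window length, and curve order are illustrative assumptions rather than the authors' implementation; the Peano or Z-order variants would only change the index-to-coordinate function.

import numpy as np

def hilbert_d2xy(order, d):
    # Convert a 1-D index d into (x, y) coordinates on a 2^order x 2^order
    # Hilbert curve (standard iterative algorithm).
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant so the curve stays continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def signal_to_hilbert_image(window, order=3):
    # Lay a 1-D sEMG window of 2^(2*order) samples along the Hilbert curve,
    # so that temporally adjacent samples stay spatially close in the image.
    side = 1 << order
    img = np.zeros((side, side), dtype=np.float32)
    for d, value in enumerate(window[:side * side]):
        x, y = hilbert_d2xy(order, d)
        img[y, x] = value
    return img

# e.g. a 64-sample window per electrode -> one 8x8 image channel per electrode
image = signal_to_hilbert_image(np.random.randn(64), order=3)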


2016 ◽  
Vol 11 (1) ◽  
pp. 30-35
Author(s):  
Manoj Acharya ◽  
Dibakar Raj Pant

This paper proposes a method to recognize static hand gestures in an image or video where a person is performing Nepali Sign Language (NSL) and to translate them into words and sentences. Classification is carried out using a neural network, with the contour of the hand used as the feature. The work is successfully verified for NSL recognition using signer-dependency analysis.
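As a rough illustration of a contour-based feature pipeline of this kind, the sketch below segments a grayscale hand image with Otsu thresholding, extracts the largest contour with OpenCV, and resamples it to a fixed-length vector that a small neural network could classify. The thresholding choice, point count, and normalisation are assumptions, not the paper's exact procedure.

import cv2
import numpy as np

def contour_feature(gray_hand_image, n_points=64):
    # Segment the hand with Otsu thresholding and keep the largest external contour.
    _, mask = cv2.threshold(gray_hand_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1).astype(np.float32)
    # Resample to a fixed number of points so every sign yields a vector of equal length.
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    pts = contour[idx]
    pts -= pts.mean(axis=0)               # translation invariance
    pts /= (np.abs(pts).max() + 1e-6)     # scale invariance
    return pts.flatten()

# The resulting vectors can be fed to any small neural network, for example:
# from sklearn.neural_network import MLPClassifier
# clf = MLPClassifier(hidden_layer_sizes=(64,)).fit(X_train, y_train)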


2018 ◽  
Vol 218 ◽  
pp. 02014
Author(s):  
Arief Ramadhani ◽  
Achmad Rizal ◽  
Erwin Susanto

Computer vision is one of the fields of research that can be applied to a variety of subjects. One application of computer vision is the hand gesture recognition system. Hand gestures are one of the ways to interact with computers or machines. In this study, hand gesture recognition was used as a password for an electronic key system. The hand gesture recognition in this study utilized the depth sensor of the Microsoft Kinect Xbox 360. The depth sensor captured the hand image, which was segmented using a threshold. By scanning each pixel, we detected the thumb and the number of other open fingers. The hand gesture recognition result was used as a password to unlock the electronic key. This system could recognize nine types of hand gesture, representing the numbers 1 through 9. The average accuracy of the hand gesture recognition system was 97.78% for a single hand sign and 86.5% for a password of three hand signs.
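A minimal sketch of the same idea follows, assuming a Kinect-style depth frame in millimetres: the hand is segmented with a depth threshold and the finger count is estimated from convexity defects, a common substitute for the per-pixel scanning the paper describes. The depth range and defect cut-off are illustrative values, not the study's parameters.

import cv2
import numpy as np

def count_fingers(depth_mm, near=500, far=800):
    # Keep only pixels whose depth falls inside the assumed hand range (millimetres).
    mask = cv2.inRange(depth_mm, near, far)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep convexity defect is a valley between two extended fingers.
    valleys = sum(1 for start, end, farthest, depth in defects[:, 0] if depth / 256.0 > 20)
    return valleys + 1 if valleys else 0

# The recognised digit sequence can then be compared against the stored password.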


2014 ◽  
Vol 14 (01n02) ◽  
pp. 1450006 ◽  
Author(s):  
Mahmood Jasim ◽  
Tao Zhang ◽  
Md. Hasanuzzaman

This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. A Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods and classified using the nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using the novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. The longest common subsequence (LCS) algorithm is used to classify the dynamic gestures. For testing, a Chinese numeral gesture dataset containing static hand poses and a directional gesture dataset containing complex dynamic gestures are prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%. The mean accuracy of LBP-based static hand gesture recognition on the same dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDFs on the directional gesture dataset is 94%.
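To illustrate how a text-encoded gesture can be matched with the longest common subsequence, here is a small Python sketch; the direction alphabet, template strings, and function names are hypothetical, standing in for the paper's principal directional features.

def lcs_length(a, b):
    # Classic dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def classify_gesture(observed, templates):
    # Choose the template whose direction string shares the longest common
    # subsequence with the observed direction string.
    return max(templates, key=lambda label: lcs_length(observed, templates[label]))

# Hypothetical templates: each gesture is a string of per-frame principal directions.
templates = {"swipe_right": "RRRRRRRR", "circle": "RRDDLLUU"}
print(classify_gesture("RRDRLLUU", templates))   # -> circle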


Author(s):  
Seema Rawat ◽  
Praveen Kumar ◽  
Ishita Singh ◽  
Shourya Banerjee ◽  
Shabana Urooj ◽  
...  

Human-Computer Interaction (HCI) interfaces need unambiguous instructions in the form of mouse clicks or keyboard taps from the user and thus become cumbersome. To simplify this monotonous task, a real-time hand gesture recognition method using computer vision, image, and video processing techniques has been proposed. Controlling infections has become a major concern in healthcare environments. Input devices such as keyboards, mice, and touch screens can be a breeding ground for various micro-pathogens and bacteria. Direct use of the hands as an input device is an innovative method for providing natural HCI, ensuring minimal physical contact with devices, i.e., less transmission of bacteria, and thus can prevent cross-infections. A Convolutional Neural Network (CNN) has been used for object detection and classification. A CNN architecture for 3D object recognition has been proposed which consists of two models: 1) a detector, a CNN architecture for detection of gestures; and 2) a classifier, a CNN for classification of the detected gestures. By using dynamic hand gesture recognition to interact with the system, interactions can be enriched through the multidimensional use of hand gestures compared to other input methods. The dynamic hand gesture recognition method aims to replace the mouse for interaction with virtual objects. This work centres on implementing a method that employs computer vision algorithms and gesture recognition techniques to develop a low-cost interface device for interacting with objects in the virtual environment, such as screens, using hand gestures.
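A minimal PyTorch sketch of the two-model pattern described above, where a light detector gates a separate classifier, is given below. The layer sizes, class counts, and threshold are assumptions for illustration rather than the authors' architecture.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    # Tiny convolutional backbone reused by both stages; layer sizes are illustrative.
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

detector = SmallCNN(n_classes=2)     # stage 1: is a gesture present in this frame?
classifier = SmallCNN(n_classes=10)  # stage 2: which gesture is it?

def recognise(frame, threshold=0.5):
    # Run the heavier classifier only when the detector believes a gesture is present.
    with torch.no_grad():
        p_gesture = torch.softmax(detector(frame), dim=1)[:, 1]
        if p_gesture.item() < threshold:
            return None
        return classifier(frame).argmax(dim=1).item()

# frame: a (1, 3, H, W) float tensor taken from the camera stream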


Author(s):  
Md. Manik Ahmed ◽  
Md. Anwar Hossain ◽  
A F M Zainul Abadin

In recent years, hand gesture recognition has become one of the fast-growing technologies in the era of human-computer interaction and computer vision due to its wide range of real-world applications. However, recognizing hand gestures reliably is a complicated task due to gesture orientation, lighting conditions, complex backgrounds, and translation and scaling of gesture images. To address these limitations, several research works have been developed which successfully reduce this complexity. The intention of this paper is to propose and compare four different hand gesture recognition systems and to apply optimization techniques to them which considerably improve the accuracy and running time of the existing models. After employing the optimization tricks, the adjusted gesture recognition model reached an accuracy of 93.21% and a running time of 224 seconds, which was 2.14% more accurate and 248 seconds faster than an existing similar hand gesture recognition model. The overall achievement of this paper could be applied to smart home control, camera control, robot control, medical systems, natural interaction, and many other fields in computer vision and human-computer interaction.


Hand gesture recognition is a challenging undertaking in the field of human-computer interaction (HCI) and computer vision. Ten years ago, the task appeared to be practically unsolvable with the data provided by a single RGB camera. In this work, we have implemented a reasonably accurate strategy to recognize static gestures in image frames taken from a live camera or video data. As hand gesture recognition is tied to the two major fields of image processing and machine learning (AI), this report also reviews the different tools and APIs that can be used to implement different strategies and methods in these fields.

