HAND GESTURE RECOGNITION IN REAL-TIME

Author(s):  
Priyanshi Gupta ◽  
Amita Goel ◽  
Nidhi Sengar ◽  
Vashudha Bahl

Hand gestures form a language through which hearing people can communicate with deaf and mute people. Hand gesture recognition detects the hand pose and converts it to the corresponding letter or sentence. In recent years it has received great attention because of its applications. It relies on machine learning algorithms and is a prominent application of human-computer interaction. An emerging research field based on human-centered computing aims to understand human gestures and to integrate users and their social context with computer systems. One unique and challenging application in this framework is collecting information about dynamic human gestures. Keywords: TensorFlow, Machine learning, React.js, hand landmark model, MediaPipe
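As a rough illustration of the landmark-based pipeline the keywords suggest (a 21-point hand landmark model such as MediaPipe Hands feeding a classifier), the sketch below labels a toy static pose from normalized fingertip-to-wrist distances. The landmark indices follow MediaPipe's layout; the threshold and labels are illustrative assumptions, not the paper's actual code.

```python
# Toy static-gesture classifier over 21 (x, y) hand landmarks, as produced
# by a hand-landmark model such as MediaPipe Hands (indices follow its layout:
# 0 = wrist, 4/8/12/16/20 = thumb..pinky fingertips). Illustrative only.
import math

FINGERTIPS = [4, 8, 12, 16, 20]
WRIST = 0

def classify_pose(landmarks):
    """Label a pose 'open_palm' or 'fist' from fingertip-to-wrist distances,
    normalized by the wrist-to-middle-MCP distance (index 9) for scale."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    scale = dist(landmarks[WRIST], landmarks[9]) or 1e-6
    extended = sum(
        1 for tip in FINGERTIPS
        if dist(landmarks[tip], landmarks[WRIST]) / scale > 1.5  # heuristic threshold
    )
    return "open_palm" if extended >= 4 else "fist"

# Fake landmarks: all fingertips far from the wrist -> open palm
open_hand = [(0.0, 0.0)] * 21
open_hand[9] = (0.0, 1.0)        # middle-finger MCP sets the scale
for tip in FINGERTIPS:
    open_hand[tip] = (0.0, 2.0)  # all fingertips extended
print(classify_pose(open_hand))  # -> open_palm
```

A real system would run this per frame on the detector's landmark output and smooth the labels over time.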

Author(s):  
Abhishek Sharma ◽  
Shubham Sharma

Keywords: Covid-19, SIRD model, Linear Regression, XGBoost, Random Forest Regression, SVR, LightGBM, Machine learning, Intervention.


2020 ◽  
Vol 1 (3) ◽  
pp. 116-120
Author(s):  
Abhishek B. ◽  
Kanya Krishi ◽  
Meghana M. ◽  
Mohammed Daaniyaal ◽  
Anupama H. S.

Gesture recognition is an emerging topic in today’s technologies. Its main focus is to recognize human gestures using mathematical algorithms for human-computer interaction. Only a few modes of human-computer interaction exist: keyboard, mouse, touch screens, and so on. Each of these devices has its own limitations when it comes to adapting more versatile hardware to computers. Gesture recognition is one of the essential techniques for building user-friendly interfaces. Gestures can originate from any bodily motion or state, but they commonly come from the face or hand. Gesture recognition enables users to interact with devices without physically touching them. This paper describes how hand gestures are trained to perform actions such as switching pages and scrolling up or down a page.
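The last step of such a system, mapping a recognized gesture label to a UI action like the paper's page switching and scrolling, can be sketched as a simple dispatch table. The gesture labels and action names below are illustrative assumptions.

```python
# Minimal dispatch from a recognized gesture label to a UI action, in the
# spirit of the page-switching / scrolling use case. Labels and action
# names are illustrative, not from the paper.
ACTIONS = {
    "swipe_left":  "switch to previous page",
    "swipe_right": "switch to next page",
    "point_up":    "scroll up",
    "point_down":  "scroll down",
}

def handle_gesture(label):
    """Return the UI action for a gesture label, ignoring unknown gestures."""
    return ACTIONS.get(label, "no-op")

print(handle_gesture("point_down"))  # -> scroll down
```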


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1504
Author(s):  
Mohammed Asfour ◽  
Carlo Menon ◽  
Xianta Jiang

Force myography (FMG) is an emerging competitor to surface electromyography (sEMG) for hand gesture recognition. Most state-of-the-art research in this area explores different machine learning algorithms or feature engineering to improve recognition performance. This paper proposes a novel signal processing pipeline that employs a manifold learning method to produce a robust signal representation and boost the performance of hand gesture classifiers. We tested this approach on an FMG dataset collected from nine participants in three data collection sessions with short delays between them. The proposed pipeline was applied to each participant’s data, and different classification algorithms were then used to evaluate its effect compared with raw FMG signals. The results show that the pipeline reduced variance within the same gesture’s data and notably increased variance between different gestures, improving the robustness and temporal consistency of hand gesture classification. On top of that, the pipeline improved classification accuracy consistently regardless of the classifier, gaining an average of 5% in accuracy.
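The overall shape of such a pipeline, fitting a dimensionality-reducing embedding on the FMG feature vectors and transforming them before classification, can be sketched as below. PCA is used here only as a simple linear stand-in, since the abstract does not name the specific manifold learning method; the data shapes are also assumptions.

```python
# Sketch of the signal-representation pipeline shape described in the
# abstract: reduce raw FMG feature vectors to a compact embedding before
# classification. PCA stands in for the paper's (unspecified here)
# manifold learning step.
import numpy as np

def fit_pca(X, n_components=2):
    """Return (mean, components) from a centered SVD of X (samples x channels)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def transform(X, mean, components):
    """Project samples onto the learned components."""
    return (X - mean) @ components.T

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 8))   # e.g. 9 participants x 10 samples, 8 FMG channels
mean, comps = fit_pca(X, n_components=3)
Z = transform(X, mean, comps)  # embedding fed to any downstream classifier
print(Z.shape)                 # -> (90, 3)
```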


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1007
Author(s):  
Chi Xu ◽  
Yunkai Jiang ◽  
Jun Zhou ◽  
Yi Liu

Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning-based approach that jointly learns an intermediate-level shared feature for the two tasks, so that hand gesture recognition can benefit from hand pose estimation. In the training process, a semi-supervised training scheme is designed to address the lack of proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
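The joint objective behind such multi-task training, a shared feature feeding both a gesture-classification head and a pose-regression head, with the pose term dropped for samples lacking pose annotation (the semi-supervised case), can be sketched as a loss combination. The weighting rule below is an assumption, not the paper's actual scheme.

```python
# Illustrative shape of a joint training objective for gesture
# classification + 3D pose regression over a shared feature. The alpha
# weight and the annotation-skipping rule are assumptions.
def joint_loss(gesture_loss, pose_loss, has_pose_label, alpha=0.5):
    """Combine the two task losses; skip the pose term when the sample
    carries no pose annotation (the semi-supervised case)."""
    if not has_pose_label:
        return gesture_loss
    return gesture_loss + alpha * pose_loss

print(joint_loss(1.0, 0.4, has_pose_label=True))   # -> 1.2
print(joint_loss(1.0, 0.4, has_pose_label=False))  # -> 1.0
```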


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Samy Bakheet ◽  
Ayoub Al-Hamadi

Abstract: Robust vision-based hand pose estimation is highly sought after but remains a challenging task, partly because of self-occlusion among the hand fingers. In this paper, an innovative framework for real-time static hand gesture recognition is introduced, based on an optimized shape representation built from multiple shape cues. The framework incorporates a specific module for hand pose estimation based on depth-map data, where the hand silhouette is first extracted from the highly detailed and accurate depth map captured by a time-of-flight (ToF) depth sensor. A hybrid multi-modal descriptor that integrates multiple affine-invariant boundary-based and region-based features is created from the hand silhouette to obtain a reliable and representative description of individual gestures. Finally, an ensemble of one-vs.-all support vector machines (SVMs) is trained independently on each of these learned feature representations to perform gesture classification. When evaluated on a publicly available dataset containing a relatively large and diverse collection of egocentric hand gestures, the approach yields encouraging results that compare very favorably with those reported in the literature, while maintaining real-time operation.
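The one-vs.-all decision rule this ensemble implements, one binary scorer per gesture class, with the class of the highest decision value winning, can be sketched as below. The linear scorers stand in for the trained SVMs; the weights shown are toy values.

```python
# Sketch of a one-vs.-all decision rule: K binary scorers (one per gesture
# class) over a descriptor vector, with argmax of the decision values
# picking the class. Linear scorers stand in for trained SVMs.
import numpy as np

def ova_predict(weights, biases, x):
    """weights: (K, D), one row per class; biases: (K,). Returns class index."""
    scores = weights @ x + biases
    return int(np.argmax(scores))

W = np.array([[1.0, 0.0],    # class 0 fires on the first descriptor dim
              [0.0, 1.0]])   # class 1 fires on the second
b = np.zeros(2)
print(ova_predict(W, b, np.array([0.2, 0.9])))  # -> 1
```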


Hand gesture detection is one of the most prominent problems in machine learning and computer vision. Many machine learning techniques have been employed to solve the hand gesture recognition problem, with applications in sign language recognition, virtual reality, human-machine interaction, autonomous vehicles, driver assistance systems, and more. In this paper, the goal is to design a system that correctly identifies hand gestures from a dataset of hundreds of hand gesture images. To achieve this, a decision-fusion system built on transfer learning architectures is proposed. Two pretrained models, MobileNet and Inception V3, are used for this purpose. To find the region of interest (ROI) in the image, the YOLO (You Only Look Once) architecture is used, which also decides the type of model. Edge-map images and spatial images are trained using two separate versions of the MobileNet-based transfer learning architecture, and the final probabilities are then combined to decide the hand sign in the image. Simulation results show that the approach outperforms previously researched approaches in terms of classification accuracy.
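The probability-level decision fusion the abstract describes, two transfer-learning branches (one on spatial images, one on edge maps) each emitting class probabilities, combined into a final decision, can be sketched as below. Simple averaging is one common fusion rule; the paper's exact combination rule may differ.

```python
# Sketch of probability-level decision fusion: average the class-probability
# vectors from two model branches and take the argmax as the final hand sign.
# Averaging is an assumed rule, used here for illustration.
import numpy as np

def fuse_and_decide(p_spatial, p_edge):
    """Average two probability vectors and return the winning class index."""
    fused = (np.asarray(p_spatial) + np.asarray(p_edge)) / 2.0
    return int(np.argmax(fused))

print(fuse_and_decide([0.6, 0.3, 0.1], [0.2, 0.7, 0.1]))  # -> 1
```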


Author(s):  
Ali Moin ◽  
Andy Zhou ◽  
Abbas Rahimi ◽  
Alisha Menon ◽  
Simone Benatti ◽  
...  
