Real-time hand gesture recognition with EMG using machine learning

Author(s):  
Andrés Jaramillo-Yánez ◽  
Marco E. Benalcázar ◽  
Elisa Mena-Maldonado

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2467

Today, daily life is composed of many computing systems; therefore, interacting with them in a natural way makes the communication process more comfortable. Human–Computer Interaction (HCI) has been developed to overcome the communication barriers between humans and computers. One form of HCI is Hand Gesture Recognition (HGR), which predicts the class and the instant of execution of a given movement of the hand. One possible input for these models is surface electromyography (EMG), which records the electrical activity of skeletal muscles. EMG signals contain information about the intention of movement generated by the human brain. This systematic literature review analyzes the state of the art of real-time hand gesture recognition models that use EMG data and machine learning. We selected and assessed 65 primary studies following the Kitchenham methodology. Based on a common structure of machine learning-based systems, we analyzed the structure of the proposed models and standardized concepts in regard to the types of models, data acquisition, segmentation, preprocessing, feature extraction, classification, postprocessing, real-time processing, types of gestures, and evaluation metrics. Finally, we also identified trends and gaps that could open new directions of work for future research in the area of gesture recognition using EMG.
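The pipeline structure the review standardizes (segmentation, then feature extraction, then classification) can be illustrated with the classic time-domain EMG features that many of the surveyed models use. This is a minimal sketch, not taken from any one reviewed study; the window length, stride, and amplitude threshold are placeholder values.

```python
def emg_features(window, threshold=0.01):
    """Standard time-domain EMG features for one analysis window:
    MAV (mean absolute value), WL (waveform length),
    ZC (zero crossings), SSC (slope sign changes).
    The amplitude threshold suppresses noise-induced ZC/SSC counts."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    zc = sum(
        1
        for i in range(1, n)
        if window[i] * window[i - 1] < 0
        and abs(window[i] - window[i - 1]) >= threshold
    )
    ssc = sum(
        1
        for i in range(1, n - 1)
        if (window[i] - window[i - 1]) * (window[i] - window[i + 1]) > 0
        and (abs(window[i] - window[i - 1]) >= threshold
             or abs(window[i] - window[i + 1]) >= threshold)
    )
    return {"MAV": mav, "WL": wl, "ZC": zc, "SSC": ssc}


def sliding_windows(signal, length, stride):
    """Segment one EMG channel into (possibly overlapping) windows,
    as in the sliding-window segmentation step of the pipeline."""
    for start in range(0, len(signal) - length + 1, stride):
        yield signal[start:start + length]
```

Each window's feature vector would then feed whatever classifier a given study uses (SVM, ANN, k-NN, etc.), with postprocessing smoothing the per-window labels.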


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Samy Bakheet ◽  
Ayoub Al-Hamadi

Abstract
Robust vision-based hand pose estimation is highly sought after but remains a challenging task, partly due to the inherent difficulty caused by self-occlusion among hand fingers. In this paper, an innovative framework for real-time static hand gesture recognition is introduced, based on an optimized shape representation built from multiple shape cues. The framework incorporates a specific module for hand pose estimation based on depth map data, where the hand silhouette is first extracted from the extremely detailed and accurate depth map captured by a time-of-flight (ToF) depth sensor. A hybrid multi-modal descriptor that integrates multiple affine-invariant boundary-based and region-based features is created from the hand silhouette to obtain a reliable and representative description of individual gestures. Finally, an ensemble of one-vs.-all support vector machines (SVMs) is independently trained on each of these learned feature representations to perform gesture classification. When evaluated on a publicly available dataset incorporating a relatively large and diverse collection of egocentric hand gestures, the approach yields encouraging results that compare very favorably with those reported in the literature, while maintaining real-time operation.
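The one-vs.-all decision rule described above can be sketched independently of the SVM training itself: one binary scorer per gesture class, with the class of the most confident scorer winning. The class name and the lambda scorers below are hypothetical stand-ins for trained SVM decision functions operating on the hybrid shape descriptor.

```python
class OneVsAllEnsemble:
    """Minimal one-vs.-all ensemble: one binary scorer per gesture.

    Each scorer maps a feature vector to a signed margin
    (positive = "this is my gesture"); the predicted class is the
    one whose scorer reports the largest margin."""

    def __init__(self, scorers):
        # scorers: {class_label: callable(feature_vector) -> float}
        self.scorers = scorers

    def predict(self, features):
        margins = {label: score(features)
                   for label, score in self.scorers.items()}
        return max(margins, key=margins.get)
```

In the paper's setting, each scorer would be a trained SVM's decision function and `features` the multi-cue silhouette descriptor; here, dummy linear scorers suffice to show the voting rule.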


Author(s):  
Priyanshi Gupta ◽  
Amita Goel ◽  
Nidhi Sengar ◽  
Vashudha Bahl

Hand gestures form a language through which hearing people can communicate with deaf and mute people. Hand gesture recognition detects the hand pose and converts it to the corresponding alphabet letter or sentence. In recent years it has received great attention because of its applications, and it relies on machine learning algorithms. Hand gesture recognition is an important application of human–computer interaction: an emerging research field, based on human-centered computing, that aims to understand human gestures and to integrate users and their social context with computer systems. One unique and challenging application in this framework is collecting information about dynamic human gestures. Keywords: TensorFlow, Machine learning, React.js, handmark model, media pipeline
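The pose-to-alphabet step described above can be sketched with a deliberately simple nearest-template rule over hand-landmark vectors. This is an illustrative assumption, not the paper's trained model: the flattened landmark coordinates and the template letters below are hypothetical.

```python
import math


def classify_letter(landmarks, templates):
    """Map a flattened hand-landmark vector (e.g. 21 (x, y) points
    from a hand-tracking model) to the nearest stored template
    letter by Euclidean distance. A minimal stand-in for the
    trained recognizer that converts a pose to an alphabet letter."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    return min(templates, key=lambda letter: dist(landmarks, templates[letter]))
```

A real system would normalize the landmarks for scale and translation and use a learned classifier, but the mapping from pose vector to letter is the same shape of problem.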

