Gesture Recognition

Author(s):  
Nandhini Kesavan ◽  
Raajan N. R.

The main objective of gesture recognition is to advance the technology behind the automated interpretation of registered gestures through a versatile fusion of multidimensional data. To achieve this goal, computers should be able to visually recognize hand gestures from video input. However, vision-based hand tracking and gesture recognition is an extremely challenging problem because hand gestures are highly diverse, owing to the many degrees of freedom of the human hand. Solving it would make the world a better place for everyone, not only to live in but also to communicate with ease. This research work is intended to serve as a pharos for researchers in the field of smart vision and to benefit society in versatile ways.

Author(s):  
Tianyun Yuan ◽  
Yu Song ◽  
Gerald A. Kraan ◽  
Richard HM Goossens

Measuring the motions of human hand joints is often a challenge due to the high number of degrees of freedom. In this study, we proposed a hand tracking system that uses action cameras and ArUco markers to continuously measure the rotation angles of hand joints. Three methods were developed to estimate the joint rotation angles. The pos-based method transforms marker positions to a reference coordinate system (RCS) and extracts a hand skeleton to identify the rotation angles. Similarly, the orient-x-based method calculates the rotation angles from the transformed x-orientations of the detected markers in the RCS. In contrast, the orient-mat-based method first identifies the rotation angles in each camera coordinate system using the detected orientations and then synthesizes the results for each joint. Experimental results indicated that the repeatability errors with one camera, across different marker sizes, were around 2.64 to 27.56 degrees using marker positions and 0.60 to 2.36 degrees using marker orientations. When multiple cameras were employed, the joint rotation angles measured by the three methods were comparable with those measured by a goniometer, although larger deviations occurred with the pos-based method. Further analysis indicated that the orient-mat-based method can describe more types of joint rotations, and its effectiveness was verified by capturing the hand movements of several participants. It is therefore recommended for measuring joint rotation angles in practical setups.
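As an illustration of the marker-orientation idea above, the following minimal Python sketch estimates one joint angle from two ArUco markers attached to adjacent hand segments. It assumes the older OpenCV aruco contrib API (pre-4.7); the marker size, camera intrinsics, and marker IDs are placeholder assumptions, not values from the study.

```python
# Hedged sketch: joint angle from the x-axes of two ArUco markers on
# adjacent hand segments, in the spirit of the orientation-based methods.
import numpy as np
import cv2

MARKER_LEN = 0.008                        # marker side length in metres (assumed)
camera_matrix = np.array([[800., 0., 320.],
                          [0., 800., 240.],
                          [0., 0., 1.]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def joint_angle(frame, id_proximal, id_distal):
    """Angle (degrees) between the x-axes of two detected markers."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    rvecs, _, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LEN, camera_matrix, dist_coeffs)
    axes = {}
    for rvec, mid in zip(rvecs, ids.flatten()):
        R, _ = cv2.Rodrigues(rvec)        # marker rotation matrix
        axes[mid] = R[:, 0]               # marker x-axis in camera coordinates
    if id_proximal not in axes or id_distal not in axes:
        return None
    cosang = np.clip(np.dot(axes[id_proximal], axes[id_distal]), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```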


Author(s):  
Pranjali Manmode ◽  
Rupali Saha ◽  
Manisha N. Amnerkar

With the rapid development of computer vision, the demand for human-machine interaction is becoming ever more extensive. Since hand gestures can express rich information, hand gesture recognition is widely used in robot control, intelligent furniture, and other applications. This paper achieves hand gesture segmentation by establishing a skin color model and a Haar-based AdaBoost classifier, exploiting the distinctive skin color of hand gestures and their deformability, with single video frames extracted for analysis. In this way, the human hand is segmented from a complicated background. The CamShift algorithm then provides real-time hand gesture tracking. Finally, the hand gesture region detected in real time is recognized by a convolutional neural network, enabling the recognition of 10 common digits. Experiments show 98.3% accuracy.
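The tracking stage described above can be sketched with standard OpenCV primitives: a skin-color histogram drives back-projection, and CamShift follows the resulting probability map frame by frame. The HSV skin range and initial window below are illustrative assumptions.

```python
# Hedged sketch: skin-colour back-projection feeding OpenCV's CamShift.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
track_window = (200, 150, 120, 120)       # assumed initial hand region (x, y, w, h)
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 30, 60), (20, 150, 255))   # rough skin range
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = np.int32(cv2.boxPoints(ret))    # rotated box around the tracked hand
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(30) == 27:             # Esc to quit
        break
cap.release()
```

The cropped track window would then be passed to the CNN classifier for digit recognition.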


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2562 ◽  
Author(s):  
Hobeom Han ◽  
Sang Won Yoon

Human hand gestures are a widely accepted form of real-time input for devices providing a human-machine interface. However, hand gestures have limitations in effectively conveying the complexity and diversity of human intentions. This study addresses these limitations by proposing a multi-modal input device, based on the observation that each application program requires different user intentions (and demands different functions) and that the machine already knows which application is running. When the running application changes, the same gesture triggers a new function required by that application, which greatly reduces the number and complexity of the required hand gestures. As a simple wearable sensor, we employ one miniature wireless three-axis gyroscope, whose data are processed by correlation analysis with normalized covariance for continuous gesture recognition. Recognition accuracy is improved by considering both gesture patterns and signal strength and by incorporating a learning mode. In our system, six unit hand gestures successfully provide most functions offered by multiple input devices. The characteristics of our approach are adjusted automatically by recognizing the running application program or by learning user preferences. Across three application programs, the approach shows good accuracy (90–96%), which is very promising for designing a unified solution. Furthermore, the accuracy reaches 100% as users become more familiar with the system.
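The correlation analysis with normalized covariance can be illustrated as follows: a sliding window of three-axis gyroscope samples is compared against stored gesture templates using the Pearson correlation averaged over the axes. The template dictionary, window length, and acceptance threshold are assumptions for the sketch.

```python
# Hedged sketch: template matching with normalized covariance (Pearson
# correlation) for continuous gyroscope-gesture recognition.
import numpy as np

def normalized_covariance(a, b):
    """Mean Pearson correlation of two N x 3 signals across the three axes."""
    scores = []
    for axis in range(3):
        x = a[:, axis] - a[:, axis].mean()
        y = b[:, axis] - b[:, axis].mean()
        denom = np.sqrt((x * x).sum() * (y * y).sum())
        scores.append((x * y).sum() / denom if denom > 0 else 0.0)
    return float(np.mean(scores))

def classify(window, templates, threshold=0.8):
    """Return the best-matching gesture name, or None below the threshold."""
    best_name, best_score = None, threshold   # threshold is an assumption
    for name, tpl in templates.items():
        s = normalized_covariance(window, tpl)
        if s > best_score:
            best_name, best_score = name, s
    return best_name

# Usage: 'window' is the latest N x 3 block of gyroscope samples; the
# templates would be recorded once per unit gesture in a learning mode.
```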


2015 ◽  
Vol 2015 ◽  
pp. 1-15
Author(s):  
Weihua Liu ◽  
Yangyu Fan ◽  
Zuhe Li ◽  
Zhong Zhang

We consider the task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video. For hand tracking, a joint observation model combining cues of skin saliency, motion, and depth is integrated into a particle filter to move the particles toward local peaks of the likelihood. The proposed hand tracking method, the salient skin, motion, and depth based particle filter (SSMD-PF), considerably improves tracking accuracy when the signer performs gestures toward the camera in front of moving, cluttered backgrounds. For gesture recognition, a shape-order context descriptor based on shape context is introduced, which describes the gesture in the spatiotemporal domain. This efficient descriptor reveals shape relationships and embeds the order of the gesture sequence into the descriptor. Moreover, the shape-order context yields a robust matching score that is invariant to gesture variations. Our approach is validated by experiments on the challenging hand-signed digits datasets and an American Sign Language dataset, which corroborate the performance of the proposed techniques.
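A minimal skeleton of such a multi-cue particle filter might look as follows; the skin, motion, and depth saliency maps are placeholders for the observation model described in the abstract, and the random-walk motion model and particle count are assumptions.

```python
# Hedged skeleton of a particle filter with a joint skin/motion/depth
# observation model, in the spirit of SSMD-PF.
import numpy as np

N = 300                                   # number of particles (assumed)

def step(particles, weights, skin_map, motion_map, depth_map):
    """One predict-weight-resample cycle; particles are (x, y) positions."""
    # Predict: random-walk motion model (assumed).
    particles = particles + np.random.normal(0, 5, particles.shape)
    xs = particles[:, 0].astype(int).clip(0, skin_map.shape[1] - 1)
    ys = particles[:, 1].astype(int).clip(0, skin_map.shape[0] - 1)
    # Weight: joint likelihood as the product of the three cue maps.
    weights = skin_map[ys, xs] * motion_map[ys, xs] * depth_map[ys, xs] + 1e-12
    weights /= weights.sum()
    # Resample: draw particles in proportion to weight, concentrating
    # the cloud around local likelihood peaks.
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

# The hand position estimate is the weighted mean of the particle cloud.
```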


2021 ◽  
Vol 10 (4) ◽  
pp. 2223-2230
Author(s):  
Aseel Ghazi Mahmoud ◽  
Ahmed Mudheher Hasan ◽  
Nadia Moqbel Hassan

Recently, the recognition of human hand gestures has become a valuable technology for various applications such as sign language recognition, virtual games, robotics control, video surveillance, and home automation. Owing to the recent development of deep learning and its excellent performance, deep learning-based hand gesture recognition systems can provide promising results. However, accurate recognition of hand gestures remains a substantial challenge for most existing recognition systems. In this paper, a convolutional neural network (CNN) framework with multiple layers is proposed for accurate, effective, and less complex human hand gesture recognition. Since near-infrared images of hand gestures can provide accurate gesture information in low-illumination environments, the proposed system is tested and evaluated on a near-infrared hand gesture database that includes ten gesture poses. Extensive experiments show that the proposed system provides excellent accuracy, precision, sensitivity (recall), and F1-score. Furthermore, a comparison with recently published systems is reported.
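A multi-layer CNN of the kind described can be sketched in a few lines of Keras; the layer sizes and the 64x64 single-channel input are illustrative assumptions rather than the paper's exact architecture.

```python
# Hedged sketch: a small CNN for ten-class near-infrared gesture recognition.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu",
                  input_shape=(64, 64, 1)),   # grayscale IR image (assumed size)
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),   # ten gesture poses
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Precision, recall, and F1-score can then be computed from the confusion
# matrix on a held-out test split.
```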


2015 ◽  
Vol 786 ◽  
pp. 378-382 ◽  
Author(s):  
Megalingam Rajesh Kannan ◽  
Menon Deepansh ◽  
Ajithkumar Nitin ◽  
Saboo Nihil

This research work targets building and analyzing a robotic arm that mimics the motion of the user's arm. The proposed system monitors the motion of the user's arm using a Kinect. Using the "Kinect Skeletal Image" project of the Kinect SDK, a skeletal image of the arm is obtained, consisting of three joints and the links connecting them. 3-D coordinate geometry techniques are used to compute the angles between the links, which correspond to the angles made by the different segments of the human arm. In this work we present the capture of human hand gestures by the Kinect and their analysis with suitable algorithms to identify the joints and angles. The Arduino-based microcontroller used for processing the Kinect data is also presented.
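The 3-D coordinate geometry step reduces to the angle between two link vectors meeting at a joint, as in the following sketch; the joint coordinates are placeholders for the Kinect SDK skeletal-tracking output.

```python
# Hedged sketch: the angle at a joint is the angle between the two link
# vectors it connects, computed from the dot product.
import numpy as np

def link_angle(a, b, c):
    """Angle at joint b (degrees) between links b->a and b->c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Placeholder skeletal coordinates (metres) for shoulder, elbow, wrist:
shoulder, elbow, wrist = (0.0, 0.4, 2.1), (0.1, 0.1, 2.0), (0.4, 0.1, 1.8)
print(link_angle(shoulder, elbow, wrist))   # elbow flexion angle
# The resulting angle would then be sent (e.g. over serial) to the
# Arduino-based controller to drive the corresponding servo of the arm.
```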


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 137
Author(s):  
Larisa Dunai ◽  
Martin Novak ◽  
Carmen García Espert

The present paper describes the development of a prosthetic hand based on human hand anatomy. The hand phalanges are 3D-printed in polylactic acid (PLA). One of the main contributions is the investigation of the prosthetic hand joints: the proposed design enables the creation of personalized joints that give the prosthetic hand a high level of mobility by increasing the degrees of freedom of the fingers. Moreover, the wire-driven tendons produce a progressive grasping movement, with very low friction between the tendons and the phalanges. Another important point is the use of force-sensitive resistors (FSRs) to emulate the hand's touch pressure; these are used to stop the grasp, simulating the touch pressure of the fingers. Surface electromyogram (EMG) sensors allow the user to trigger the start of the prosthetic hand's grasp, and their use may also enable classification of hand movements. The practical results included in the paper prove the importance of the soft joints for manipulating objects and adapting to their surfaces. Finally, the force-sensitive sensors allow the prosthesis to actuate more naturally by adding conditions and classifications to the EMG signal.
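The grasp logic combining the EMG trigger and the FSR stop condition can be sketched as a simple control loop; the sensor-reading functions and thresholds below are hypothetical placeholders for the prosthesis firmware's actual interface.

```python
# Hedged sketch: EMG activation starts finger flexion, and the FSR reading
# stops it once contact pressure is reached. read_emg(), read_fsr(), and
# the thresholds are hypothetical placeholders, not the paper's values.
EMG_ON = 0.6        # normalized EMG activation threshold (assumed)
FSR_STOP = 0.4      # normalized contact-pressure threshold (assumed)

def control_step(read_emg, read_fsr, flex_fingers, hold):
    """One cycle of the grasp loop for a tendon-driven finger."""
    if read_emg() > EMG_ON:          # user intends to grasp
        if read_fsr() < FSR_STOP:    # fingertip not yet in firm contact
            flex_fingers()           # keep pulling the wire tendon
        else:
            hold()                   # contact reached: stop, keep tension
```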


Author(s):  
Hezhen Hu ◽  
Wengang Zhou ◽  
Junfu Pu ◽  
Houqiang Li

Sign language recognition (SLR) is a challenging problem, involving complex manual features (i.e., hand gestures) and fine-grained non-manual features (NMFs) (i.e., facial expressions, mouth shapes, etc.). Although manual features are dominant, non-manual features also play an important role in the expression of a sign word. Specifically, many sign words convey different meanings through non-manual features even though they share the same hand gestures. This ambiguity introduces great challenges in the recognition of sign words. To tackle this issue, we propose a simple yet effective architecture called the Global-Local Enhancement Network (GLE-Net), comprising two mutually promoted streams aimed at different crucial aspects of SLR: one stream captures the global contextual relationship, while the other captures discriminative fine-grained cues. Moreover, because of the lack of datasets explicitly focusing on such features, we introduce the first non-manual-feature-aware isolated Chinese sign language dataset (NMFs-CSL), with a total vocabulary of 1,067 daily-life sign words. Extensive experiments on the NMFs-CSL and SLR500 datasets demonstrate the effectiveness of our method.
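A two-stream design of this kind can be sketched in PyTorch as follows; this is a minimal illustration with placeholder encoders and late fusion by summing logits, not the authors' GLE-Net architecture.

```python
# Hedged sketch: one stream encodes the full clip (global context), the
# other a crop carrying fine-grained cues (e.g. the face region), with
# their class logits fused by summation.
import torch
import torch.nn as nn

class TwoStreamSLR(nn.Module):
    def __init__(self, num_classes=1067, feat_dim=512):
        super().__init__()
        # Placeholder video encoders; a real system would use 3D CNNs
        # or transformer backbones here.
        self.global_stream = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.LazyLinear(feat_dim))
        self.local_stream = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.LazyLinear(feat_dim))
        self.global_head = nn.Linear(feat_dim, num_classes)
        self.local_head = nn.Linear(feat_dim, num_classes)

    def forward(self, full_clip, local_crop):
        # Inputs: (batch, channels, time, height, width) video tensors.
        g = self.global_head(self.global_stream(full_clip))
        l = self.local_head(self.local_stream(local_crop))
        return g + l                  # late fusion of the two streams
```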

