Design and Development of IoT Device that Recognizes Hand Gestures using Sensors

Author(s):  
Santosh Kumar J, Vamsi, Vinod, Madhusudhan and Tejas

A hand gesture is a non-verbal means of communication that uses the motion of the fingers to convey information. Hand gestures form the basis of sign language, a primary means of communication for deaf and mute people, and are also used to control devices. The purpose of gesture recognition in devices has always been to bridge the gap between the physical and digital worlds: the way humans interact with one another can be carried over to the digital world through gesture-recognition algorithms. Gestures can be tracked using gyroscopes, accelerometers, and other sensors. In this project, we aim to provide a cost-effective electronic method for hand gesture recognition; the system uses flex sensors and an ESP32 board. A flex sensor works on the principle of a change in internal resistance to detect the angle made by the user's finger at any given time. The flexes made by the hand in different combinations amount to a gesture, and this gesture can be converted into signals or displayed as text on a screen. A smart glove is designed that is equipped with custom-made flex sensors to detect the gestures and convert them to text, with an ESP32 board used to process the readings from the flex sensors. This helps machines identify human sign language, perform the corresponding task, or identify a word through hand gestures and respond accordingly.
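
A minimal sketch of the sensing step this abstract describes, written in MicroPython for the ESP32. The pin number and the flat/bent calibration readings are assumptions for illustration, not values from the paper:

```python
# Hypothetical sketch (MicroPython on ESP32): read one flex sensor on an
# ADC pin and map its resistance change to an approximate bend angle.
from machine import ADC, Pin
import time

FLAT_READING = 1800   # assumed ADC reading with the finger straight
BENT_READING = 3200   # assumed ADC reading with the finger fully bent

adc = ADC(Pin(34))            # ADC1 channel on GPIO34 (input-only pin)
adc.atten(ADC.ATTN_11DB)      # full 0-3.3 V input range

def bend_angle(raw):
    """Linearly interpolate the raw ADC value to a 0-90 degree bend."""
    span = BENT_READING - FLAT_READING
    angle = (raw - FLAT_READING) * 90 / span
    return max(0, min(90, angle))

while True:
    raw = adc.read()          # 12-bit reading, 0-4095
    print("raw:", raw, "angle:", round(bend_angle(raw), 1))
    time.sleep_ms(200)
```

In a full glove, one such reading per finger would be thresholded into bent/straight states, and the combination of states looked up as a gesture to display as text.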

2016, Vol 11 (1), pp. 30-35
Author(s):  
Manoj Acharya, Dibakar Raj Pant

This paper proposes a method to recognize static hand gestures in an image or video of a person performing Nepali Sign Language (NSL) and to translate them into words and sentences. Classification is carried out using a neural network, with the contour of the hand as the feature. The work is verified successfully for NSL recognition using signer-dependency analysis.
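
An illustrative sketch of the contour-as-feature idea: extract the hand contour with OpenCV, resample it to a fixed-length vector, and feed it to a small neural network. The dataset loading and NSL labels are placeholders, not the paper's code:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def contour_feature(binary_img, n_points=64):
    """Largest contour, resampled to n_points (x, y) pairs and flattened."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    hand = max(contours, key=cv2.contourArea).squeeze()      # (N, 2)
    idx = np.linspace(0, len(hand) - 1, n_points).astype(int)
    pts = hand[idx].astype(np.float32)
    pts -= pts.mean(axis=0)                # translation invariance
    pts /= (np.abs(pts).max() + 1e-6)      # scale invariance
    return pts.flatten()

# X: list of binarized hand images, y: NSL gesture labels (placeholders)
# X_feat = np.array([contour_feature(img) for img in X])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_feat, y)
```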


2020, Vol 7 (2), pp. 164
Author(s):  
Aditiya Anwar, Achmad Basuki, Riyanto Sigit

Hand gestures are a means of communication between deaf people and others, and each hand gesture has a different meaning. To communicate better, we need an automatic translator that can recognize hand movements as words or sentences when communicating with deaf people. This paper proposes a system to recognize hand gestures based on the Indonesian Sign Language standard. The system uses the Myo Armband as the hand gesture sensor; the Myo Armband has 21 sensors to capture hand gesture data. The recognition process uses a Support Vector Machine (SVM) to classify hand gestures against a dataset of the Indonesian Sign Language standard. The SVM yields an accuracy of 86.59% in recognizing hand gestures as sign language.

Keywords: Hand Gesture Recognition, Feature Extraction, Indonesian Sign Language, Myo Armband, Moment Invariant
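
A minimal sketch of the classification stage: an SVM over feature vectors derived from the armband's sensor channels. The feature extraction, shapes, and gesture labels below are placeholders, not the paper's pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assume each sample is a feature vector built from the armband's sensor
# channels (e.g., EMG and IMU statistics); the data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))          # 500 samples, 21 sensor features
y = rng.integers(0, 10, size=500)       # 10 placeholder gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```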


Hand gesture recognition is a challenging task in machine vision due to the similarity between inter-class samples and the high variation within intra-class samples. Gesture recognition that is independent of light intensity and color has drawn attention because such systems must also perform at night. This paper provides an insight into dynamic hand gesture recognition using depth data and images collected from a time-of-flight (ToF) camera, and provides a user interface to track natural gestures. The area of interest and the hand region are first segmented using adaptive thresholding and region labeling, under the assumption that the hand is the closest object to the camera. A novel algorithm is proposed to segment the hand region only, and preprocessing algorithms eliminate the noise inherent in ToF camera measurements. We propose two algorithms for extracting hand gesture features. The first computes the distance between finger regions: for every row and column, the distance between two independent regions is computed and the number of region transitions is counted; these transition counts across rows and columns form the feature vector. The second computes a shape descriptor of the gesture boundary radially from the centroid of the hand: the distance between the gesture centroid and the shape boundary is computed at angles from 0 to 360 degrees, and these distances form the feature vector. The proposed solution easily handles both static and dynamic gestures. Comparison of results shows that this method is very effective at extracting shape features and is competent in terms of accuracy and speed. The gesture recognition algorithms described here can be used in automotive infotainment systems and consumer electronics, where hardware needs to be cost-effective and the system's response must be fast.
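
A sketch of the two feature ideas on a binary hand mask: counting region transitions along every row and column, and sampling centroid-to-boundary distances over 0-360 degrees. Details such as the angular sampling are assumptions:

```python
import numpy as np

def transition_features(mask):
    """Number of 0->1 / 1->0 transitions per row and per column."""
    m = (mask > 0).astype(np.int8)
    row_t = np.abs(np.diff(m, axis=1)).sum(axis=1)   # one count per row
    col_t = np.abs(np.diff(m, axis=0)).sum(axis=0)   # one count per column
    return np.concatenate([row_t, col_t])

def radial_features(mask, n_angles=360):
    """Distance from the mask centroid to the boundary at each angle."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    theta = np.degrees(np.arctan2(ys - cy, xs - cx)) % 360
    feats = np.zeros(n_angles)
    for ang, dist in zip(theta.astype(int), r):
        feats[ang] = max(feats[ang], dist)   # farthest boundary pixel
    return feats
```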


Author(s):  
U. Mamatha

Sign language is used by deaf and mute people, but non-signers cannot understand it; to overcome this problem, we propose this system using Python. First, hand gestures are captured using a web camera. Each image is pre-processed and features are extracted from it, and the extracted features are compared with those of a reference image. If they match, a decision is taken and displayed as text. This helps non-signers understand the gestures easily, using a convolutional neural network (CNN) built with TensorFlow.
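
A minimal TensorFlow/Keras CNN of the kind described: webcam gesture images in, a text label out. The image size, class count, and training data are placeholders:

```python
import tensorflow as tf

NUM_CLASSES = 26          # assumed: one class per sign-language letter
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),       # preprocessed grayscale
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)   # placeholder data
```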


Author(s):  
K M Bilvika, Sneha B K, Sahana K M, Tejaswini S M Patil

In human-computer interaction and sign language interpretation, hand gesture recognition and face detection have become predominant topics in computer vision research. The primary goal of the proposed system is to identify hand gestures and detect faces in order to convey information for controlling a media player. For those who are deaf and mute, sign language is a common, efficient, and alternative way of talking; by using hand and facial gestures, we can easily understand them. Here, the hand and face are used directly as input to the device, so no intermediate medium is needed for gesture identification and effective communication.
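
A hedged sketch of such a control loop: detect a face with OpenCV's bundled Haar cascade to confirm a user is present, then map a recognized gesture label to a media-player command. The gesture classifier itself and the command bindings are placeholders, not the authors' design:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

COMMANDS = {"palm": "play/pause", "fist": "stop",
            "swipe_left": "previous", "swipe_right": "next"}  # assumed map

def handle_frame(frame, classify_gesture):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None                     # no user facing the camera
    gesture = classify_gesture(frame)   # placeholder gesture recognizer
    return COMMANDS.get(gesture)        # media action, or None
```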


Author(s):  
Srinivas K, Manoj Kumar Rajagopal

The goal is to recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices, including the Kinect, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), dynamic time warping, latent regression forests, support vector machines, and surface electromyography. Movements made by both single and double hands are captured by gesture capture devices under proper illumination conditions. The captured gestures are processed for occlusions and close finger interactions to identify the right gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms, such as HMMs, to detect only the intended gesture. Classified gestures are then compared for effectiveness against standard training and test datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
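
Since dynamic time warping (DTW) is one of the frameworks this survey lists, here is a small self-contained sketch of it: DTW aligns two gesture trajectories of different lengths and returns their alignment cost, which a nearest-template classifier can then use. The template names are placeholders:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW cost between two trajectories, each an (n, d) array of frames."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Nearest-template classification: label a query by its closest exemplar.
# templates = {"wave": wave_traj, "circle": circle_traj}   # placeholders
# label = min(templates, key=lambda k: dtw_distance(query, templates[k]))
```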


2019, Vol 8 (2S11), pp. 2624-2629

As a diverse nation with numerous regional tongues, India has struggled to adopt an official, standardized sign language, whereas Indo-Pakistani Sign Language is viewed as the prevalent variety used in South Asia. Many who are deaf or hard of hearing rely on sign language to communicate. However, estimates of sign language use are unsophisticated, and definitions of what counts as proficiency vary and depend on many factors. Many existing systems use shape parameters such as orientation and palm centroid, data gloves with five accelerometer sensors, or optical markers that reflect infrared light to recognize the hand gestures of sign language. Background subtraction techniques used in these systems include K-means clustering, boundary contours, and eigenbackgrounds using eigenvalues, with wireless and Bluetooth technology connecting the software that transmits the recognized hand gesture signals. These systems are not cost-effective, and their accuracy does not meet the need. In our proposed system, we concentrate mainly on converting hand gestures to text using a contour tracing technique to recognize hand gestures with an ordinary webcam. The semantics are classified by a support vector machine with trained datasets, and the recognized hand gestures are displayed as text. Our main objective is to address the difficulty vocally impaired individuals face when being interviewed; compared to other methods, this helps them build confidence and overcome feelings of inferiority.
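
A rough sketch of such a webcam pipeline: threshold the frame, trace the largest contour, describe it compactly (Hu moments are used here as one common contour descriptor), and classify with a pre-trained SVM. The thresholding scheme, descriptor choice, and trained model are assumptions, not the paper's exact method:

```python
import cv2
import numpy as np

def gesture_to_text(frame, svm_classifier, labels):
    """Return the text label for the hand gesture in a single BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return ""
    hand = max(contours, key=cv2.contourArea)          # traced hand contour
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # usual log scaling
    pred = svm_classifier.predict(hu.reshape(1, -1))[0]
    return labels[pred]                                # gesture as text
```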


Author(s):  
Sukhendra Singh, G. N. Rathna, Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most other people, which creates a communication barrier. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect can capture 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', which a normal web camera cannot distinguish. We used hand gestures from Indian Sign Language; our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ Z2 board.
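
A sketch of the reported setup: an 80/20 train/test split over depth images and a small CNN over single-channel depth input. The array shapes, synthetic data, and model architecture are placeholders, and any PYNQ Z2 deployment step is omitted:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder depth data: (N, 64, 64, 1) float array, 36 gesture classes.
depth_images = np.random.rand(1000, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 36, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(
    depth_images, labels, test_size=0.2, random_state=0)   # 80/20 split

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),   # single depth channel
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(36, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=5, validation_data=(X_te, y_te))
```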

