Hand Gesture Recognition and Voice Conversion for Hearing and Speech Aided Community

Author(s):  
Divya K V ◽  
Harish E ◽  
Nikhil Jain D ◽  
Nirdesh Reddy B

Sign language recognition (SLR) aims to interpret sign languages automatically by computer in order to help deaf people communicate with hearing society conveniently. Our aim is to design a system that helps hearing- and speech-impaired people communicate with the rest of the world using sign language and hand gesture recognition techniques. In this system, feature detection and feature extraction of hand gestures are performed using image processing, and classification is carried out with Support Vector Machine (SVM), K-Nearest Neighbors, Logistic Regression, MLP classifier, Naive Bayes, and Random Forest algorithms.
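As a minimal illustration of one of the classifiers named above, here is a pure-Python k-nearest-neighbours sketch. The feature vectors and labels are invented for demonstration and are not from the paper's dataset; a real system would extract such features from images.

```python
import math

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance)."""
    dists = sorted(
        (math.dist(feat, query), label) for feat, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy features: (mean finger spread, palm aspect ratio) -- illustrative only
train = [
    ((0.90, 0.40), "open_palm"), ((0.85, 0.45), "open_palm"),
    ((0.10, 0.90), "fist"),      ((0.15, 0.85), "fist"),
    ((0.50, 0.60), "point"),     ((0.55, 0.65), "point"),
]

print(knn_predict(train, (0.88, 0.42)))  # open_palm
```

The same train/predict interface applies to the other listed classifiers, which is why libraries such as scikit-learn make it easy to swap them in a comparison loop.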

Author(s):  
Srinivas K ◽  
Manoj Kumar Rajagopal

To recognize different hand gestures and achieve efficient classification to understand static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect device, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), the dynamic time warping framework, latent regression forests, support vector machines, and surface electromyogram. Hand movements made by both single and double hands are captured by gesture capture devices under proper illumination conditions. These captured gestures are processed for occlusions and close finger interactions to identify the right gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms such as HMM to detect only the intended gesture. Classified gestures are then compared for effectiveness with trained and tested standard datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
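Of the algorithms listed, dynamic time warping is the simplest to sketch: it aligns two gesture trajectories that differ in speed. The following pure-Python version on 1-D sequences (toy data, not from any of the surveyed devices) shows why a time-stretched copy of a gesture still matches its template.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-stretched copy of a gesture track aligns at zero cost...
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))        # 0.0
# ...while a different trajectory does not.
other = [3, 2, 1, 0]
print(dtw_distance(slow, other) > 0)   # True
```

Real systems would run this on multi-dimensional joint or sensor trajectories and typically add a warping-window constraint for speed.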


The aim is to present a real-time system for hand gesture recognition based on the detection of meaningful shape-based features such as orientation, center of mass, the status of fingers in terms of raised or folded fingers, and their respective locations in the image. A hand gesture recognition system has various real-time applications as a natural, innovative, and user-friendly way of interacting with the computer. Gesture recognition has a wide area of application, including human-machine interaction, sign language, game technology, and robotics. More specifically, hand gestures are used as a signal or input means given to the computer, especially by disabled persons. As an interesting part of human-computer interaction, hand gesture recognition is needed for real-life applications, but the complex structure of the human hand presents many challenges for tracking and feature extraction. Making use of computer vision algorithms and gesture recognition techniques will result in low-cost interface devices that use hand gestures for interacting with objects in virtual environments. SVM (support vector machine) and an efficient feature extraction technique are presented for hand gesture recognition. This method deals with the dynamic aspects of a hand gesture recognition system.
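To make the shape-based features concrete, here is a pure-Python sketch of two of them, center of mass and a crude raised-finger count, computed on a tiny hand-crafted binary mask. The mask and the `top_rows` heuristic are invented for illustration; the paper's actual feature extraction would operate on segmented camera frames.

```python
def center_of_mass(mask):
    """Centroid (row, col) of the foreground pixels in a binary mask."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

def raised_columns(mask, top_rows=2):
    """Count distinct columns with foreground in the top rows -- a crude
    stand-in for 'raised fingers' sticking up above the palm."""
    cols = set()
    for row in mask[:top_rows]:
        cols.update(c for c, v in enumerate(row) if v)
    return len(cols)

# 5x7 toy mask: two 'fingers' above a 'palm' block
mask = [
    [0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0],
]
print(raised_columns(mask))   # 2
print(center_of_mass(mask))
```

Orientation could be added the same way, from the second-order moments of the foreground pixels.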


Author(s):  
Julakanti Likhitha Reddy ◽  
Bhavya Mallela ◽  
Lakshmi Lavanya Bannaravuri ◽  
Kotha Mohan Krishna

Interacting with the world using expressions or body movements is comparatively more effective than just speaking. Gesture recognition can be a better way to convey meaningful information. Communication through gestures has been widely used by humans to express their thoughts and feelings. Gestures can be performed with any body part, such as the head, face, hands, and arms, but the hand is most predominantly used, and hand gesture recognition has been widely accepted for numerous applications such as human-computer interaction, robotics, and sign language recognition. This paper focuses on a bare-hand gesture recognition system by proposing a database-driven scheme based on a skin color model approach and a thresholding approach, along with effective template matching, which can be used for human robotics and similar applications. Initially, the hand region is segmented by applying a skin color model in the YCbCr color space, where Y represents luminance and Cb and Cr represent chrominance. In the next stage, Otsu thresholding is applied to separate the foreground from the background. Finally, a template-based matching technique is developed using Principal Component Analysis (PCA), k-nearest neighbour (KNN), and Support Vector Machine (SVM) for recognition. KNN is used for statistical estimation and pattern recognition, while SVM can be used for classification or regression problems.
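The YCbCr skin-segmentation step can be sketched per pixel in pure Python. The conversion below follows the standard ITU-R BT.601 formulas, and the fixed Cb/Cr box is a commonly cited range, not necessarily the thresholds this paper uses; such bounds vary with lighting and dataset.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 conversion from 8-bit RGB to YCbCr."""
    y  =          0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 -    0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 +    0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Fixed Cb/Cr box often quoted for skin pixels -- an assumption here,
    since the actual thresholds depend on the paper and the lighting."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

print(is_skin(220, 170, 140))  # a typical skin tone -> True
print(is_skin(30, 90, 200))    # a blue background   -> False
```

Applying `is_skin` to every pixel yields the binary hand mask that Otsu thresholding and template matching then refine.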


The hand gesture detection problem is one of the most prominent problems in machine learning and computer vision applications. Many machine learning techniques have been employed to solve the hand gesture recognition problem, with applications in sign language recognition, virtual reality, human-machine interaction, autonomous vehicles, driver assistance systems, etc. In this paper, the goal is to design a system that correctly identifies hand gestures from a dataset of hundreds of hand gesture images. To accomplish this, a decision-fusion-based system using transfer learning architectures is proposed. Two pretrained models, namely MobileNet and Inception V3, are used for this purpose. To find the region of interest (ROI) in the image, the YOLO (You Only Look Once) architecture is used, which also decides the type of model. Edge-map images and spatial images are trained using two separate versions of the MobileNet-based transfer learning architecture, and the final probabilities are then combined to decide upon the hand sign of the image. The simulation results indicate the superiority of this paper's approach over previously researched approaches in terms of classification accuracy.
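The decision-fusion step itself is simple to sketch: combine the class-probability vectors from the two branches and take the argmax. The weighted average below and the toy softmax outputs are assumptions for illustration; the paper may combine probabilities differently.

```python
def fuse(probs_a, probs_b, w=0.5):
    """Late (decision-level) fusion: weighted average of two models'
    class-probability vectors, then argmax over the fused vector."""
    fused = [w * a + (1 - w) * b for a, b in zip(probs_a, probs_b)]
    return fused.index(max(fused)), fused

# Hypothetical softmax outputs for 3 gesture classes from the two branches
spatial = [0.60, 0.30, 0.10]   # e.g. the spatial-image branch
edges   = [0.20, 0.70, 0.10]   # e.g. the edge-map branch
label, fused = fuse(spatial, edges)
print(label)  # 1 -- the edge branch tips the vote to class 1
```

The point of fusing is visible even in this toy case: each branch alone picks a different class, while the combined evidence settles the decision.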


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4025
Author(s):  
Zhanjun Hao ◽  
Yu Duan ◽  
Xiaochao Dang ◽  
Yang Liu ◽  
Daiyang Zhang

In recent years, with the development of wireless sensing technology and the widespread popularity of WiFi devices, human perception based on WiFi has become possible, and gesture recognition has become an active topic in the field of human-computer interaction. As a kind of gesture, sign language is widely used in life. The establishment of an effective sign language recognition system can help people with aphasia and hearing impairment to better interact with the computer and facilitate their daily life. For this reason, this paper proposes a contactless fine-grained gesture recognition method using Channel State Information (CSI), namely Wi-SL. This method uses a commercial WiFi device to establish the correlation mapping between the amplitude and phase difference information of the subcarrier level in the wireless signal and the sign language action, without requiring the user to wear any device. We combine an efficient denoising method to filter environmental interference with an effective selection of optimal subcarriers to reduce the computational cost of the system. We also use K-means combined with a Bagging algorithm to optimize the Support Vector Machine (SVM) classification (KSB) model to enhance the classification of sign language action data. We implemented the algorithms and evaluated them for three different scenarios. The experimental results show that the average accuracy of Wi-SL gesture recognition can reach 95.8%, which realizes device-free, non-invasive, high-precision sign language gesture recognition.
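One of the cost-reduction steps described, selecting an informative subset of subcarriers, can be sketched with a simple variance criterion: subcarriers whose amplitude barely changes over time carry little gesture information. The criterion and the toy CSI matrix below are illustrative assumptions, not Wi-SL's actual selection rule.

```python
def select_subcarriers(csi, k=2):
    """Keep the k subcarriers whose amplitude varies most across time."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    ranked = sorted(range(len(csi)),
                    key=lambda i: variance(csi[i]), reverse=True)
    return sorted(ranked[:k])

# Rows = subcarriers, columns = amplitude samples over time (toy values)
csi = [
    [1.0, 1.0, 1.0, 1.0],   # flat: uninformative
    [0.2, 1.8, 0.3, 1.7],   # strong motion-induced swing
    [0.9, 1.1, 0.9, 1.1],   # mild swing
]
print(select_subcarriers(csi, k=2))  # [1, 2]
```

Only the retained rows would then be denoised and fed to the KSB classification model, shrinking the per-gesture computation.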


2021 ◽  
Vol 5 (2 (113)) ◽  
pp. 44-54
Author(s):  
Chingiz Kenshimov ◽  
Samat Mukhanov ◽  
Timur Merembayev ◽  
Didar Yedilkhan

For people with disabilities, sign language is the most important means of communication. Therefore, more and more authors of various papers and scientists around the world are proposing solutions that use intelligent hand gesture recognition systems. Such a system is aimed not only at those who wish to understand a sign language, but also at those who speak using gesture recognition software. In this paper, a new benchmark dataset for Kazakh fingerspelling, suitable for training deep neural networks, is introduced. The dataset contains more than 10122 gesture samples for 42 alphabet letters. The alphabet has its own peculiarities, as some characters are shown in motion, which may influence sign recognition. Research and analysis of convolutional neural networks, with comparison, testing, results, and analysis of the LeNet, AlexNet, ResNet, and EfficientNetB7 methods, are described in the paper. The EfficientNet architecture is state-of-the-art (SOTA) and is supposed to be new compared to the other architectures under consideration. On this dataset, we showed that the LeNet and EfficientNet networks outperform the other competing algorithms. Moreover, EfficientNet can achieve state-of-the-art performance on other hand gesture datasets. The architecture and operating principles of these algorithms reflect the effectiveness of their application in sign language recognition. The evaluation of the CNN model score is conducted by using accuracy and a penalty matrix. During training epochs, LeNet and EfficientNet showed better results: accuracy and loss function had similar and close trends. The results of EfficientNet were explained using the tools of the SHapley Additive exPlanations (SHAP) framework. SHAP explored the model to detect complex relationships between features in the images. Focusing on the SHAP tool may help to further improve the accuracy of the model.
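The accuracy metric used to score the CNNs can be read directly off a confusion matrix (the penalty-matrix evaluation weights the off-diagonal cells instead). The 3-class matrix below is invented for illustration; the paper's fingerspelling task has 42 classes.

```python
def accuracy(confusion):
    """Overall accuracy from a confusion matrix
    (rows = true class, columns = predicted class)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Toy 3-class fingerspelling confusion matrix (50 test samples per class)
cm = [
    [48,  1,  1],
    [ 2, 45,  3],
    [ 0,  4, 46],
]
print(round(accuracy(cm), 3))  # 0.927
```

A penalty matrix would replace the implicit 0/1 weighting here with class-pair-specific costs, so that confusing visually similar letters is punished differently from confusing dissimilar ones.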


2020 ◽  
Vol 7 (2) ◽  
pp. 164
Author(s):  
Aditiya Anwar ◽  
Achmad Basuki ◽  
Riyanto Sigit

<p><em>Hand gestures are a means of communication between deaf people and others. Each hand gesture has a different meaning. In order to communicate better, we need an automatic translator that can recognize hand movements as words or sentences when communicating with deaf people. </em><em>This paper proposes a system to recognize hand gestures based on the Indonesian Sign Language Standard. The system uses the Myo Armband as a hand gesture sensor. The Myo Armband has 21 sensors to capture hand gesture data. The recognition process uses a Support Vector Machine (SVM) to classify hand gestures based on a dataset of the Indonesian Sign Language Standard. The SVM yields an accuracy of 86.59% in recognizing hand gestures as sign language.</em></p><p><em><strong>Keywords</strong></em><em>: </em><em>Hand Gesture Recognition, Feature Extraction, Indonesian Sign Language, Myo Armband, Moment Invariant</em></p>


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5282 ◽  
Author(s):  
Adam Ahmed Qaid MOHAMMED ◽  
Jiancheng Lv ◽  
MD. Sajjatul Islam

Recent research on hand detection and gesture recognition has attracted increasing interest due to its broad range of potential applications, such as human-computer interaction, sign language recognition, hand action analysis, driver hand behavior monitoring, and virtual reality. In recent years, several approaches have been proposed with the aim of developing a robust algorithm which functions in complex and cluttered environments. Although several researchers have addressed this challenging problem, a robust system is still elusive. Therefore, we propose a deep learning-based architecture to jointly detect and classify hand gestures. In the proposed architecture, the whole image is passed through a one-stage dense object detector to extract hand regions, which, in turn, pass through a lightweight convolutional neural network (CNN) for hand gesture recognition. To evaluate our approach, we conducted extensive experiments on four publicly available datasets for hand detection, including the Oxford, 5-signers, EgoHands, and Indian classical dance (ICD) datasets, along with two hand gesture datasets with different gesture vocabularies for hand gesture recognition, namely, the LaRED and TinyHands datasets. Here, experimental results demonstrate that the proposed architecture is efficient and robust. In addition, it outperforms other approaches in both the hand detection and gesture classification tasks.

