Template Matching Based Sign Language Recognition System for Android Devices

2020 ◽  
Vol 5 (1) ◽  
Author(s):  
Kudirat O Jimoh ◽  
Anuoluwapo O Ajayi ◽  
Ibrahim K Ogundoyin

An Android-based sign language recognition system for selected English vocabularies was developed with the explicit objective of examining the specific characteristics responsible for gesture recognition. A recognition model for the process was also designed, implemented, and evaluated on 230 samples of hand gestures. The collected samples were pre-processed and rescaled from 3024 × 4032 pixels to 245 × 350 pixels. The samples were examined for the specific characteristics using Oriented FAST and Rotated BRIEF (ORB), and Principal Component Analysis was used for feature extraction. The model was implemented in Android Studio using the template matching algorithm as its classifier. The performance of the system was evaluated using precision, recall, and accuracy as metrics. The system obtained an average classification rate of 87%, an average precision of 88%, and an average recall of 91% on the test data of hand gestures. The study has therefore successfully classified hand gestures for selected English vocabularies. The developed system will enhance communication between hearing and hearing-impaired people, and also aid their teaching and learning processes. Future work includes exploring state-of-the-art machine learning techniques such as Generative Adversarial Networks (GANs) on larger datasets to improve the accuracy of the results. Keywords— Feature extraction; Gesture recognition; Sign language; Vocabulary; Android device.
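
A minimal Python/OpenCV sketch of the two ingredients named in this abstract, ORB keypoint description and template matching as the classifier. The 245 × 350 rescaling comes from the abstract, but the file paths, the number of ORB features, and the matching score are illustrative assumptions, and the PCA step is omitted.

```python
import cv2
import numpy as np

def preprocess(path, size=(245, 350)):
    """Load a gesture image, convert to grayscale, and rescale (width, height)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.resize(img, size)

def orb_descriptors(img):
    """Extract ORB (Oriented FAST and Rotated BRIEF) descriptors from a gesture image."""
    orb = cv2.ORB_create(nfeatures=500)  # 500 keypoints is an illustrative choice
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return descriptors

def template_match_score(img, template):
    """Normalized cross-correlation score between a query image and a stored template."""
    result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    return float(result.max())

def classify(query_path, templates):
    """Assign the label of the best-scoring template. templates: {label: template image}."""
    query = preprocess(query_path)
    scores = {label: template_match_score(query, tpl) for label, tpl in templates.items()}
    return max(scores, key=scores.get)
```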

Author(s):  
Asra Abdolmalaki ◽  
Abdulbaghi Ghaderzadeh ◽  
Vafa Maihami

Sign language is the main communication method between deaf people and ordinary people. Ordinary people learn and understand written language through its visual representation; for deaf people, however, there is no correspondence between speech and writing, and the letters are only symbols that carry no meaning for them. Since most ordinary people are not familiar with sign language, a sign language recognition system can be useful for recognising signs. In this paper, a novel approach is presented to identify static Persian signs. The proposed sign language recognition system consists of two phases: segmentation and feature extraction. In the segmentation phase, the hand region is first separated from the original image by an effective segmentation method based on a Gaussian model in the YCbCr color space; Bayes' rule is then used to identify the hand region precisely. In the feature extraction phase, a radial model is used to obtain a one-dimensional function describing the hand-region boundary and to compute the combined feature vector, and the Fourier transform is applied to normalize these features. The proposed system does not use gloves or sensors of any kind. The system was trained and tested using 480 image samples of Persian sign language characters, 15 images per sign, stored as .jpg files. Extensive experimental evaluations indicate that the proposed recognition system is less susceptible to displacement, scale, and rotation, and can detect signs with an accuracy of 95.62%.
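
A minimal sketch of the segmentation idea described above: a Gaussian skin-colour model in the YCbCr space combined with a Bayes decision rule. The mean, covariance, prior, and background likelihood below are illustrative placeholders, not the values fitted in the paper.

```python
import cv2
import numpy as np

# Hypothetical skin-colour statistics in the (Cb, Cr) plane.
SKIN_MEAN = np.array([120.0, 155.0])
SKIN_COV = np.array([[80.0, 20.0], [20.0, 60.0]])
SKIN_PRIOR = 0.4  # P(skin); P(background) = 1 - SKIN_PRIOR

def skin_likelihood(cbcr):
    """Gaussian likelihood of each pixel's (Cb, Cr) value under the skin model."""
    diff = cbcr - SKIN_MEAN
    inv = np.linalg.inv(SKIN_COV)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(SKIN_COV)))
    return norm * np.exp(-0.5 * np.einsum('...i,ij,...j->...', diff, inv, diff))

def segment_hand(bgr_image, bg_likelihood=1.0 / (255.0 * 255.0)):
    """Label a pixel as hand when P(skin | x) > P(background | x) (Bayes rule)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cbcr = ycrcb[..., [2, 1]]  # OpenCV stores channels as Y, Cr, Cb
    posterior_skin = skin_likelihood(cbcr) * SKIN_PRIOR
    posterior_bg = bg_likelihood * (1 - SKIN_PRIOR)
    return (posterior_skin > posterior_bg).astype(np.uint8) * 255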


Author(s):  
Wijayanti Nurul Khotimah ◽  
Nanik Suciati ◽  
Tiara Anggita

A Sign Language Recognition System (SLRS) is a system that recognises sign language and translates it into text. Such a system can be developed using a sensor-based technique. Several studies have implemented various feature extraction and classification methods to recognise sign language in different countries. However, their systems were user dependent: accuracy was high when the training and testing user were the same person, but degraded when the testing user differed from the training user. Therefore, in this study, we proposed a feature extraction method that is user invariant. We used the distances between the user's skeleton joints instead of the raw joint positions, because these distances do not depend on the user's position. In total, forty-five features were extracted by the proposed method. The features were then classified with a method suited to the characteristics of sign language gestures (time-dependent sequence data): Dynamic Time Warping. For the experiment, we used twenty Indonesian sign language gestures from different semantic groups (greetings, questions, pronouns, places, family, and others) and with different gesture characteristics (static and dynamic gestures). The system was then tested by a user different from the one who performed the training. The results were promising: the proposed method achieved a high accuracy of 91%, which shows that the method is user independent.
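
A sketch of the user-invariant feature idea and the Dynamic Time Warping classifier described above. Note that ten tracked joints would give C(10, 2) = 45 pairwise distances, matching the forty-five features quoted in the abstract, although the exact joint set used in the paper is an assumption.

```python
import numpy as np
from itertools import combinations

def frame_features(joints):
    """joints: (n_joints, 3) array of 3D joint positions for one frame.
    Returns all pairwise joint distances, which do not depend on where the
    user stands relative to the sensor."""
    return np.array([np.linalg.norm(joints[i] - joints[j])
                     for i, j in combinations(range(len(joints)), 2)])

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two feature sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query_seq, templates):
    """Nearest-neighbour classification by DTW. templates: list of (label, sequence)."""
    return min(templates, key=lambda t: dtw_distance(query_seq, t[1]))[0]
```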


2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
MALHOTRA POOJA ◽  
K. MANIAR CHIRAG ◽  
V. SANKPAL NIKHIL ◽  
R. THAKKAR HARDIK ◽  
◽  
...  

2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have few ways of communicating with other people; one of these is sign language. Objective: Developing a sign language recognition system is essential for deaf as well as mute people. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech components are introduced to further assist the affected people. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine learning based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because a sharply defined image is provided to the model for classification. The test results indicate a reliable recognition system with high accuracy that covers most of the essential features a deaf and mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, a range of technologies is combined in the proposed work. The proposed sign recognition system is based on feature extraction and classification, and the trained model helps identify different gestures.
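
A hedged sketch of a small gesture classifier along the lines described above, built with TensorFlow/Keras as named in the abstract. The input size, number of classes, and layer widths are illustrative guesses, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26            # assumption: one class per static alphabet sign
INPUT_SHAPE = (64, 64, 3)   # assumption: cropped RGB frames resized to 64x64

def build_sign_cnn():
    """Small convolutional classifier for cropped gesture frames."""
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Example usage with pre-loaded arrays of frames and integer labels:
# model = build_sign_cnn()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```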


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures a 3D image, which makes classification more accurate. Result: The Kinect camera produces different images for the hand gestures for '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian Sign Language, and our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with the real-time implementation, we have also compared the performance of various machine learning models and found that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ Z2 board.
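
A minimal sketch of the evaluation protocol described above: an 80/20 train/test split on depth images and a comparison of simple baseline classifiers against a CNN (see the Keras sketch earlier in this listing). The dataset loading, image size, and the particular baselines (k-NN, SVM) are assumptions; the abstract does not name which models were compared.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def compare_baselines(depth_images, labels):
    """depth_images: (N, H, W) array of Kinect depth frames; labels: (N,) gesture ids."""
    X = depth_images.reshape(len(depth_images), -1) / 255.0  # flatten for classical models
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)

    results = {}
    for name, clf in {"k-NN": KNeighborsClassifier(n_neighbors=5),
                      "SVM": SVC(kernel="rbf")}.items():
        clf.fit(X_train, y_train)
        results[name] = accuracy_score(y_test, clf.predict(X_test))
    return results  # the CNN would be trained on the unflattened depth images
```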

