Translation of Sign Language for Deaf and Dumb People

2020 ◽  
Vol 8 (5) ◽  
pp. 5592-5595

Deaf-mute people can communicate with hearing people with the help of sign languages. Our project objective is to analyse and translate sign language, that is, hand gestures, into text and voice. For this process, a real-time image made by a deaf-mute person is captured and given as input to the pre-processor. Then, feature extraction is performed using Otsu's algorithm and classification using an SVM (Support Vector Machine). After the text for the corresponding sign has been produced, the obtained text is converted into voice using a MATLAB function. Thus, hand gestures made by deaf-mute people are analysed and translated into text and voice for better communication.
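The segmentation step named above can be sketched in a few lines. This is a minimal illustration, not the paper's MATLAB implementation: Otsu's algorithm picks the grayscale threshold that maximizes the between-class variance of the two pixel populations, and the resulting binary mask is what a feature extractor and SVM would then consume. The toy image below is an assumption for demonstration.

```python
import numpy as np

def otsu_threshold(gray):
    """Compute Otsu's threshold for an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                      # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0                    # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                    # background mean
        mu1 = (sum_all - sum0) / w1        # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:         # maximize between-class variance
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: dark background, bright "hand" region.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)  # segmented mask fed to feature extraction
```

In practice a library call such as OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)` would replace the hand-rolled loop; the explicit version is shown only to make the criterion visible.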

There are many people with disabilities in our world; among them, people who are deaf and dumb cannot convey their messages to hearing people, and conversation becomes very difficult for them. Deaf people cannot hear what a hearing person is saying; similarly, dumb people must convey their message using sign language, which hearing people cannot understand unless they know the sign language. This leads to the need for an application that enables conversation between deaf, dumb, and hearing people. Here we use hand gestures of Indian Sign Language (ISL), which contains gestures for all the alphabets and the digits 0-9. The dataset of alphabets and digits was created by us. After building the dataset, we extracted features using bag-of-words and image pre-processing. From the extracted features, histograms are generated that map alphabets to images. Finally, these features are fed to a supervised machine learning model to predict the gesture/sign. We also used a CNN model for training.
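The bag-of-words-over-images idea mentioned above can be sketched as follows. This is a hedged sketch under synthetic data, not the authors' pipeline: local image patches are clustered into a visual "vocabulary" with k-means, each image becomes a histogram of visual-word counts, and an SVM predicts the class from that histogram. Patch size, vocabulary size, and the two-class toy data are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def patches(image, size=4):
    """Split an image into non-overlapping size x size patch vectors."""
    h, w = image.shape
    return [image[i:i + size, j:j + size].ravel()
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

# Synthetic "gesture" images: two classes with different intensity levels.
images = [rng.normal(loc=c * 5.0, scale=1.0, size=(16, 16))
          for c in (0, 1) for _ in range(10)]
labels = [c for c in (0, 1) for _ in range(10)]

# Build the visual vocabulary by clustering all patches from all images.
all_patches = np.vstack([p for im in images for p in patches(im)])
vocab = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_patches)

def bow_histogram(image):
    """Map an image to a normalized histogram of visual-word counts."""
    words = vocab.predict(np.vstack(patches(image)))
    hist = np.bincount(words, minlength=8).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
```

A CNN, as the authors also tried, would learn the features end-to-end instead of fixing a hand-built vocabulary.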


Author(s):  
Rashmi K. Thakur ◽  
Manojkumar V. Deshpande

Sentiment analysis is one of the popular techniques gaining attention in recent times. Nowadays, people gather information from user reviews of public transportation, movies, hotel reservations, etc., by utilizing the resources available to meet their needs. Hence, sentiment classification is an essential process employed to determine positive and negative responses. This paper presents an approach for sentiment classification of train reviews using the MapReduce model with the proposed Kernel Optimized Support Vector Machine (KO-SVM) classifier. The MapReduce framework handles big data using a mapper, which performs feature extraction, and a reducer, which classifies the review based on KO-SVM classification. The feature extraction process utilizes classification-specific and SentiWordNet-based features. KO-SVM adopts SVM for the classification, where the exponential kernel is replaced by an optimized kernel whose weights are found using a novel optimizer, the Self-adaptive Lion Algorithm (SLA). In a comparative analysis, the performance of the KO-SVM classifier is compared with SentiWordNet, NB, NN, and LSVM using the evaluation metrics specificity, sensitivity, and accuracy on train review and movie review databases. The proposed KO-SVM classifier attained a maximum sensitivity of 93.46% and 91.249%, specificity of 74.485% and 70.018%, and accuracy of 84.341% and 79.611%, respectively, for the train review and movie review databases.
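The kernel-replacement idea can be illustrated with scikit-learn, which accepts a callable kernel in `SVC`. This sketch is not the paper's method: the per-feature weights here are fixed by hand, whereas KO-SVM finds them with the Self-adaptive Lion Algorithm; the data are synthetic stand-ins for review feature vectors.

```python
import numpy as np
from sklearn.svm import SVC

def weighted_exp_kernel(w, gamma=1.0):
    """Exponential kernel with per-feature weights w:
    K(a, b) = exp(-gamma * sum_k w_k * (a_k - b_k)^2)."""
    def kernel(A, B):
        diff = A[:, None, :] - B[None, :, :]
        return np.exp(-gamma * np.einsum("ijk,k->ij", diff ** 2, w))
    return kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = (X[:, 0] > 0).astype(int)   # only feature 0 is informative

w = np.array([1.0, 0.1, 0.1])   # hypothetical learned feature weights
clf = SVC(kernel=weighted_exp_kernel(w)).fit(X, y)
```

Down-weighting uninformative features in the kernel is the intuition behind optimizing the kernel weights rather than using a plain RBF.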


2020 ◽  
Vol 7 (2) ◽  
pp. 164
Author(s):  
Aditiya Anwar ◽  
Achmad Basuki ◽  
Riyanto Sigit

<p><em>Hand gestures are a means of communication between deaf people and others. Each hand gesture has a different meaning. For better communication, we need an automatic translator that can recognize hand movements as a word or sentence when communicating with deaf people. </em><em>This paper proposes a system to recognize hand gestures based on the Indonesian Sign Language Standard. The system uses the Myo Armband as a hand gesture sensor. The Myo Armband has 21 sensors to capture hand gesture data. The recognition process uses a Support Vector Machine (SVM) to classify hand gestures based on a dataset of the Indonesian Sign Language Standard. The SVM yields an accuracy of 86.59% in recognizing hand gestures as sign language.</em></p><p><em><strong>Keywords</strong></em><em>: </em><em>Hand Gesture Recognition, Feature Extraction, Indonesian Sign Language, Myo Armband, Moment Invariant</em></p>
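A gesture classifier over armband sensor streams can be sketched as below. This is a minimal illustration under assumptions, not the paper's pipeline: each recording window of shape (samples, channels) is reduced to simple time-domain features (mean absolute value and root mean square per channel), and an SVM assigns the gesture label. The channel count and synthetic data are placeholders; the paper reads 21 sensor values from the Myo Armband.

```python
import numpy as np
from sklearn.svm import SVC

CHANNELS = 8  # assumption for this sketch; the paper uses 21 sensor values

def window_features(window):
    """Reduce a (samples, channels) window to per-channel MAV and RMS."""
    mav = np.mean(np.abs(window), axis=0)        # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))  # root mean square
    return np.concatenate([mav, rms])

rng = np.random.default_rng(2)

def fake_window(amplitude):
    """Synthetic sensor window; amplitude stands in for muscle activity."""
    return rng.normal(scale=amplitude, size=(50, CHANNELS))

# Two toy "gestures" that differ in signal amplitude.
X = np.array([window_features(fake_window(a))
              for a in ([0.5] * 15 + [2.0] * 15)])
y = [0] * 15 + [1] * 15
clf = SVC(kernel="rbf").fit(X, y)
```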


2020 ◽  
Vol 5 (2) ◽  
pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on the vowel alphabets. There are twenty-five (25) alphabets in the Yorùbá language, one of which is a digraph (GB). Due to the difficulty of typing handwritten Yorùbá documents, there is a need for a handwritten recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase alphabets. Handwritten characters and words were obtained from different writers using the Paint application and M708 graphics tablets. The characters were used for training and the words for testing. The images were pre-processed, and their geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used are the number of horizontal lines, number of vertical lines, number of right-diagonal lines, number of left-diagonal lines, total length of all horizontal lines, total length of all vertical lines, total length of all right-slanting lines, total length of all left-slanting lines, and the area of the skeleton. Each character was divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and geometric features in each zone. The words were fed into a support vector machine classifier, and performance was evaluated based on recognition accuracy. Since the support vector machine is a two-class classifier, a multiclass SVM variant, the least squares support vector machine (LSSVM), was used for word recognition with the one-vs-one strategy and an RBF kernel. The recognition accuracies obtained for the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%.
The low recognition rate for some of the words could be a result of similarity in the extracted features.
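The zoning step described above can be sketched briefly. This is an illustrative assumption-laden sketch, not the study's implementation: a character image is split into a 3x3 grid of 9 zones, and the horizontal and vertical gradient components are summed per zone, giving a fixed-length feature vector. The toy "character" below is a single vertical stroke.

```python
import numpy as np

def zoning_gradient_features(image, grid=3):
    """3x3 zoning: sum |horizontal| and |vertical| gradient per zone."""
    gy, gx = np.gradient(image.astype(float))  # vertical, horizontal components
    h, w = image.shape
    zh, zw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            zx = gx[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            zy = gy[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            feats.extend([np.abs(zx).sum(), np.abs(zy).sum()])
    return np.array(feats)

# Toy "character": a vertical stroke, so gradient energy is horizontal.
img = np.zeros((9, 9))
img[:, 4] = 1.0
f = zoning_gradient_features(img)  # 9 zones x 2 components = 18 features
```

Such per-zone vectors are what a multiclass classifier like LSSVM would consume.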

