Spelling Correction Real-Time American Sign Language Alphabet Translation System Based on YOLO Network and LSTM

Electronics
2021
Vol 10 (9)
pp. 1035
Author(s):  
Miguel Rivera-Acosta ◽  
Juan Manuel Ruiz-Varela ◽  
Susana Ortega-Cisneros ◽  
Jorge Rivera ◽  
Ramón Parra-Michel ◽  
...  

In this paper, we present a novel approach that aims to solve one of the main challenges in hand gesture recognition in static images: compensating for the accuracy loss that occurs when trained models are used to interpret completely unseen data. The model presented here consists of two main data-processing stages. The first is a deep neural network (DNN) that performs handshape segmentation and classification; multiple architectures and input image sizes were tested and compared to derive the best model in terms of accuracy and processing time. For the experiments presented in this work, the DNN models were trained with 24,000 images of 24 signs from the American Sign Language alphabet and fine-tuned with 5,200 images of 26 generated signs. The system was tested in real time with a community of 10 people, yielding a mean average precision of 81.74% and a processing rate of 61.35 frames per second. As a second data-processing stage, a bidirectional long short-term memory (LSTM) neural network was implemented and analyzed to add spelling-correction capability to our system; it scored a training accuracy of 98.07% with a dictionary of 370 words, thus increasing robustness on completely unseen data, as shown in our experiments.
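The abstract describes the second, spelling-correction stage but not its implementation. The following is a minimal sketch of a character-level bidirectional LSTM spelling corrector in PyTorch; the layer sizes, the 27-symbol alphabet (26 letters plus a blank), and the stand-in training data are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SpellingCorrector(nn.Module):
    """Bidirectional LSTM mapping a sequence of (possibly misrecognized)
    letter indices to a corrected letter sequence. Hyperparameters are
    illustrative, not the paper's."""
    def __init__(self, n_letters=27, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_letters, embed_dim)   # 26 letters + blank
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_letters)   # per-step logits

    def forward(self, x):                # x: (batch, seq_len) letter indices
        h, _ = self.lstm(self.embed(x))  # h: (batch, seq_len, 2*hidden_dim)
        return self.out(h)

# One illustrative training step: noisy detector output -> dictionary word.
model = SpellingCorrector()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

noisy = torch.randint(0, 27, (8, 10))    # stand-in for YOLO letter streams
target = torch.randint(0, 27, (8, 10))   # stand-in for dictionary words
optimizer.zero_grad()
logits = model(noisy)
loss = criterion(logits.reshape(-1, 27), target.reshape(-1))
loss.backward()
optimizer.step()
```

In the paper's setup, the noisy input sequences would come from the YOLO detector's per-frame letter predictions, and the targets from the 370-word dictionary.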

TEM Journal
2020
pp. 937-943
Author(s):  
Rasha Amer Kadhim ◽  
Muntadher Khamees

In this paper, a real-time ASL recognition system was built with a ConvNet algorithm using real color images from a PC camera. The model is the first ASL recognition model to categorize all 26 letters, including J and Z, along with two new classes for space and delete, which were explored with new datasets. The datasets were built to contain a wide diversity of attributes, such as different lighting conditions, skin tones, backgrounds, and a wide variety of situations. The experimental results achieved a high accuracy of about 98.53% for training and 98.84% for validation. The system also displayed high accuracy across all the datasets when new test data, which had not been used in training, were introduced.
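As a rough illustration of the kind of ConvNet classifier the abstract describes (26 letters plus space and delete, i.e. 28 classes), here is a minimal PyTorch sketch; the layer widths and the 64x64 RGB input size are assumptions, since the paper's architecture is not given here.

```python
import torch
import torch.nn as nn

class ASLConvNet(nn.Module):
    """Minimal 28-way ConvNet sketch (26 letters + space + delete).
    Input size and layer widths are illustrative assumptions."""
    def __init__(self, n_classes=28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),  # 64x64 input -> 8x8 maps
            nn.Linear(256, n_classes),
        )

    def forward(self, x):             # x: (batch, 3, 64, 64) camera frames
        return self.classifier(self.features(x))

model = ASLConvNet()
frame = torch.randn(1, 3, 64, 64)     # stand-in for a PC-camera frame
print(model(frame).shape)             # torch.Size([1, 28])
```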


2021
Author(s):  
Bhavadharshini M
Josephine Racheal J
Kamali M
Sankar S

Sign language is a system of hand gestures that serves as a medium for individuals with hearing or speech impairments to communicate with others. However, in order to communicate with a hearing-impaired individual, the communicator must first acquire knowledge of sign language, since it is essential to ensure that the message conveyed by the hearing-impaired person is understood. The implemented system proposes a real-time American Sign Language recognition approach based on a Convolutional Neural Network (CNN) supported by the You Only Look Once (YOLO) algorithm. The algorithm first performs data acquisition, then pre-processes the gestures, and traces hand movement using a combinational algorithm.
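To make the described acquisition-detection loop concrete, here is a minimal sketch of real-time capture and YOLO inference in Python, using OpenCV and the Ultralytics YOLO package as a stand-in detector; the paper does not name an implementation, and the weights file "asl_yolo.pt" is hypothetical.

```python
import cv2
from ultralytics import YOLO  # assumed stand-in; the paper names no library

# Sketch of the real-time loop: grab a frame, run a YOLO hand-sign
# detector, and draw the predicted letter on the frame.
model = YOLO("asl_yolo.pt")   # hypothetical trained weights
cap = cv2.VideoCapture(0)     # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)            # detection + classification per frame
    for box in results[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = model.names[int(box.cls)]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```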


2018
Vol 21 (6)
pp. e12672
Author(s):  
Kyle MacDonald ◽  
Todd LaMarr ◽  
David Corina ◽  
Virginia A. Marchman ◽  
Anne Fernald

Teknik
2021
Vol 42 (2)
pp. 137-148
Author(s):  
Vincentius Abdi Gunawan ◽  
Leonardus Sandy Ade Putra

Communication is essential in conveying information from one individual to another. However, not all individuals in the world can communicate verbally. According to the WHO, hearing loss affects 466 million people globally, 34 million of whom are children. It is therefore necessary to have a non-verbal language learning method for people with hearing problems. The purpose of this study is to build a system that can identify non-verbal language so that it can be easily understood in real time. A high success rate requires applying a proper method in the system, such as machine learning supported by wavelet feature extraction and different classification methods in image processing. Machine learning was applied because of its ability to recognize gestures and to compare the classification results of four different methods. The four classifiers used to compare hand gesture recognition from American Sign Language are the Multi-Class Support Vector Machine (SVM), the Backpropagation Neural Network, K-Nearest Neighbor (K-NN), and Naïve Bayes. Simulation tests of the four classification methods obtained success rates of 99.3%, 98.28%, 97.7%, and 95.98%, respectively. It can therefore be concluded that the Multi-Class SVM classifier has the highest success rate in recognizing American Sign Language, reaching 99.3%. The whole system is designed and tested using MATLAB as supporting software and for data processing.
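The paper implements this pipeline in MATLAB, which is not shown here. Below is a minimal Python sketch of the same idea using PyWavelets and scikit-learn; the Haar wavelet, 64x64 image size, classifier hyperparameters, and random stand-in data are illustrative assumptions, not the paper's choices.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def wavelet_features(gray_image):
    """Single-level 2-D Haar DWT; the flattened approximation
    coefficients serve as the feature vector (wavelet choice assumed)."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image, "haar")
    return cA.ravel()

# Stand-in data: 100 grayscale 64x64 hand-gesture images, 24 classes.
rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))
labels = rng.integers(0, 24, size=100)
X = np.array([wavelet_features(img) for img in images])

# The four classifiers compared in the paper, via scikit-learn analogues.
classifiers = {
    "Multi-Class SVM": SVC(kernel="rbf"),        # one-vs-one by default
    "Backpropagation NN": MLPClassifier(max_iter=500),
    "K-NN": KNeighborsClassifier(n_neighbors=3),
    "Naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    clf.fit(X, labels)
    print(f"{name}: train accuracy {clf.score(X, labels):.3f}")
```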

