Classification of Hand Configurations of the Brazilian Sign Language Using a Kohonen Neural Network

2014, Vol 16 (2)
Author(s):  
Kelly Lais Wiggers ◽  
Angelita Maria de Ré ◽  
Andres Jessé Porfírio

Baby Sign Language is used by hearing parents with hearing infants as a form of preverbal communication: it reduces parents' frustration, accelerates learning in babies, increases parent-child bonding, and lets babies communicate vital information, such as whether they are hurt or hungry. In the current research work, a literature study of various existing sign languages was carried out; after finding that no dataset was available for Baby Sign Language, we created a static dataset of 311 baby signs, which were classified using MobileNet V1, a pretrained Convolutional Neural Network (CNN). The focus of the paper is to analyze the effect of Gradient Descent based optimizers (Adam and its variants, RMSProp, and others) on the fine-tuned pretrained MobileNet V1 model trained on the customized dataset. Each optimizer was used to train and test MobileNet for 100 epochs on the 311-sign dataset. Ten optimizers, including Adadelta, Adam, Adamax, SGD, Adagrad, and RMSProp, were compared based on their processing time.
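The comparison described above runs each optimizer through the same training job and measures processing time. As a minimal, self-contained sketch of that idea (the bare update rules timed on a toy quadratic objective, not the paper's MobileNet V1 fine-tuning pipeline; all function names and hyperparameters here are illustrative assumptions):

```python
import time
import numpy as np

# Each optimizer takes a gradient and per-optimizer state, and
# returns the parameter step plus the updated state.

def sgd(g, state, lr=0.1):
    return lr * g, state

def rmsprop(g, state, lr=0.01, beta=0.9, eps=1e-8):
    # Exponential moving average of squared gradients.
    v = beta * state.get("v", np.zeros_like(g)) + (1 - beta) * g**2
    state["v"] = v
    return lr * g / (np.sqrt(v) + eps), state

def adam(g, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # First and second moment estimates with bias correction.
    t = state.get("t", 0) + 1
    m = b1 * state.get("m", np.zeros_like(g)) + (1 - b1) * g
    v = b2 * state.get("v", np.zeros_like(g)) + (1 - b2) * g**2
    state.update(m=m, v=v, t=t)
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return lr * m_hat / (np.sqrt(v_hat) + eps), state

def minimize(update, steps=1000):
    """Minimize f(w) = ||w||^2 (gradient 2w) and time the run."""
    w = np.full(10, 5.0)
    state = {}
    start = time.perf_counter()
    for _ in range(steps):
        step, state = update(2 * w, state)
        w -= step
    return time.perf_counter() - start, float(np.sum(w**2))

for name, fn in [("SGD", sgd), ("RMSProp", rmsprop), ("Adam", adam)]:
    elapsed, loss = minimize(fn)
    print(f"{name:8s} time={elapsed:.4f}s final_loss={loss:.2e}")
```

The same harness pattern extends to the remaining optimizers (Adadelta, Adagrad, Adamax): each is just another update rule plugged into the timed loop.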


2020
Author(s):  
João Pedro C. Sobrinho ◽  
Lucas Pacheco H. da Silva ◽  
Gabriella Dalpra ◽  
Samuel Basilio

Recognized by law, the Brazilian Sign Language (LIBRAS) is the second Brazilian official language and, according to IBGE (the Brazilian Institute of Geography and Statistics), Brazil has a large community of hearing-impaired people, with approximately nine million deaf people. Moreover, most of the non-deaf community cannot communicate in or understand this language. Considering that, the use of LIBRAS interpreters becomes extremely necessary in order to allow greater inclusion of people with this type of disability in the whole community. An alternative solution to this problem is to use artificial neural network methods for LIBRAS recognition and translation. In this work, a process of LIBRAS recognition and translation is presented, using videos as input and a convolutional-recurrent neural network known as ConvLSTM. This type of neural network receives the sequence of frames from the videos and analyzes, frame by frame, whether the frame belongs to the video and whether the video belongs to a specific class. This analysis is done in two steps: first, the image is analyzed in the convolutional layer of the network and, after that, it is sent to the recurrent layer. In the current version of the implemented network, data collection has already been carried out, the convolutional-recurrent neural network has been trained, and it is possible to recognize whether a given LIBRAS video represents a specific sentence in this language.
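The two-step pipeline described above (convolutional features extracted frame by frame, then a recurrent pass over the frame sequence) can be sketched in miniature. This is a toy NumPy illustration of the data flow only, not the authors' ConvLSTM (a real ConvLSTM applies convolutions inside the LSTM gates); every shape, weight, and function name below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(frame, kernels):
    """Step 1 (convolutional): valid 3x3 cross-correlation per kernel,
    global-average-pooled to one scalar feature per kernel."""
    h, w = frame.shape
    feats = []
    for k in kernels:
        acc = 0.0
        for i in range(h - 2):
            for j in range(w - 2):
                acc += np.sum(frame[i:i + 3, j:j + 3] * k)
        feats.append(acc / ((h - 2) * (w - 2)))
    return np.array(feats)

def recurrent_pass(frame_feats, W_h, W_x):
    """Step 2 (recurrent): simple tanh recurrence over the frame sequence,
    standing in for the LSTM part of a ConvLSTM."""
    h = np.zeros(W_h.shape[0])
    for x in frame_feats:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

def classify_video(video, kernels, W_h, W_x, W_out):
    feats = [conv_features(f, kernels) for f in video]  # frame by frame
    h = recurrent_pass(feats, W_h, W_x)                 # sequence summary
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                                  # class probabilities

# Toy run: 8 frames of a 16x16 "video", 4 kernels, hidden size 8, 2 classes
# (e.g. "matches the target LIBRAS sentence" vs "does not").
video = rng.normal(size=(8, 16, 16))
kernels = rng.normal(size=(4, 3, 3))
W_h = rng.normal(size=(8, 8)) * 0.1
W_x = rng.normal(size=(8, 4)) * 0.1
W_out = rng.normal(size=(2, 8))
probs = classify_video(video, kernels, W_h, W_x, W_out)
print(probs)  # two class probabilities summing to 1
```

In the paper's setting the binary output would correspond to recognizing whether a video represents a specific sentence; the weights here are random, whereas the authors' network is trained on collected LIBRAS videos.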

