Recognized by law, Brazilian Sign Language (LIBRAS) is the second official language of Brazil and, according to IBGE (the Brazilian Institute of Geography and Statistics), Brazil has a large hearing-impaired community, with approximately nine million deaf people. Moreover, most of the hearing community can neither communicate in nor understand this language. Consequently, LIBRAS interpreters become essential to allow greater inclusion of people with this type of disability in the community as a whole. An alternative solution to this problem, however, is to use artificial neural network methods for LIBRAS recognition and translation. In this work, a process of LIBRAS recognition and translation is presented, using videos as input and a convolutional-recurrent neural network known as ConvLSTM. This type of neural network receives the sequence of frames from a video and analyzes, frame by frame, whether the frame belongs to the video and whether the video belongs to a specific class. This analysis is done in two steps: first, each image is processed by the convolutional layer of the network and, after that, it is sent to the recurrent layer. In the current version of the implemented network, data collection has already been carried out, the convolutional-recurrent neural network has been trained, and it is possible to recognize whether or not a given LIBRAS video represents a specific sentence in this language.
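
To make the described architecture concrete, the following is a minimal sketch of a ConvLSTM video classifier in Keras. It illustrates the two-step frame-by-frame analysis (convolution on each frame, recurrence across the frame sequence) ending in a binary decision of whether a video represents a target sentence. The input dimensions, filter counts, and layer choices are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed video dimensions: 30 frames of 64x64 RGB images (hypothetical values).
NUM_FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 64, 64, 3

model = models.Sequential([
    # Convolutional-recurrent layer: applies convolutions to each frame while
    # carrying a recurrent (LSTM) state across the frame sequence.
    layers.ConvLSTM2D(filters=32, kernel_size=(3, 3), activation="tanh",
                      input_shape=(NUM_FRAMES, HEIGHT, WIDTH, CHANNELS),
                      return_sequences=False),
    layers.BatchNormalization(),
    layers.Flatten(),
    # Binary output: does the video represent the target LIBRAS sentence or not?
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Training such a model would consist of fitting it on labeled video tensors of shape (batch, frames, height, width, channels), with label 1 for videos of the target sentence and 0 otherwise.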