A Static Hand Gesture Recognition for Peruvian Sign Language Using Digital Image Processing and Deep Learning

Author(s):  
Cristian Lazo ◽  
Zaid Sanchez ◽  
Christian del Carpio
Author(s):  
Aniket Wattamwar

Abstract: This research work presents a prototype system that helps hearing people recognize hand gestures so they can communicate more effectively with deaf people. The work focuses on real-time recognition of the sign language gestures used by the deaf community. The problem is addressed with digital image processing techniques: convolutional neural networks (CNNs), skin detection, and image segmentation. The system recognizes American Sign Language (ASL) gestures, including the alphabet and a subset of its words.

Keywords: gesture recognition, digital image processing, CNN (Convolutional Neural Networks), image segmentation, ASL (American Sign Language), alphabet
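The skin-detection step mentioned in the abstract can be sketched with fixed chrominance thresholds. The sketch below is an illustrative assumption, not the paper's actual method: it uses the classic Chai & Ngan YCbCr bounds and plain NumPy, and the function name and threshold values are not taken from the paper.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of likely skin pixels.

    rgb: H x W x 3 array of uint8 RGB values.
    Uses fixed thresholds on the Cb/Cr chrominance channels
    (Chai & Ngan): 77 <= Cb <= 127 and 133 <= Cr <= 173.
    """
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    # ITU-R BT.601 RGB -> YCbCr conversion (chrominance channels only)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

In a pipeline like the one described, such a mask would isolate the hand region before segmentation and CNN classification.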


2021 ◽  
Vol 1 (1) ◽  
pp. 71-80
Author(s):  
Febri Damatraseta ◽  
Rani Novariany ◽  
Muhammad Adlan Ridhani

BISINDO is one of Indonesia's sign languages, but few facilities support it, which makes daily life difficult for deaf people. This research therefore offers a system that recognizes and translates the BISINDO alphabet into text, with the goal of helping deaf people communicate in both directions. The main problem encountered in this study is the small dataset. The research therefore tests hand gesture recognition by comparing two CNN architectures, LeNet-5 and AlexNet, to determine which classification technique performs better when each class contains fewer than 1,000 images. Testing on still images showed that the AlexNet model trained in this study is the better choice: it achieved a prediction accuracy of 76%, while the LeNet-5 model reached only 19%. When the AlexNet model was used in the proposed system, it predicted correctly 60% of the time.

Keywords: Sign language, BISINDO, Computer Vision, Hand Gesture Recognition, Skin Segmentation, CIELab, Deep Learning, CNN.
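The capacity gap between the two architectures compared above can be made concrete by counting trainable parameters. The sketch below uses the standard published layer specifications for LeNet-5 and an ungrouped AlexNet, which are assumptions drawn from the original architecture papers rather than from this study's exact models.

```python
def conv_params(filters, k, in_ch):
    # each filter has k*k*in_ch weights plus one bias
    return filters * (k * k * in_ch + 1)

def fc_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_out * (n_in + 1)

# Classic LeNet-5 (32x32 grayscale input)
lenet = (conv_params(6, 5, 1)      # C1
         + conv_params(16, 5, 6)   # C3
         + fc_params(400, 120)     # C5
         + fc_params(120, 84)      # F6
         + fc_params(84, 10))      # output

# AlexNet, ignoring the original two-GPU channel grouping
alexnet = (conv_params(96, 11, 3)
           + conv_params(256, 5, 96)
           + conv_params(384, 3, 256)
           + conv_params(384, 3, 384)
           + conv_params(256, 3, 384)
           + fc_params(9216, 4096)
           + fc_params(4096, 4096)
           + fc_params(4096, 1000))

print(lenet, alexnet)  # roughly 62 thousand vs 62 million parameters
```

The three-orders-of-magnitude difference in capacity helps frame why the choice of architecture matters so much when each class has fewer than 1,000 images.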


Author(s):  
Sruthy Skaria ◽  
Da Huang ◽  
Akram Al-Hourani ◽  
Robin J. Evans ◽  
Margaret Lech
