American Sign Language Static Gesture Recognition using Deep Learning and Computer Vision

Author(s):  
Sai Nikhilesh Reddy Karna
Jai Surya Kode
Suneel Nadipalli
Sudha Yadav

Author(s):
Dhanashree Shyam Bendarkar
Pratiksha Appasaheb Somase
Preety Kalyansingh Rebari
Renuka Ramkrishna Paturkar
Arjumand Masood Khan

Individuals with hearing impairment use sign language to exchange their thoughts, generally communicating among themselves through hand movements. However, these hand movements are not understood by people outside the community, so a mechanism is needed that can act as a translator between them. Interaction would be easier if infrastructure existed that could convert signs directly into text and voice messages. Many sign language recognition systems have been developed in recent years, but most handle either static gestures or dynamic gestures alone. Since sentences are formed from combinations of static and dynamic gestures, hearing-impaired individuals would be better served by automated systems that detect both together. We propose a design and architecture for American Sign Language (ASL) recognition with convolutional neural networks (CNNs). This paper uses a pretrained VGG-16 architecture for static gesture recognition; for dynamic gesture recognition, spatiotemporal features are learned with a deep architecture combining a bidirectional convolutional Long Short-Term Memory network (ConvLSTM) and a 3D convolutional neural network (3DCNN), which is responsible for extracting the 2D spatiotemporal features.
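As a rough illustration of the two branches described above, the following minimal Keras sketches show (a) a pretrained VGG-16 fine-tuned for static gestures and (b) a bidirectional ConvLSTM combined with a 3D convolution for dynamic gestures. The class count, input sizes, and layer widths are assumptions for illustration, not the authors' reported configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed: one class per static ASL letter

# Static branch: frozen pretrained VGG-16 as a feature extractor.
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                  input_shape=(224, 224, 3))
vgg.trainable = False
static_model = models.Sequential([
    vgg,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Dynamic branch: bidirectional ConvLSTM over a clip of frames,
# followed by a 3D convolution to extract spatiotemporal features.
FRAMES = 16  # assumed clip length
dynamic_model = models.Sequential([
    layers.Bidirectional(
        layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                          return_sequences=True),
        input_shape=(FRAMES, 64, 64, 3)),
    layers.Conv3D(64, kernel_size=3, padding="same", activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

Both models would be compiled with a categorical cross-entropy loss and trained on labeled images and clips, respectively.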


Author(s):  
Mohit Panwar ◽  
Rohit Pandey ◽  
Rohan Singla ◽  
Kavita Saxena

Every day we encounter many people who are deaf or unable to speak, and there are few technologies that help them interact with others, so they face difficulty communicating. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans the pipeline from sign gesture acquisition to text/speech generation. Sign gestures can be classified as static and dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both kinds of recognition system matter to the community. This survey describes the steps of American Sign Language (ASL) recognition. Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted by other people. Earlier glove-based methods required the signer to wear a hardware glove while the hand movements were captured, which is uncomfortable for practical use; here we use a vision-based method instead. Convolutional neural networks and a mobile SSD (Single Shot Detector) model are employed in this paper to recognize sign language gestures. Preprocessing was performed on the images, which then served as the cleaned input, and TensorFlow was used to train on them. The resulting system serves as a tool for sign language detection.
Keywords: ASL recognition system, convolutional neural network (CNN), classification, real time, TensorFlow
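As a concrete sketch of the vision-based pipeline this abstract outlines, the code below preprocesses gesture images and trains a small CNN classifier with TensorFlow. It is an illustration under assumptions: the "asl_dataset" directory, image size, and class count are hypothetical, and the SSD detection stage is omitted.

import tensorflow as tf

IMG_SIZE = (64, 64)
NUM_CLASSES = 26  # assumed number of gesture classes

# Load images from a hypothetical folder with one subdirectory per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset", image_size=IMG_SIZE, batch_size=32)

# Preprocessing: scale pixels to [0, 1] so the network sees cleaned input.
rescale = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)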


2020, Vol. 38 (6A), pp. 926-937
Author(s):  
Abdulwahab A. Abdulhussein
Firas A. Raheem

American Sign Language (ASL) is a complex language. It depends on a special gesture standard of marks, represented by the hands with assistance from facial expression and body posture. ASL is the main communication language of deaf and hard-of-hearing people in North America and other parts of the world. In this paper, gesture recognition of static ASL using deep learning is proposed. The contribution consists of two solutions to the problem. The first resizes the static ASL binary images with bicubic interpolation; in addition, good recognition results are obtained by detecting the hand boundary with the Roberts edge detection method. The second solution classifies the 24 static alphabet characters of ASL using a Convolutional Neural Network (CNN) and deep learning. The classification accuracy equals 99.3% and the loss-function error is 0.0002, with an elapsed training time of 36 minutes and 15 seconds over 100 iterations. The training is fast and gives very good results in comparison with other related works that trained CNN, SVM, and ANN models.
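The preprocessing steps named in this abstract can be sketched as follows; this is a minimal illustration using OpenCV and scikit-image, with the file name, thresholding step, and target size as assumptions rather than the authors' exact procedure.

import cv2
import numpy as np
from skimage.filters import roberts

# Load a grayscale gesture image (hypothetical file name).
img = cv2.imread("asl_sample.png", cv2.IMREAD_GRAYSCALE)

# Binarize the hand region (assumed Otsu threshold), then resize the
# binary image with bicubic interpolation to the CNN input size.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
resized = cv2.resize(binary, (64, 64), interpolation=cv2.INTER_CUBIC)

# Roberts cross operator highlights the hand boundary.
edges = roberts(resized.astype(np.float64) / 255.0)

The edge map (or the resized binary image) would then be fed to the CNN classifier over the 24 static alphabet classes.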


Author(s):  
Muhammad Saad Amin
Muhammad Talha Amin
Muhammad Yasir Latif
Ali Asghar Jathol
Nisar Ahmed
...  

Author(s):  
Sarvesh Joglekar
Hrishikesh Sawant
Aayush Jain
Priya Dhadda
Pankaj Sonawane
