Shape Based Continuous Real Time Hand Gesture Recognition System of American Sign Language Using KNN Classifier

2019
Author(s): Shivashankara S, Srinath S

2021
Author(s): Saliya S Shaikh, Akram A Patel, Pravadha Deshmukh Pawar, Rubana P Shaikh

A great deal of research has been done in the field of Human Computer Interaction (HCI). One such system, Hand Gesture Recognition (HGR), offers a way to build HCI systems. Nowadays, the computer is used as an interpreter between humans. The proposed system recognizes real-time static hand gestures for the Indian Sign Language number system, zero to nine. In this paper we propose a hand gesture recognition system that is simple and fast. Based on the proposed algorithm, the system automatically converts the input hand gesture into text and audio. The system first captures an image of the hand gesture shown by the user with a simple webcam and then recognizes the gesture using the proposed algorithm. Because only simple logic conditions are applied to recognize the gesture, the system is suitable for real-time applications. The proposed system is size invariant and is implemented using OpenCV.
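The abstract does not spell out the logic conditions used to recognize a gesture, so the following is only a minimal sketch of one common OpenCV approach: segment the hand by skin colour, take the largest contour, and count extended fingers from convexity defects. The HSV skin-colour range, the angle threshold, and the webcam index are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: contour-based static gesture recognition with OpenCV.
# Skin-colour bounds and the finger-counting rule are assumptions.
import cv2
import numpy as np

def count_fingers(mask):
    """Count extended fingers in a binary hand mask via convexity defects."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)            # largest blob assumed to be the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, _ in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-6))
        if angle < np.pi / 2:                             # sharp valley between two fingers
            fingers += 1
    return fingers + 1 if fingers else 0

cap = cv2.VideoCapture(0)                                 # simple webcam, as in the abstract
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed skin-colour range; tune for lighting and skin tone.
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n = count_fingers(mask)
    cv2.putText(frame, str(n), (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```

Because the decision is made from contour geometry rather than pixel counts, a rule of this kind is naturally size invariant, which matches the property claimed for the proposed system.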


Author(s): Mohit Panwar, Rohit Pandey, Rohan Singla, Kavita Saxena

Every day we encounter many people who are deaf or unable to speak. Few technologies exist to help them interact, so they face difficulty communicating with others. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans sign gesture acquisition through text/speech generation. Sign gestures can be classified as static or dynamic. Static gesture recognition is simpler than dynamic gesture recognition, but both recognition systems are important to the community. The American Sign Language (ASL) recognition steps are described in this survey. Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted for other people. Earlier glove-based methods required the person to wear a hardware glove while the hand movements were captured, which is uncomfortable for practical use. Here we use a vision-based method. Convolutional neural networks and a mobile SSD model are employed in this paper to recognize sign language gestures. Preprocessing was performed on the images, which then served as the cleaned input. TensorFlow is used for training on the images. A system will be developed that serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural network (CNN), classification, real time, TensorFlow
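The exact network architecture and the SSD detection stage are not specified in the abstract, so the sketch below shows only a minimal TensorFlow/Keras CNN classifier of the kind described. The 64x64 grayscale input, the 26-class output, the layer sizes, and the "asl_dataset/train" directory are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of a CNN sign-language classifier in TensorFlow/Keras.
# Input size, class count, and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # assumption: one class per ASL letter
IMG_SIZE = (64, 64)       # assumption: resized, preprocessed input images

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 1)),        # grayscale, preprocessed frames
    layers.Rescaling(1.0 / 255),               # scale pixels to [0, 1]
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of preprocessed gesture images, one folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset/train", image_size=IMG_SIZE, color_mode="grayscale", batch_size=32)
model.fit(train_ds, epochs=10)
```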


Hearing-impaired individuals use sign languages to communicate with others within their community. Because of their widespread use within that community, hard-of-hearing individuals understand these languages easily, but most hearing people do not know them. In this paper, a hand gesture recognition system is developed to overcome this problem, so that those who do not know sign language can communicate easily with hard-of-hearing individuals. A computer vision-based system is designed to detect sign language. The datasets used in this paper are binary images. These images are given to a convolutional neural network (CNN), which extracts image features, classifies the images, and recognizes the gestures. The gestures used in this paper are from American Sign Language. In the real-time system, the images are converted to binary images using the Hue, Saturation, and Value (HSV) colour model. In this model, 87.5% of the data is used for training and 12.5% for testing, and the accuracy obtained is 97%.
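The abstract does not give the HSV thresholds used for binarisation, so the values below are assumed placeholders to be tuned for the camera and lighting; the 0.125 test fraction matches the 87.5%/12.5% split reported above.

```python
# Minimal sketch of the HSV-based binarisation step described above.
# LOWER_HSV / UPPER_HSV are assumed bounds, not the paper's values.
import cv2
import numpy as np

LOWER_HSV = np.array([0, 40, 60])      # assumed lower skin-colour bound
UPPER_HSV = np.array([25, 255, 255])   # assumed upper skin-colour bound

def to_binary(frame_bgr):
    """Convert a BGR camera frame to the binary hand mask fed to the CNN."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)   # white = hand, black = background
    mask = cv2.medianBlur(mask, 5)                  # remove speckle noise
    return mask

# 87.5% / 12.5% split, as reported for the model above.
from sklearn.model_selection import train_test_split
# X: binary images, y: gesture labels (dataset loading is omitted here).
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.125, random_state=0)
```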

