Real-Time Sign Language Detection and Recognition

Author(s):  
Sarthak Sharma

Abstract: Sign language is one of the oldest and most natural forms of language for communication, but since most people do not know sign language and interpreters are difficult to come by, we propose a real-time neural-network method for fingerspelling-based American Sign Language. In our method, the hand is first passed through a filter, and the filtered image is then passed to a classifier that predicts the class of the hand gesture.
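
The sketch below illustrates the filter-then-classify pipeline described in the abstract, as one plausible reading of it: a hand region is filtered (grayscale, blur, adaptive threshold) and the result is fed to a classifier. The model file "asl_fingerspelling.h5", the region of interest, and the label set are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of a filter-then-classify fingerspelling pipeline.
# The model file, ROI, and input size are hypothetical placeholders.
import cv2
import numpy as np
import tensorflow as tf

LABELS = list("ABCDEFGHIKLMNOPQRSTUVWXY")  # 24 static ASL letters (J and Z need motion)
model = tf.keras.models.load_model("asl_fingerspelling.h5")  # hypothetical pre-trained model

def classify_frame(frame, roi=(100, 100, 300, 300), size=64):
    x, y, w, h = roi
    hand = frame[y:y + h, x:x + w]
    # Filter stage: grayscale, blur, and adaptive threshold to isolate the hand
    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 2)
    mask = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 11, 2)
    # Classifier stage: resize, normalize, and predict the gesture class
    inp = cv2.resize(mask, (size, size)).astype("float32") / 255.0
    probs = model.predict(inp.reshape(1, size, size, 1), verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("Predicted letter:", classify_frame(frame))
cap.release()
```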

Author(s):  
Narayana Darapaneni ◽  
Prasad Gandole ◽  
Sureshkumar Ramasamy ◽  
Yashraj Tambe ◽  
Anshuman Dwivedi ◽  
...  

2018 ◽  
Vol 21 (6) ◽  
pp. e12672 ◽  
Author(s):  
Kyle MacDonald ◽  
Todd LaMarr ◽  
David Corina ◽  
Virginia A. Marchman ◽  
Anne Fernald

2019 ◽  
Vol 10 (3) ◽  
pp. 60-73 ◽  
Author(s):  
Ravinder Ahuja ◽  
Daksh Jain ◽  
Deepanshu Sachdeva ◽  
Archit Garg ◽  
Chirag Rajput

Communicating with each other through hand gestures is simply called sign language. It is an accepted language for communication among deaf and mute people in society. The deaf and mute community faces many obstacles in day-to-day communication with their acquaintances. The most recent study by the World Health Organization reports that a very large section of the world's population (around 360 million people) has hearing loss, i.e. 5.3% of the earth's total population. This creates a need for an automated system that converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is used on 24 hand signs of American Sign Language in order to ease communication. OpenCV was used for further processing steps such as image preprocessing. The results demonstrated that the CNN achieves an accuracy of 99.7% using the dataset found on kaggle.com.
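
As an illustration of the kind of CNN classifier the abstract describes, the sketch below builds a small Keras model for 24 hand-sign classes, assuming 28x28 grayscale inputs (the layout of the Sign Language MNIST dataset on kaggle.com). The layer sizes and hyperparameters are generic assumptions, not the architecture or accuracy reported in the paper.

```python
# Illustrative CNN for 24 ASL hand signs, assuming 28x28 grayscale inputs.
# Generic sketch only; not the paper's reported architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes=24, input_shape=(28, 28, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
```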

