Sign Language Translation using Hand Gesture Detection

This paper demonstrates a sign-to-speech (voice) module for automatic conversion of American Sign Language (ASL) into English speech and text. It narrows the communication gap between speech-impaired people and others: a speech-impaired person's thoughts and views, expressed through ASL gestures, often fail to reach listeners because of that gap. The module also works as a translator for people who do not understand sign language, allowing communication in the natural way of speaking. The proposed module is an interactive application developed using Python and its advanced libraries. It uses the system's built-in camera to capture images, analyses those images to predict the meaning of each gesture, and provides the output as text on screen and as speech through the system's speaker, which makes the module very cost-effective. The module recognizes one-handed ASL alphabet gestures (A-Z) with highly consistent, fairly high precision and accuracy.
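The capture-predict-output pipeline described above can be sketched in Python. The classifier below is a stand-in stub, and the camera and text-to-speech stages (e.g. OpenCV, pyttsx3) are only noted in comments; none of this is the paper's actual model.

```python
# Minimal sketch of the abstract's capture -> predict -> output pipeline.
# The classifier is a hypothetical stub; the paper's trained model and its
# camera/speech libraries are assumptions, not reproduced here.
import string

def predict_letter(frame):
    """Stub classifier: maps a frame (2D list of pixel values) to a letter.
    A real module would run a trained model on the camera image."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    # Toy rule: use the mean intensity to pick one of the 26 letters.
    return string.ascii_uppercase[int(mean) % 26]

def gesture_to_text(frames):
    """Run the classifier on each captured frame and join letters into text."""
    return "".join(predict_letter(f) for f in frames)

# In the described module the text would also be spoken through the system
# speaker (e.g. via a text-to-speech engine); here we only print it.
frames = [[[0, 2], [2, 0]], [[4, 4], [4, 4]]]  # stand-in "camera" frames
text = gesture_to_text(frames)
print(text)
```

A real deployment would replace the stand-in frames with webcam captures and the stub with the trained recognizer; the control flow stays the same.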

Author(s):  
Krutika S. Kale ◽  
Milind B. Waghmare

Speech impairment limits a person's capacity to speak and communicate with others, forcing them to adopt other communication methods such as sign language, yet sign language is not widely understood outside the deaf community. To address this problem, we developed a hand gesture detection tool that can track both dynamic and static hand motions with ease. Gesture recognition aims to translate sign language into voice or text for individuals who have only a rudimentary comprehension of it, which is a tremendous help in communication between deaf-mute and hearing people. We describe the design and implementation of an American Sign Language (ASL) fingerspelling translator based on spatial feature identification using a convolutional neural network.
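The spatial-feature idea behind such a CNN can be illustrated with a single 2D convolution pass in plain NumPy. This is a minimal sketch: the kernel values and the synthetic image are assumptions, not the paper's trained weights.

```python
# One convolutional layer's forward pass: slide a kernel over the image and
# record local responses. A CNN stacks many such (trained) layers to build
# spatial features for classifying fingerspelled letters.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel responds to boundaries such as finger edges.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

image = np.zeros((5, 5))
image[:, 2:] = 1.0      # synthetic image: left half dark, right half bright
features = conv2d(image, edge_kernel)
print(features)
```

The strong negative responses mark the dark-to-bright transition; a trained network learns many such kernels rather than hand-picking them.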


2012 ◽  
Vol 241-244 ◽  
pp. 3059-3062
Author(s):  
Ling Hua Li ◽  
Shou Fang Mi

This paper describes research on the orthogonal three-direction chain code (3OT) as applied to corner detection in hand gestures. The research is discussed from two aspects: the main idea of 3OT, and corner-detection experiments using 3OT on hand gesture images of the 26 letters of the American Sign Language alphabet. Experimental results show that the method performs well, with a high rate of correctly detected corners and very few false corners.
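The corner-detection idea can be illustrated with the classic 8-direction Freeman chain code, a simplification: the paper's 3OT code instead re-expresses direction changes with only three symbols, but both approaches flag corners where the boundary's direction changes. The square contour below is a made-up example, not data from the paper.

```python
# Simplified chain-code corner detection (Freeman 8-direction code).
# Freeman directions: code index -> (dx, dy) step along the boundary.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    """Encode a closed boundary (list of (x, y) points) as Freeman codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return codes

def corners(points):
    """A point is a corner candidate where the incoming and outgoing
    chain codes differ, i.e. the boundary changes direction there."""
    codes = chain_code(points)
    incoming = codes[-1:] + codes[:-1]   # code of the segment arriving at p
    return [p for p, a, b in zip(points, incoming, codes) if a != b]

# 3x3 axis-aligned square boundary, traversed counter-clockwise.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(corners(square))
```

On this contour only the four true corners of the square are reported; 3OT's three-symbol relative coding makes the same direction changes cheaper to store and scan.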


The growth of technology has influenced development in various fields and has helped people achieve their dreams over the past years. One such field is aiding hearing- and speech-impaired people. The barrier between these individuals and others can be reduced by using current technology to build an environment in which both groups communicate easily with one another. The ASL Interpreter aims to facilitate communication between hearing and speech-impaired individuals. This project focuses on developing software that converts American Sign Language to communicative English and vice versa. This is accomplished via image processing: a set of operations performed on a picture to obtain an improved picture or to extract useful data from it. Image processing in this project is done using MATLAB, software by MathWorks, programmed to capture a live image of the hand gesture. Captured gestures are highlighted by being distinctively colored in contrast with a black background. The contrasted hand gesture is stored in the database as a binary equivalent of the location of each pixel, and the interpreter links that binary value to its equivalent translation held in the database, which is integrated into the main image-processing interface. The Image Processing Toolbox, an inbuilt toolkit provided by MATLAB, is used in the development of the software: histogram equivalents of the images are stored in the database, and each extracted image is converted to a histogram using the 'imhist()' function and compared with them. The concluding phase of the project, translation of speech to sign language, is designed by matching the letter equivalent to the hand gesture in the database and displaying the result as images. The software uses a webcam to capture the hand gestures made by the user. This venture aims to ease the process of learning gesture-based communication and supports hearing-impaired people in conversing without trouble.
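The histogram-matching step described above can be sketched in Python rather than MATLAB: build an intensity histogram for each stored gesture (the analogue of 'imhist()') and classify a new image by the closest histogram. The two-letter "database" below is an invented example, not the project's actual data.

```python
# Classify a gesture image by nearest intensity histogram.
def histogram(image, bins=4, max_val=255):
    """Count pixels per intensity bin, like MATLAB's imhist()."""
    counts = [0] * bins
    for row in image:
        for p in row:
            counts[min(p * bins // (max_val + 1), bins - 1)] += 1
    return counts

def closest_gesture(image, database):
    """Return the database letter whose stored histogram best matches the
    image's histogram (smallest sum of absolute bin differences)."""
    h = histogram(image)
    return min(database,
               key=lambda k: sum(abs(a - b) for a, b in zip(h, database[k])))

database = {
    "A": histogram([[0, 0], [0, 255]]),      # mostly dark gesture image
    "B": histogram([[255, 255], [255, 0]]),  # mostly bright gesture image
}
print(closest_gesture([[10, 5], [0, 250]], database))
```

Real gesture images would of course need far more bins and pixels, but the comparison logic, histogram per database entry plus a distance metric, is the same.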


Author(s):  
JanFizza Bukhari ◽  
Maryam Rehman ◽  
Saman Ishtiaq Malik ◽  
Awais M. Kamboh ◽  
Ahmad Salman
