Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera

Author(s):  
G. Ananth Rao ◽  
P.V.V. Kishore

<p>This work's objective is to bring sign language recognition closer to real-time implementation on mobile platforms. A video database of Indian sign language is created with a mobile front camera in selfie mode. The video is processed on a personal computer while constraining the computing power to that of a smartphone with 2 GB RAM. Pre-filtering, segmentation and feature extraction on the video frames create a sign language feature space. Minimum distance classification of the sign feature space converts signs to text or speech. An ASUS smartphone with a 5-megapixel front camera captures continuous sign videos of around 240 frames at a frame rate of 30 fps. The Sobel edge operator's power is enhanced with morphology and adaptive thresholding, giving a near-perfect segmentation of the hand and head portions. A word matching score (WMS) estimates the performance of the proposed method, with an average WMS of around 90.58%.</p>
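The segmentation stage described in the abstract (Sobel edges strengthened with morphology and adaptive thresholding) and the minimum distance classifier can be sketched roughly as below. This is a minimal sketch, not the paper's implementation: the block size, threshold offset, structuring element and prototype features are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_frame(gray, block=15, offset=0.02):
    """Rough hand/head segmentation: Sobel gradient magnitude,
    thresholded against a local mean (adaptive threshold), then
    solidified with morphological closing and hole filling."""
    g = gray.astype(float) / 255.0
    gx = ndimage.sobel(g, axis=1)          # horizontal Sobel response
    gy = ndimage.sobel(g, axis=0)          # vertical Sobel response
    mag = np.hypot(gx, gy)                 # edge magnitude
    local_mean = ndimage.uniform_filter(mag, size=block)
    mask = mag > (local_mean + offset)     # adaptive threshold
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    return ndimage.binary_fill_holes(mask)

def min_distance_classify(feature, prototypes):
    """Minimum distance classification: return the label whose
    prototype feature vector is nearest in Euclidean distance."""
    return min(prototypes,
               key=lambda k: np.linalg.norm(feature - prototypes[k]))
```

A sign video frame would be segmented with `segment_frame`, reduced to a feature vector, and matched against per-sign prototype vectors with `min_distance_classify`; the prototype dictionary here is a hypothetical stand-in for the paper's sign feature space.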



2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
MALHOTRA POOJA ◽  
K. MANIAR CHIRAG ◽  
V. SANKPAL NIKHIL ◽  
R. THAKKAR HARDIK ◽  
...

Communication is one of the basic requirements for living in the world. Deaf and mute people communicate through sign language, but hearing people have difficulty understanding it. To provide a medium of communication between hearing and differently abled people, a Sign Language Recognition (SLR) system is a solution. American Sign Language (ASL) has attracted many researchers' attention, but Indian Sign Language Recognition (ISLR) differs significantly from ASL in phonetics, grammar and hand movement, which makes designing a system for Indian Sign Language recognition a difficult task. An ISLR system uses an Indian Sign Language (ISL) dataset for recognition but suffers from problems of scaling, object orientation and the lack of an optimal feature set. In this paper, to address these issues, the Scale-Invariant Feature Transform (SIFT) is used as a descriptor. It extracts the features that train a Feed-Forward Back-Propagation Neural Network (FFBPNN), which is optimized using the Artificial Bee Colony (ABC) algorithm according to a fitness function. The dataset was collected for the alphabet by extracting frames from video, and for numbers it was created manually from deaf and mute students of the NGO "Sarthak". Simulation results show a significant improvement in accurately identifying alphabets and numbers, with an average accuracy of 99.43%.
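As a rough illustration of the FFBPNN component, the sketch below trains a one-hidden-layer feed-forward network with plain batch backpropagation on a toy XOR task. The layer sizes, sigmoid activations, learning rate and seed are illustrative assumptions; the ABC weight optimization and the SIFT feature pipeline described in the paper are not reproduced here.

```python
import numpy as np

def train_ffbpnn(X, y, hidden=8, lr=0.5, epochs=3000, seed=0):
    """One-hidden-layer feed-forward net trained with batch
    backpropagation; returns both weight matrices and the MSE history."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, y.shape[1]))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        h = sig(X @ W1)                      # hidden-layer activations
        out = sig(h @ W2)                    # output-layer activations
        losses.append(float(np.mean((out - y) ** 2)))
        d2 = (out - y) * out * (1 - out)     # output delta (sigmoid derivative)
        d1 = (d2 @ W2.T) * h * (1 - h)       # hidden delta, backpropagated
        W2 -= lr * h.T @ d2                  # gradient descent step
        W1 -= lr * X.T @ d1
    return W1, W2, losses
```

In the paper's setting, `X` would hold SIFT-derived feature vectors and `y` one-hot sign labels, with ABC searching the weight space according to a fitness function instead of (or alongside) the gradient steps shown here.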

