A Vision-Based Approach for Indian Sign Language Recognition

Author(s):  
Karishma Dixit ◽  
Anand Singh Jalal

Sign language is an essential communication method for deaf and hard-of-hearing people. In this paper, the authors present a vision-based approach that efficiently recognizes signs of Indian Sign Language (ISL) and translates the recognized signs into their accurate meaning. A new feature vector is computed by fusing Hu invariant moments with a structural shape descriptor to recognize signs. A multi-class Support Vector Machine (MSVM) is used to train on and classify the ISL signs. The performance of the algorithm is illustrated by simulations carried out on a dataset of 720 images. Experimental results demonstrate that the proposed approach recognizes hand gestures with a 96% recognition rate.
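The Hu invariant moments that form part of this feature vector can be sketched in plain NumPy. The following is an illustrative implementation of the standard seven Hu invariants (not the authors' exact feature pipeline), checked for rotation invariance on a synthetic silhouette:

```python
import numpy as np

def hu_moments(img):
    """Compute the seven Hu invariant moments of a 2D intensity/binary image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    def mu(p, q):   # central moment mu_pq
        return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()
    def eta(p, q):  # normalized central moment eta_pq
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# Synthetic "hand" silhouette: the invariants match under a 90-degree rotation.
img = np.zeros((64, 64))
img[10:50, 20:40] = 1.0
img[15:30, 35:55] = 1.0
h_orig = hu_moments(img)
h_rot = hu_moments(np.rot90(img))
```

In a full system, this 7-element vector would be concatenated with the structural shape descriptor before being fed to the MSVM.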

Author(s):  
Pradip Ramanbhai Patel ◽  
Narendra Patel

Sign Language Recognition (SLR) is emerging as an active area of research in the field of machine learning. An SLR system recognizes the gestures of a sign language and converts them into text or voice, making communication possible between deaf and hearing people. Acceptable performance of such a system demands invariance of the output with respect to certain transformations of the input. In this paper, we introduce a real-time hand gesture recognition system for Indian Sign Language (ISL). To obtain very high recognition accuracy, we propose a hybrid feature vector that combines shape-oriented features, such as Fourier Descriptors, with region-oriented features, such as Hu Moments and Zernike Moments. A Support Vector Machine (SVM) classifier is trained on the feature vectors of the training images. Experiments show that the proposed hybrid feature vector enhances the performance of the system by compactly capturing invariance with respect to transformations such as scaling, translation, and rotation. Being invariant to these transformations, the system is easy to use and achieves a recognition rate of 95.79%.
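A minimal sketch of how the shape-oriented Fourier Descriptors can achieve the invariances described, using one common normalization scheme (drop the DC term for translation invariance, take magnitudes for rotation invariance, divide by the first harmonic for scale invariance); the hybrid vector would then concatenate these with Hu and Zernike moments:

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Translation-, scale- and rotation-invariant Fourier descriptors of a
    closed contour given as an (N, 2) array of (x, y) boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                 # drop DC term  -> translation invariance
    mags = np.abs(F)         # magnitudes    -> rotation invariance
    mags /= mags[1]          # normalize     -> scale invariance
    return mags[2:2 + n]

# Synthetic contour: points sampled on an ellipse.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ellipse = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
fd = fourier_descriptors(ellipse)
fd_scaled = fourier_descriptors(2.5 * ellipse + 7.0)  # scaled + translated copy
```

The descriptors of the original and the transformed contour coincide, which is the invariance property the hybrid feature vector relies on.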


2020 ◽  
pp. 1-14
Author(s):  
Qiuhong Tian ◽  
Jiaxin Bao ◽  
Huimin Yang ◽  
Yingrou Chen ◽  
Qiaoli Zhuang

BACKGROUND: For a traditional vision-based static sign language recognition (SLR) system, arm segmentation is a major factor restricting the accuracy of SLR. OBJECTIVE: To achieve accurate arm segmentation for different bent arm shapes, we designed a segmentation method for a static SLR system based on image processing and combined it with morphological reconstruction. METHODS: First, skin segmentation was performed using YCbCr color space to extract the skin-like region from a complex background. Then, the area operator and the location of the mass center were used to remove skin-like regions and obtain the valid hand-arm region. Subsequently, the transverse distance was calculated to distinguish different bent arm shapes. The proposed segmentation method then extracted the hand region from different types of hand-arm images. Finally, the geometric features of the spatial domain were extracted and the sign language image was identified using a support vector machine (SVM) model. Experiments were conducted to determine the feasibility of the method and compare its performance with that of neural network and Euclidean distance matching methods. RESULTS: The results demonstrate that the proposed method can effectively segment skin-like regions from complex backgrounds as well as different bent arm shapes, thereby improving the recognition rate of the SLR system.
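The first step above, skin segmentation in YCbCr color space, can be sketched as a fixed-threshold mask on the Cb and Cr channels. The thresholds below (Cb in [77, 127], Cr in [133, 173]) are common literature values, not necessarily the exact ones used by the authors:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask from fixed Cb/Cr thresholds in YCbCr space.
    rgb: (H, W, 3) array of RGB values in [0, 255]."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> Cb/Cr conversion (Y is not needed for the mask)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 150, 120)  # a skin-like tone
img[1, 1] = (0, 0, 255)      # blue background pixel
mask = skin_mask(img)
```

The subsequent steps (area filtering, mass-center tests, transverse-distance measurement) would then operate on the connected components of this mask.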


2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
MALHOTRA POOJA ◽  
K. MANIAR CHIRAG ◽  
V. SANKPAL NIKHIL ◽  
R. THAKKAR HARDIK ◽  
...  

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but because most hearing people do not know it, a communication barrier arises. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures 3D (depth) images, which makes classification more accurate. Result: The Kinect camera produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language; our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ Z2 board.
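The 80/20 train/test split described above can be sketched as a shuffled index split over the 46,339 images (an illustrative sketch, not the authors' exact code):

```python
import numpy as np

def train_test_split(n_samples, train_frac=0.8, seed=0):
    """Shuffle sample indices and split them into train/test partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(train_frac * n_samples)
    return idx[:n_train], idx[n_train:]

# 46,339 RGB (or depth) images, 80% train / 20% test
train_idx, test_idx = train_test_split(46339)
```

Each index would then select the corresponding RGB and depth image pair, so the CNN trained on depth images sees exactly the same partition as the RGB baselines.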

