Transformation Invariant Real-time Recognition of Indian Sign Language using Feature Fusion

Author(s):  
Pradip Ramanbhai Patel ◽  
Narendra Patel

Sign Language Recognition (SLR) is emerging as an active area of research in the field of machine learning. An SLR system recognizes the gestures of a sign language and converts them into text or voice, thus making communication possible between deaf people and hearing people. Acceptable performance of such a system demands invariance of the output with respect to certain transformations of the input. In this paper, we introduce a real-time hand gesture recognition system for Indian Sign Language (ISL). To obtain very high recognition accuracy, we propose a hybrid feature vector that combines shape-oriented features, such as Fourier Descriptors, with region-oriented features, such as Hu Moments and Zernike Moments. A Support Vector Machine (SVM) classifier is trained using the feature vectors of the images in the training dataset. Experiments show that the proposed hybrid feature vector enhances the performance of the system by compactly encoding invariance to transformations such as scaling, translation, and rotation. Being transformation invariant, the system is easy to use and achieves a recognition rate of 95.79%.
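As an illustrative sketch of the region-oriented half of such a hybrid feature vector, the seven Hu moments can be computed directly from normalized central image moments in NumPy. The L-shaped blob below is a hypothetical stand-in for a segmented hand silhouette; the Fourier-descriptor and Zernike-moment components of the fusion are omitted here.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a 2-D binary/grayscale array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                       # central moment (translation invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):                      # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# Hypothetical binary "hand" blob, plus a translated copy and a 2x-scaled copy.
shape = np.zeros((64, 64)); shape[10:40, 10:20] = 1; shape[30:40, 20:45] = 1
shifted = np.roll(np.roll(shape, 12, axis=0), 9, axis=1)   # pure translation
scaled = np.kron(shape, np.ones((2, 2)))                   # 2x pixel replication
h_orig, h_shift, h_scale = hu_moments(shape), hu_moments(shifted), hu_moments(scaled)
```

Translation leaves the central moments exactly unchanged, and the `eta` normalization makes them scale invariant up to pixel discretization, which is the invariance property the hybrid feature vector relies on; in the fused vector these seven values would simply be concatenated with the Fourier and Zernike descriptors.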

Author(s):  
Karishma Dixit ◽  
Anand Singh Jalal

Sign language is the essential communication method for deaf and mute people. In this paper, the authors present a vision-based approach that efficiently recognizes signs of Indian Sign Language (ISL) and translates the accurate meaning of the recognized signs. A new feature vector is computed by fusing Hu invariant moments with a structural shape descriptor to recognize each sign. A multi-class Support Vector Machine (MSVM) is utilized for training and classifying the ISL signs. The performance of the algorithm is illustrated by simulations carried out on a dataset of 720 images. Experimental results demonstrate that the proposed approach successfully recognizes hand gestures with a 96% recognition rate.


2020 ◽  
pp. 1-14
Author(s):  
Qiuhong Tian ◽  
Jiaxin Bao ◽  
Huimin Yang ◽  
Yingrou Chen ◽  
Qiaoli Zhuang

BACKGROUND: For a traditional vision-based static sign language recognition (SLR) system, arm segmentation is a major factor restricting the accuracy of SLR.
OBJECTIVE: To achieve accurate arm segmentation for different bent arm shapes, we designed a segmentation method for a static SLR system based on image processing and combined it with morphological reconstruction.
METHODS: First, skin segmentation was performed using YCbCr color space to extract the skin-like region from a complex background. Then, the area operator and the location of the mass center were used to remove skin-like regions and obtain the valid hand-arm region. Subsequently, the transverse distance was calculated to distinguish different bent arm shapes. The proposed segmentation method then extracted the hand region from different types of hand-arm images. Finally, the geometric features of the spatial domain were extracted and the sign language image was identified using a support vector machine (SVM) model. Experiments were conducted to determine the feasibility of the method and compare its performance with that of neural network and Euclidean distance matching methods.
RESULTS: The results demonstrate that the proposed method can effectively segment skin-like regions from complex backgrounds as well as different bent arm shapes, thereby improving the recognition rate of the SLR system.
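The skin-segmentation step can be sketched as a per-pixel threshold in YCbCr space. The conversion below uses the standard BT.601 matrix; the Cb/Cr bounds are common illustrative values from the skin-detection literature, not the thresholds used in this paper, and the image is a synthetic frame rather than real data.

```python
import numpy as np

def skin_mask(rgb):
    """Threshold an RGB image (H, W, 3, uint8) in YCbCr space.
    Cb/Cr bounds are illustrative skin-tone ranges, not the paper's values."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b       # BT.601 chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Synthetic frame: a skin-toned 20x20 patch on a blue background.
img = np.zeros((40, 40, 3), dtype=np.uint8); img[...] = (20, 40, 200)
img[10:30, 10:30] = (220, 170, 140)                        # approximate skin tone
mask = skin_mask(img)
```

On a real frame the resulting mask would still contain skin-like false positives, which is why the paper follows this step with the area operator, mass-center filtering, and transverse-distance analysis.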


2021 ◽  
Vol 9 (1) ◽  
pp. 182-203
Author(s):  
Muthu Mariappan H ◽  
Dr Gomathi V

Dynamic hand gesture recognition is a challenging task in Human-Computer Interaction (HCI) and Computer Vision. Potential application areas of gesture recognition include sign language translation, video gaming, video surveillance, robotics, and gesture-controlled home appliances. In the proposed research, gesture recognition is applied to recognize sign language words from real-time videos. Classifying actions from video sequences requires both spatial and temporal features. The proposed system handles the former with a Convolutional Neural Network (CNN), the core of many computer vision solutions, and the latter with a Recurrent Neural Network (RNN), which is more efficient at handling sequences of movements. Thus, the real-time Indian Sign Language (ISL) recognition system is developed using a hybrid CNN-RNN architecture. The system is trained on the proposed CasTalk-ISL dataset. The ultimate purpose of the presented research is to deploy a real-time sign language translator that removes the hurdles in communication between hearing-impaired people and hearing people. The developed system achieves 95.99% top-1 accuracy and 99.46% top-3 accuracy on the test dataset. The obtained results outperform existing approaches that use various deep models on different datasets.
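The shape of such a hybrid CNN-RNN pipeline can be sketched in a few lines: a per-frame feature extractor (a single random linear+tanh layer stands in for the convolutional stack here), a vanilla recurrence over frames, and a softmax over classes. All weights, sizes, and the 16-frame random clip are assumptions for illustration; nothing is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT, D_HID, N_CLASSES = 32 * 32, 64, 48, 10    # hypothetical sizes

W_cnn = rng.standard_normal((D_FEAT, D_IN)) * 0.01      # stand-in for conv layers
W_xh = rng.standard_normal((D_HID, D_FEAT)) * 0.01      # input-to-hidden
W_hh = rng.standard_normal((D_HID, D_HID)) * 0.01       # hidden-to-hidden
W_out = rng.standard_normal((N_CLASSES, D_HID)) * 0.01  # hidden-to-logits

def classify_clip(frames):
    """frames: (T, 32, 32) grayscale clip -> class probability vector.
    The CNN part extracts spatial features per frame; the RNN part
    accumulates them over time into a single hidden state."""
    h = np.zeros(D_HID)
    for frame in frames:
        f = np.tanh(W_cnn @ frame.ravel())   # spatial features of one frame
        h = np.tanh(W_xh @ f + W_hh @ h)     # temporal state update
    logits = W_out @ h
    p = np.exp(logits - logits.max())        # numerically stable softmax
    return p / p.sum()

probs = classify_clip(rng.standard_normal((16, 32, 32)))
```

In the actual system the CNN would be a deep convolutional stack and the recurrence an LSTM/GRU-style cell trained end to end on CasTalk-ISL; this sketch only shows how the two halves divide spatial and temporal modelling.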


The aim is to present a real-time system for hand gesture recognition based on the detection of meaningful shape-based features such as orientation, center of mass, the status of each finger (raised or folded), and the fingers' respective locations in the image. A hand gesture recognition system has many real-time applications as a natural, innovative, and user-friendly way of interacting with the computer. Gesture recognition has a wide range of applications, including human-machine interaction, sign language, game technology, and robotics. More specifically, hand gestures can serve as a signal or input to the computer, especially for disabled people. As an interesting part of human-computer interaction, hand gesture recognition is needed for real-life applications, but the complex structure of the human hand poses many challenges for tracking and feature extraction. Making use of computer vision algorithms and gesture recognition techniques results in low-cost interface devices that use hand gestures to interact with objects in a virtual environment. An SVM (support vector machine) together with an efficient feature extraction technique is presented for hand gesture recognition. This method addresses the dynamic aspects of a hand gesture recognition system.
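Two of the shape features named above, center of mass and orientation, fall straight out of first- and second-order image moments of a binary hand mask. The horizontal bar below is a hypothetical mask; finger status (raised/folded) would need an additional step, such as profiling the mask along the principal axis, which is not shown.

```python
import numpy as np

def mass_center_and_orientation(mask):
    """Centroid and principal-axis orientation (radians) of a binary mask,
    computed from first- and second-order image moments."""
    y, x = np.nonzero(mask)
    xc, yc = x.mean(), y.mean()                 # center of mass
    mu20 = ((x - xc) ** 2).mean()               # second-order central moments
    mu02 = ((y - yc) ** 2).mean()
    mu11 = ((x - xc) * (y - yc)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (xc, yc), theta

# Hypothetical mask: a horizontal bar, so the orientation should be ~0 rad.
mask = np.zeros((50, 50), dtype=bool); mask[22:28, 5:45] = True
(cx, cy), theta = mass_center_and_orientation(mask)
```

Because both quantities are derived from moments of the segmented region only, they are cheap enough to recompute every frame, which is what makes them attractive for a real-time system.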


2019 ◽  
Vol 8 (2) ◽  
pp. 6326-6333

Indian Sign Language is the communication language among deaf and mute people in India. Hand gestures are the most broadly used among the various forms of gesture. Real-time classification of different signs is a challenging task due to variation in the shape and position of the hands, as well as variation in the background from person to person. There is little availability of datasets for Indian signs, which poses a problem for researchers. To address this problem, we designed our own dataset of 1000 signs for the sign digits 1 to 10, collected from 100 different people under varying background conditions by changing colour and light illumination. The dataset comprises signs from both left-handed and right-handed people. Feature extraction methodologies are studied and applied to sign language recognition. This paper focuses on a deep learning CNN (convolutional neural network) approach, using the pretrained AlexNet model to compute the feature vector. A multi-class SVM (Support Vector Machine) is applied to classify Indian Sign Language in real-time surroundings. The paper also presents a comparative analysis of the deep learning feature extraction method against histogram of oriented gradients, bag of features, and speeded-up robust features. The experimental results show that deep learning feature extraction using the pretrained AlexNet model gives an accuracy of around 85% and above for the recognition of signed digits, using a 60% training set and a 40% testing set.
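The classification stage of such a pipeline can be sketched end to end. Here the AlexNet features are replaced by synthetic Gaussian clusters (one per sign digit), and a one-vs-rest least-squares linear classifier stands in for the multi-class SVM; the 60/40 split matches the paper, but every size and number below is a placeholder, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CLASSES, N_PER, D = 10, 100, 64           # 10 sign digits; hypothetical sizes

# Synthetic stand-ins for pretrained-CNN features: one cluster per sign.
centers = rng.standard_normal((N_CLASSES, D)) * 3
X = np.vstack([c + rng.standard_normal((N_PER, D)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), N_PER)

# 60% training / 40% testing split, as in the paper.
idx = rng.permutation(len(y))
split = int(0.6 * len(y))
tr, te = idx[:split], idx[split:]

# One-vs-rest linear classifier fit by least squares (multi-SVM stand-in).
T = np.eye(N_CLASSES)[y[tr]]                # one-hot targets
W, *_ = np.linalg.lstsq(X[tr], T, rcond=None)
acc = float((np.argmax(X[te] @ W, axis=1) == y[te]).mean())
```

On real AlexNet features the separation between sign classes is far less clean than in this synthetic setup, which is where a margin-based SVM earns its keep over plain least squares.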


2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
MALHOTRA POOJA ◽  
K. MANIAR CHIRAG ◽  
V. SANKPAL NIKHIL ◽  
R. THAKKAR HARDIK ◽  
...

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but most hearing people do not know it, which creates a communication barrier. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures for Indian Sign Language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with the real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
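The core argument for the depth camera can be made concrete with a toy example: two gestures whose binary silhouettes are identical are indistinguishable to any silhouette-only feature, but a depth map separates them. Everything below is synthetic and illustrative, not the paper's data.

```python
import numpy as np

# Two "gestures" sharing the same two-finger silhouette (a '2'/'V'-style pair).
sil = np.zeros((32, 32))
sil[5:25, 10:14] = 1                          # finger 1
sil[5:25, 18:22] = 1                          # finger 2

# Gesture A: both fingers at the same depth.  Gesture B: one finger closer.
depth_a = sil * 0.5
depth_b = sil.copy()
depth_b[:, 10:14] *= 0.5
depth_b[:, 18:22] *= 0.8

silhouette_feature = lambda m: m.sum()        # any silhouette-only feature
rgb_feat_a, rgb_feat_b = silhouette_feature(sil), silhouette_feature(sil)

# A depth feature (spread of depth over the hand) tells the two apart.
depth_feat_a = depth_a[sil > 0].std()
depth_feat_b = depth_b[sil > 0].std()
```

This is the intuition behind feeding the CNN depth images rather than RGB: the extra channel carries exactly the within-silhouette variation that an RGB silhouette throws away.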

