Sign Language Recognition System for Disabled People

The aim is to present a real-time system for hand gesture recognition based on the detection of meaningful shape-based features such as orientation, center of mass, the status of fingers (raised or folded), and their respective locations in the image. Hand gesture recognition enables a natural, innovative, and user-friendly way of interacting with the computer, and has a wide range of applications including human-machine interaction, sign language, game technology, and robotics. More specifically, hand gestures can serve as a signal or input means to the computer, which is especially valuable for disabled persons. As an important part of human-computer interaction, hand gesture recognition is needed for real-life applications, but the complex structure of the human hand poses many challenges for tracking and feature extraction. Combining computer vision algorithms with gesture recognition techniques makes it possible to develop low-cost interface devices that use hand gestures for interacting with objects in a virtual environment. A support vector machine (SVM) classifier together with an efficient feature extraction technique is presented for hand gesture recognition. The method also addresses the dynamic aspects of the hand gesture recognition system.
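The SVM classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors (finger count, hand orientation in degrees, normalized center-of-mass height) are synthetic stand-ins for the shape-based features the abstract names, and the gesture labels are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_samples(fingers, angle, n=30):
    """Generate noisy (finger count, orientation, center-of-mass) features for one gesture class."""
    return np.column_stack([
        np.full(n, fingers) + rng.normal(0, 0.1, n),   # raised-finger count
        np.full(n, angle) + rng.normal(0, 5.0, n),     # hand orientation (degrees)
        rng.uniform(0.4, 0.6, n),                      # normalized center-of-mass height
    ])

# three hypothetical gesture classes
X = np.vstack([make_samples(5, 90), make_samples(2, 45), make_samples(0, 0)])
y = np.repeat(["open_palm", "victory", "fist"], 30)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[5, 88, 0.5]])[0])   # features of a five-finger, upright hand
```

In a real pipeline the feature vector would be computed per frame from the segmented hand region before being passed to the trained classifier.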

Author(s):  
Srinivas K ◽  
Manoj Kumar Rajagopal

To recognize different hand gestures and achieve efficient classification to understand static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), the dynamic time warping framework, latent regression forests, support vector machines, and surface electromyography. Hand movements made by both single and double hands are captured by gesture capture devices under proper illumination conditions. The captured gestures are processed for occlusions and close finger interactions in order to identify the correct gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms such as HMMs to detect only the intended gesture. Classified gestures are then compared for effectiveness against training and test standard datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
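Of the algorithms listed, dynamic time warping (DTW) is the most compact to illustrate: it aligns two gesture trajectories that were performed at different speeds. The sketch below uses scalar sequences for clarity; real systems align multi-dimensional joint or hand trajectories.

```python
def dtw_distance(a, b):
    """Cost of the best monotonic alignment between sequences a and b."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

slow = [0, 0, 1, 1, 2, 2, 3, 3]     # the same gesture performed slowly
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))     # → 0.0: identical shape despite different speeds
```

This speed invariance is why DTW is a common baseline for matching dynamic gestures against stored templates.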


Author(s):  
Mohit Panwar ◽  
Rohit Pandey ◽  
Rohan Singla ◽  
Kavita Saxena

Every day we see many people facing disabilities such as deafness and muteness. There are not many technologies that help them interact with each other, and they face difficulty in interacting with others. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans sign gesture acquisition through text/speech generation. Sign gestures can be classified as static and dynamic. Although static gesture recognition is simpler than dynamic gesture recognition, both recognition systems are important to the human community. The steps of American Sign Language (ASL) recognition are described in this survey. Image classification and machine learning can be used to help computers recognize sign language, which could then be interpreted by other people. Earlier glove-based methods required the person to wear a hardware glove while the hand movements were captured, which is uncomfortable for practical use. Here we use a vision-based method. Convolutional neural networks and a mobile SSD model have been employed in this paper to recognize sign language gestures. Preprocessing was performed on the images, which then served as the cleaned input, and TensorFlow was used to train on the images. A system will be developed which serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural network (CNN), classification, real time, TensorFlow
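The core operation of the convolutional networks mentioned above is 2-D convolution. As an illustrative sketch (not the paper's TensorFlow model), the NumPy code below applies a single hand-crafted vertical-edge kernel to a tiny binary "hand mask"; a trained CNN stacks many such kernels whose weights are learned from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation (what CNN layers actually compute)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

mask = np.zeros((6, 6))
mask[:, 3:] = 1.0                        # right half "hand", left half background
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)  # responds to vertical edges
response = conv2d(mask, sobel_x)
print(response)                          # strong response where the hand boundary lies
```

Frameworks such as TensorFlow perform the same sliding-window computation, but over many channels and with learned kernel weights.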


Author(s):  
Divya K V ◽  
Harish E ◽  
Nikhil Jain D ◽  
Nirdesh Reddy B

Sign language recognition (SLR) aims to interpret sign languages automatically by computer in order to help the deaf communicate with hearing society conveniently. Our aim is to design a system to help those who train the hearing impaired to communicate with the rest of the world using sign language or hand gesture recognition techniques. In this system, feature detection and feature extraction of hand gestures are done using image processing, and classification is performed with Support Vector Machine (SVM), K-Neighbors Classifier, Logistic Regression, MLP Classifier, Naive Bayes, and Random Forest Classifier algorithms.
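A comparison across the six listed classifiers can be sketched with scikit-learn as below. The dataset here is synthetic (three well-separated clusters standing in for extracted gesture features) and the hyperparameters are illustrative defaults, not the authors' configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# three well-separated gesture classes in a 4-D feature space
X = np.vstack([rng.normal(c, 0.3, (40, 4)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 40)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=0, stratify=y)

models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(3),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(random_state=0),
}
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

On real gesture features the ranking between these models would depend on the feature extraction stage; this loop only shows the shared fit/score interface that makes such a comparison straightforward.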


2013 ◽  
Vol 4 (1) ◽  
pp. 1
Author(s):  
Ednaldo Brigante Pizzolato ◽  
Mauro dos Santos Anjo ◽  
Sebastian Feuerstack

Sign languages are the natural way Deaf people communicate with others. They have their own formal semantic definitions and syntactic rules and are composed of a large set of gestures involving the hands and head. Automatic recognition of sign languages (ARSL) tries to recognize the signs and translate them into a written language. ARSL is a challenging task as it involves background segmentation; hand and head posture modeling, recognition, and tracking; temporal analysis; and syntactic and semantic interpretation. Moreover, when real-time requirements are considered, the task becomes even more challenging. In this paper, we present a study of the real-time requirements of automatic sign language recognition for small sets of static and dynamic gestures of the Brazilian Sign Language (LIBRAS). For static gesture recognition, we implemented a system that works on small subsets of the alphabet, such as A, E, I, O, U and B, C, F, L, V, reaching very high recognition rates. For dynamic gesture recognition, we tested our system on a small set of LIBRAS words and collected the execution times. The aim was to gather knowledge about the execution time of all the recognition processes (such as segmentation, analysis, and recognition itself) to evaluate the feasibility of building a real-time system to recognize small sets of both static and dynamic gestures. Our findings indicate that the bottleneck of our current architecture is the recognition phase.
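Per-stage timing of the kind this study collects can be sketched with `time.perf_counter`. The stage bodies below are trivial placeholders, not the authors' segmentation, analysis, or recognition implementations; the point is only the measurement pattern used to locate a bottleneck.

```python
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

def segment(frame):  return [p for p in frame if p > 128]  # placeholder stage
def analyse(mask):   return sum(mask) / max(len(mask), 1)  # placeholder stage
def recognise(feat): return "A" if feat > 150 else "B"     # placeholder stage

frame = list(range(256))
times = {}
mask, times["segmentation"] = timed(segment, frame)
feat, times["analysis"] = timed(analyse, mask)
label, times["recognition"] = timed(recognise, feat)

bottleneck = max(times, key=times.get)
print(label, {k: round(v * 1e6, 1) for k, v in times.items()}, bottleneck)
```

Summing these per-stage times over a frame gives the total latency budget a real-time system must keep below the camera's frame period.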


Author(s):  
Julakanti Likhitha Reddy ◽  
Bhavya Mallela ◽  
Lakshmi Lavanya Bannaravuri ◽  
Kotha Mohan Krishna

Interacting with the world using expressions or body movements is comparatively more effective than speech alone. Gesture recognition can be a better way to convey meaningful information. Communication through gestures has been widely used by humans to express their thoughts and feelings. Gestures can be performed with any body part, such as the head, face, hands, and arms, but the hand is used most predominantly. Hand gesture recognition has been widely accepted for numerous applications such as human-computer interaction, robotics, and sign language recognition. This paper focuses on a bare-hand gesture recognition system, proposing a database-driven scheme based on a skin color model and a thresholding approach, together with effective template matching, which can be used for human robotics and similar applications. Initially, the hand region is segmented by applying a skin color model in the YCbCr color space, where Y represents luminance and Cb and Cr represent chrominance. In the next stage, Otsu thresholding is applied to separate foreground and background. Finally, a template-based matching technique is developed using Principal Component Analysis (PCA), k-nearest neighbour (KNN), and Support Vector Machine (SVM) for recognition. KNN is used for statistical estimation and pattern recognition, while SVM can be used for classification or regression problems.
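The Otsu thresholding stage of the pipeline above can be written directly in NumPy, as a sketch of the search OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)` performs: pick the intensity that maximizes the between-class variance of a bimodal histogram. The test image below is synthetic.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity t maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0         # background mean
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1    # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal image: dark background around 50, brighter "skin" region around 200
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(50, 10, 5000),
                              rng.normal(200, 10, 5000)]), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)   # falls between the two modes
```

In the described system this threshold would be applied to the skin-probability or grayscale image produced by the YCbCr segmentation step.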


Author(s):  
Ananya Choudhury ◽  
Anjan Kumar Talukdar ◽  
Kandarpa Kumar Sarma

In the present scenario, vision-based hand gesture recognition has become a rapidly emerging research area for the purpose of human-computer interaction. Such recognition systems are deployed to serve as a replacement for commonly used human-machine interface devices such as the keyboard, mouse, and joystick in real-world situations. The major challenges faced by a vision-based hand gesture recognition system include recognition against complex backgrounds, against dynamic backgrounds, in the presence of multiple gestures in the background, under variable lighting conditions, and under different viewpoints. In the context of sign language recognition, which is a highly demanding application of hand gesture recognition systems, coarticulation detection is a challenging task. The main objective of this chapter is to provide a general overview of vision-based hand gesture recognition systems as well as to bring to light some of the research works that have been done in this field.
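One of the challenges listed above, segmentation against a dynamic background, is often attacked with a running-average background model. The NumPy sketch below maintains such a model and flags pixels that deviate from it; the frames, adaptation rate, and threshold are illustrative, not drawn from the chapter.

```python
import numpy as np

class RunningBackground:
    def __init__(self, first_frame, alpha=0.05, thresh=30):
        self.bg = first_frame.astype(float)
        self.alpha = alpha      # adaptation rate of the background model
        self.thresh = thresh    # intensity difference counted as foreground

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        diff = np.abs(frame.astype(float) - self.bg)
        mask = diff > self.thresh
        # adapt the model only where the scene still looks like background
        self.bg = np.where(mask, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return mask

background = np.full((4, 4), 100, np.uint8)
model = RunningBackground(background)
frame = background.copy()
frame[1:3, 1:3] = 200                 # a "hand" enters the scene
mask = model.apply(frame)
print(mask.sum())                     # → 4 foreground pixels
```

Because the model keeps adapting where no foreground is detected, slow background changes (gradual lighting drift, for example) are absorbed instead of being flagged as gestures.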


2021 ◽  
Author(s):  
Saliya S Shaikh ◽  
Akram A Patel ◽  
Pravadha Deshmukh Pawar ◽  
Rubana P Shaikh

Much research has been done in the field of Human Computer Interaction (HCI). One such approach, Hand Gesture Recognition (HGR), provides a way to build HCI systems. Nowadays, the computer can act as an interpreter between humans. The proposed system recognizes, in real time, static hand gestures of the Indian Sign Language number system from zero to nine. In this paper we propose a simple and fast system for hand gesture recognition. Based on the proposed algorithm, the system can automatically convert the input hand gesture into text and audio. The system first captures an image of the hand gesture shown by the user with a simple webcam, and then recognizes the gesture using the proposed algorithm. Because recognition relies on simple logical conditions, the system is suitable for real-time applications. The proposed system is size invariant and implemented using OpenCV.
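The size invariance claimed above is commonly obtained by cropping the binary hand mask to its bounding box and resampling it to a fixed grid before matching. The NumPy sketch below shows that idea with nearest-neighbour resampling (OpenCV's `cv2.resize` would normally do this step); the masks are synthetic and the approach is a generic illustration, not this paper's algorithm.

```python
import numpy as np

def normalize_mask(mask, size=8):
    """Crop a binary mask to its bounding box and resample to size x size."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # nearest-neighbour index maps from the fixed grid back into the crop
    ry = (np.arange(size) * crop.shape[0] / size).astype(int)
    rx = (np.arange(size) * crop.shape[1] / size).astype(int)
    return crop[np.ix_(ry, rx)]

small = np.zeros((10, 10), int)
small[2:6, 3:7] = 1                    # a 4x4 "hand" in a small frame
large = np.zeros((40, 40), int)
large[5:25, 10:30] = 1                 # the same shape, five times larger
print(np.array_equal(normalize_mask(small), normalize_mask(large)))  # → True
```

After this normalization, the same simple logical conditions can be applied regardless of how close the hand is to the webcam.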

