A Novel Communication System For Deaf And Dumb People Using Gesture

2020 ◽ Vol 32 ◽ pp. 02003
Author(s): Pritesh Ambavane, Rahul Karjavkar, Hemant Pathare, Shubham Relekar, Bhavana Alte, ...

Human beings connect with one another and exchange thoughts and ideas, and speech is the most natural way to present those ideas. Some people, however, do not have the power of speech; the only way they can communicate with others is through sign language. Nowadays, technology has reduced this gap through systems that convert the sign language used by these people into speech. Sign language recognition (SLR) and gesture-based control are two major applications of hand gesture recognition technologies. On the controller side, the system converts sign language into text and then into speech with the help of text-to-speech and analog-to-digital conversion. Speech-impaired people throughout the world use sign language for communication.
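As a hedged illustration of the output stage described above, the sketch below maps an already-recognized sign label to audible speech. The pyttsx3 library is an illustrative assumption; the paper does not name its text-to-speech engine.

```python
# A minimal sketch of the sign-to-speech output stage, assuming the
# recognized gesture has already been mapped to a text label upstream.
# pyttsx3 is an illustrative library choice, not the paper's engine.
import pyttsx3

def speak_label(label: str) -> None:
    """Speak a recognized sign label aloud via the local TTS engine."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # speaking rate in words per minute
    engine.say(label)
    engine.runAndWait()

if __name__ == "__main__":
    speak_label("hello")  # e.g., the classifier recognized the sign for "hello"
```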

2019 ◽ Vol 8 (3) ◽ pp. 2128-2137

Nearly 15 million people around the world have difficulty speaking or communicating, and their only means of communication is sign language. Hand gestures are one of the methods used in sign language for non-verbal communication, most commonly by deaf and dumb people who have hearing or speech problems, to communicate among themselves or with others. Several sign language standards have been defined, such as ASL (American Sign Language) and IPSL (Indo-Pakistan Sign Language), which specify the meaning of each sign. ASL is the sign language most widely used by the deaf and dumb community. With knowledge of a standard sign language, deaf and dumb people can communicate among themselves, but they cannot communicate with the rest of the world because most people are unaware of the existence and usage of sign language. This method aims to remove the communication barrier between the disabled and the rest of the world by recognizing hand gestures, translating them, and converting them into speech.
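To make the recognition step concrete, here is a minimal sketch of a gesture-capture front end using MediaPipe Hands; the library choice and the idea of feeding landmarks to a downstream classifier are illustrative assumptions, not the paper's published pipeline.

```python
# A hedged sketch of a gesture-capture front end using MediaPipe Hands.
# The library choice is an assumption; the paper does not specify its
# tooling. The 21 returned landmarks can feed any sign classifier.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_landmarks(image_path: str):
    """Return a list of 21 (x, y, z) hand landmarks, or None if no hand."""
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None
    return [(lm.x, lm.y, lm.z)
            for lm in results.multi_hand_landmarks[0].landmark]
```

The landmark vectors extracted this way could then be matched against a standard such as ASL by any trained classifier.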


Author(s): Sukhendra Singh, G. N. Rathna, Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most other people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect can capture 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures for '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian Sign Language; our dataset contained 46,339 RGB images and 46,339 depth images, of which 80% were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ Z2 board.
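A minimal sketch of the kind of depth-image CNN evaluated above is shown below, assuming 64x64 single-channel depth maps and the 36 gesture classes described; the layer sizes are illustrative, since the abstract does not list the architecture.

```python
# A minimal sketch of a depth-image CNN for the 36-class setup described
# above (26 letters + 10 digits), assuming 64x64 single-channel depth
# maps. Layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_depth_cnn(num_classes: int = 36) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),           # one-channel depth map
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The paper's 80/20 train/test split could then be reproduced with, for example, scikit-learn's train_test_split before fitting.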


Author(s): Hezhen Hu, Wengang Zhou, Junfu Pu, Houqiang Li

Sign language recognition (SLR) is a challenging problem, involving complex manual features (i.e., hand gestures) and fine-grained non-manual features (NMFs) (i.e., facial expression, mouth shapes, etc.). Although manual features are dominant, non-manual features also play an important role in the expression of a sign word. Specifically, many sign words convey different meanings due to non-manual features, even though they share the same hand gestures. This ambiguity introduces great challenges in the recognition of sign words. To tackle this issue, we propose a simple yet effective architecture called the Global-Local Enhancement Network (GLE-Net), comprising two mutually promoting streams that focus on different crucial aspects of SLR: one stream captures the global contextual relationship, while the other captures discriminative fine-grained cues. Moreover, due to the lack of datasets explicitly focusing on these features, we introduce the first non-manual-feature-aware isolated Chinese sign language dataset (NMFs-CSL), with a total vocabulary of 1,067 everyday sign words. Extensive experiments on the NMFs-CSL and SLR500 datasets demonstrate the effectiveness of our method.
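The following is a schematic reading of the abstract's two-stream idea, not the authors' released GLE-Net: one stream encodes global context, the other fine-grained local cues, and the two are fused for classification. All dimensions and layer choices are assumptions.

```python
# A schematic reading of the two-stream idea: one stream encodes global
# context, the other fine-grained local cues (hands, face), and both are
# fused for classification. Dimensions and layers are assumptions; this
# is not the authors' released GLE-Net.
import torch
import torch.nn as nn

class TwoStreamSLR(nn.Module):
    def __init__(self, feat_dim: int = 512, num_classes: int = 1067):
        super().__init__()  # 1067 matches the NMFs-CSL vocabulary size
        self.global_stream = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.local_stream = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, global_feat, local_feat):
        g = self.global_stream(global_feat)   # whole-frame context
        l = self.local_stream(local_feat)     # cropped hand/face cues
        return self.classifier(torch.cat([g, l], dim=-1))
```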


2021 ◽ Vol 5 (2 (113)) ◽ pp. 44-54
Author(s): Chingiz Kenshimov, Samat Mukhanov, Timur Merembayev, Didar Yedilkhan

For people with disabilities, sign language is the most important means of communication. Therefore, more and more researchers around the world are proposing intelligent hand gesture recognition systems. Such a system is aimed not only at those who wish to understand a sign language, but also at those who speak through gesture recognition software. In this paper, a new benchmark dataset for Kazakh fingerspelling, suitable for training deep neural networks, is introduced. The dataset contains more than 10,122 gesture samples for 42 letters of the alphabet. The alphabet has its own peculiarities, as some characters are shown in motion, which may influence sign recognition. The paper describes research and analysis of convolutional neural networks, with comparison, testing, results, and analysis of the LeNet, AlexNet, ResNet, and EfficientNet (EfficientNetB7) methods. The EfficientNet architecture is state-of-the-art (SOTA) and is newer than the other architectures under consideration. On this dataset, we show that the LeNet and EfficientNet networks outperform the other competing algorithms. Moreover, EfficientNet can achieve state-of-the-art performance on other hand gesture datasets. The architecture and operating principle of these algorithms reflect the effectiveness of their application in sign language recognition. The CNN models are evaluated using accuracy and a penalty matrix. During training epochs, LeNet and EfficientNet showed better results: their accuracy and loss functions followed similar, close trends. The results of EfficientNet were explained with the tools of the SHapley Additive exPlanations (SHAP) framework, which explored the model to detect complex relationships between features in the images. Focusing on the SHAP tool may help to further improve the accuracy of the model.
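The SHAP analysis step could look like the sketch below, assuming a trained Keras CNN and NumPy arrays of gesture images; the choice of DeepExplainer is illustrative, and the SHAP API has varied across versions.

```python
# A hedged sketch of the SHAP step, assuming a trained Keras CNN and
# NumPy image arrays. DeepExplainer is one illustrative choice; this is
# not the paper's analysis code.
import numpy as np
import shap

def explain_gesture_cnn(model, X_train, X_test,
                        n_background: int = 100, n_explain: int = 5):
    """Plot pixel-level SHAP attributions for a few test gestures."""
    idx = np.random.choice(len(X_train), n_background, replace=False)
    explainer = shap.DeepExplainer(model, X_train[idx])  # baseline sample
    shap_values = explainer.shap_values(X_test[:n_explain])
    shap.image_plot(shap_values, X_test[:n_explain])
```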


2021
Author(s): Qing Han, Zhanlu Huangfu, Weidong Min, Yanqiu Liao

Most existing deep learning-based dynamic sign language recognition methods directly use either video sequences based on RGB information, or whole sequences rather than only the segments that represent the change of gesture. These characteristics lead to inaccurate extraction of hand gesture features and poor recognition accuracy for complex gestures. To solve these problems, this paper proposes a new method of dynamic hand gesture recognition based on key skeleton information, which combines a residual convolutional neural network with a long short-term memory recurrent network and is called the KLSTM-3D residual network (K3D ResNet). In K3D ResNet, the spatiotemporal complexity of network computation is reduced by extracting the representative skeleton frames of gesture change. Spatiotemporal features are then extracted from the skeleton keyframe sequence, and an intermediate score corresponding to each action in the video sequence is established after feature analysis. Finally, the classification of video sequences accurately identifies the sign. Experiments were performed on the DHG14/28 and SHREC'17 Track datasets. Verification accuracy on the DEVISIGN-D dataset reached 88.6%, and the combination of RGB and skeleton information reached 93.2%.
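A schematic sketch of the CNN-plus-LSTM idea follows: per-keyframe skeleton vectors are embedded, and an LSTM aggregates the keyframe sequence into a gesture class. The 22-joint x 3-coordinate input and the 28-class head (as in DHG14/28) are assumptions for illustration, not the K3D ResNet code.

```python
# A schematic sketch of the CNN-plus-LSTM idea: per-keyframe skeleton
# vectors are embedded, then an LSTM aggregates the keyframe sequence
# into a gesture class. Input/output sizes are assumptions.
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    def __init__(self, joint_dim: int = 66, hidden: int = 128,
                 num_classes: int = 28):
        super().__init__()
        self.frame_encoder = nn.Sequential(nn.Linear(joint_dim, 128), nn.ReLU())
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                      # x: (batch, frames, joint_dim)
        b, t, d = x.shape
        feats = self.frame_encoder(x.reshape(b * t, d)).reshape(b, t, -1)
        _, (h, _) = self.lstm(feats)           # h[-1]: last hidden state
        return self.head(h[-1])
```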


Author(s): Prof. Namrata Ghuse

Sign language recognition through technology has been a neglected idea, even though an enormous community could profit from it. More than 3% of the world's population cannot speak or hear properly. Through hand-gesture-based communication, speech- and hearing-impaired people can communicate with each other and with the rest of the world's population. Ordinary people, however, rarely become familiar with sign-language-based communication, which creates a gap between impaired people and everyone else. Previous versions of such systems used the concepts of image generation and emoji symbols, but those frameworks are neither affordable nor portable for the impaired person. The main aim of this project has always been to interpret Indian Sign Language and American Sign Language standards, convert gestures into voice and text, and help an impaired person interact with another person from a remote location. The smart glove has been built with a gyroscope, flex sensors, an ESP32 microcontroller/micro:bit, an accelerometer, a 25-LED matrix actuator/output, a vibrator, etc.
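A hypothetical MicroPython sketch of the glove's sensing loop is given below: each flex sensor is read through an ESP32 ADC pin and thresholded into a bent/straight finger pattern. The pin numbers, threshold, and pattern-to-sign mapping are all assumptions, not the project's firmware.

```python
# A hypothetical MicroPython sensing loop for the glove on an ESP32:
# each flex sensor is read through an ADC pin and thresholded into a
# bent/straight finger pattern. Pins and threshold are assumptions.
from machine import ADC, Pin
import time

FLEX_PINS = (32, 33, 34, 35, 36)   # one ADC1 pin per finger (illustrative)
BEND_THRESHOLD = 2000              # raw 12-bit ADC value splitting bent/straight

sensors = [ADC(Pin(p)) for p in FLEX_PINS]
for s in sensors:
    s.atten(ADC.ATTN_11DB)         # allow the full 0-3.3 V input range

while True:
    # 5-bit finger pattern, e.g. (1, 0, 0, 0, 1), to be mapped to a sign
    pattern = tuple(int(s.read() > BEND_THRESHOLD) for s in sensors)
    print(pattern)
    time.sleep_ms(100)
```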


2019 ◽ Vol 161 ◽ pp. 74-81
Author(s): Dolly Indra, Purnawansyah, Sarifuddin Madenda, Eri Prasetyo Wibowo

Author(s): Kamal Preet Kour, Lini Mathew

One of the major drawbacks of our society is the barrier created between disabled or handicapped persons and everyone else. Communication is the only medium by which we can share our thoughts or convey a message, but a person with a disability (deaf and dumb) faces difficulty communicating with others. For many deaf and dumb people, sign language is the basic means of communication. Sign language recognition (SLR) aims to interpret sign languages automatically by computer in order to help the deaf communicate with hearing society conveniently. Our aim is to design a system to help those who train the hearing-impaired to communicate with the rest of the world using sign language or hand gesture recognition techniques. In this system, feature detection and feature extraction of hand gestures are done with the SURF algorithm using image processing, implemented in MATLAB. With the help of this algorithm, a person can easily train the deaf and dumb.
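For readers without MATLAB, an equivalent SURF feature-extraction step in Python with OpenCV might look like the sketch below; SURF is patented and requires an opencv-contrib build with non-free modules enabled, and this is not the paper's code.

```python
# A sketch of the SURF feature step in Python with OpenCV, for readers
# without MATLAB. Requires an opencv-contrib build with non-free
# modules enabled; illustrative, not the paper's implementation.
import cv2

def surf_features(image_path: str, hessian_threshold: float = 400.0):
    """Detect SURF keypoints and compute descriptors for a gesture image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, descriptors = surf.detectAndCompute(gray, None)
    return keypoints, descriptors
```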

