Deep Learning Based Indian Sign Language Words Identification System

2021 ◽  
Author(s):  
P. Golda Jeyasheeli ◽  
N. Indumathi

About 1 percent of the Indian population is deaf or mute. Deaf and mute people use gestures to interact with each other, but most hearing people cannot grasp the significance of these gestures, which makes interaction between the two groups hard. To help ordinary citizens understand the signs, an automated sign language identification system is proposed. A smart wearable hand device is designed by attaching different sensors to a glove that records the gestures. Each gesture produces unique sensor values, and those values are collected into a spreadsheet. The characteristics of the movements are extracted and categorized with the aid of a convolutional neural network (CNN), which then classifies the gestures in the test set. The objective of this system is to bridge the interaction gap between people who are deaf or hard of hearing and the rest of society.
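The pipeline this abstract describes (glove sensor values in, convolutional feature extraction, classification) can be illustrated with a minimal numpy sketch of one 1-D convolutional block (convolution, ReLU, max-pooling). The sensor readings and kernel below are hypothetical illustrations, not the authors' data.

```python
import numpy as np

def conv1d_relu_maxpool(signal, kernel, pool=2):
    """One CNN building block over a 1-D sensor stream:
    valid convolution, ReLU, then non-overlapping max-pooling."""
    n = len(signal) - len(kernel) + 1
    conv = np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])
    relu = np.maximum(conv, 0.0)
    trimmed = relu[: (len(relu) // pool) * pool]   # drop the ragged tail
    return trimmed.reshape(-1, pool).max(axis=1)

# Hypothetical flex-sensor readings for one gesture window.
readings = np.array([0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.95, 0.3])
edge_kernel = np.array([1.0, -1.0])   # responds to changes in finger bend
features = conv1d_relu_maxpool(readings, edge_kernel)
```

A real model would stack several such blocks and end with a dense softmax layer over the gesture classes; this sketch only shows how the convolution turns raw sensor values into local-pattern features.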

2021 ◽  
Vol 40 ◽  
pp. 03004
Author(s):  
Rachana Patil ◽  
Vivek Patil ◽  
Abhishek Bahuguna ◽  
Gaurav Datkhile

Communicating with a person who has a hearing disability is always a major challenge. The work presented in this paper is an effort toward examining the difficulties in classifying characters of Indian Sign Language (ISL). Sign language alone is not enough for communication, because the gestures made by people with hearing or speech disabilities appear mixed or disordered to someone who has never learnt the language, and communication must work in both directions. In this paper, we introduce a sign language recognition system for Indian Sign Language. The user captures images of hand gestures using a web camera, and the system predicts and displays the name of the captured sign. The captured image undergoes a series of processing steps involving computer vision techniques such as conversion to grayscale, dilation, and mask operations. A convolutional neural network (CNN) is used to train our model and identify the pictures. Our model achieved an accuracy of about 95%.
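The three preprocessing steps the abstract names (grayscale conversion, dilation, masking) can be sketched in plain numpy; production code would typically use OpenCV, and the threshold and window size below are illustrative assumptions.

```python
import numpy as np

def to_gray(rgb):
    # Luminosity grayscale conversion (BT.601 weights).
    return rgb @ np.array([0.299, 0.587, 0.114])

def dilate(img, k=3):
    # Morphological dilation: each pixel becomes the max of its k x k window.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def mask(img, thresh):
    # Binary mask isolating the bright hand region from the background.
    return (img > thresh).astype(np.uint8)

rgb = np.random.default_rng(0).random((8, 8, 3))   # stand-in webcam patch
gray = to_gray(rgb)
binary = mask(dilate(gray), thresh=0.5)
```

The resulting binary mask is what would be fed to the CNN, so the network sees the hand shape rather than skin tone or background clutter.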


Author(s):  
Uzma Batool ◽  
Mohd Ibrahim Shapiai ◽  
Nordinah Ismail ◽  
Hilman Fauzi ◽  
Syahrizal Salleh

Silicon wafer defect data collected from fabrication facilities is intrinsically imbalanced because of the variable frequencies of defect types. Frequently occurring types will have more influence on the classification predictions if a model is trained on such skewed data, so a fair classifier requires a mechanism for handling the type imbalance to avoid biased results. This study proposes a convolutional neural network for wafer map defect classification that employs oversampling as an imbalance-addressing technique. To give all classes equal participation in the classifier's training, data augmentation is used to generate additional samples for the minority classes. The proposed deep learning method was evaluated on a real wafer map defect dataset and achieved 97.91% accuracy on the test set. The results were compared with a deep learning based auto-encoder model, demonstrating that the proposed method is a potential approach for silicon wafer defect classification whose robustness needs to be investigated further.
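The oversampling-by-augmentation idea can be sketched as follows: minority classes are padded with label-preserving transforms (rotations and flips, a common choice for wafer maps) until every class matches the largest one. The class names and dataset sizes below are hypothetical; the paper's actual augmentation operations are not specified here.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(wafer_map):
    """Label-preserving variant: random 90-degree rotation,
    optionally followed by a horizontal flip."""
    out = np.rot90(wafer_map, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return out

def oversample(maps_by_class):
    """Pad minority classes with augmented copies until every
    class matches the size of the largest one."""
    target = max(len(m) for m in maps_by_class.values())
    balanced = {}
    for label, maps in maps_by_class.items():
        extra = [augment(maps[int(rng.integers(0, len(maps)))])
                 for _ in range(target - len(maps))]
        balanced[label] = maps + extra
    return balanced

# Hypothetical imbalanced dataset: 6 "edge-ring" maps vs 2 "scratch" maps.
data = {
    "edge-ring": [rng.random((16, 16)) for _ in range(6)],
    "scratch": [rng.random((16, 16)) for _ in range(2)],
}
balanced = oversample(data)
```

After balancing, a standard CNN can be trained on the union without the majority class dominating the loss.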


2020 ◽  
Vol 17 (9) ◽  
pp. 4660-4665
Author(s):  
L. Megalan Leo ◽  
T. Kalpalatha Reddy

Dental caries is one of the most prevalent diseases of the teeth in the world today; almost 90% of people are affected by cavities. Dental caries is a cavity that forms from remnant food and bacteria. It is a curable and preventable disease when identified at an early stage. Dentists use radiographic examination in addition to visual-tactile inspection to identify caries, but find it difficult to identify occlusal, pit, and fissure caries, and a cavity left untreated and not identified early can lead to severe problems. Machine learning can be applied to this problem using a labelled dataset provided by experienced dentists. In this paper, a convolution-based deep learning method is applied to identify the presence of cavities in an image. 480 bitewing radiography images were collected from the Elsevier standard database, and all input images were resized to 128 × 128 matrices. In preprocessing, a selective median filter is used to reduce noise in the image. The pre-processed inputs are given to a deep learning model where a convolutional neural network with the GoogleNet Inception v3 architecture is implemented. The ReLU activation function is used with GoogleNet to identify caries, providing dentists with precise and optimized results about the caries and the affected area. The proposed technique achieves 86.7% accuracy on the testing dataset.
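The median-filter preprocessing step can be sketched in numpy. A plain (non-selective) median filter is shown for brevity; the selective variant the paper mentions would apply the same window logic only to pixels flagged as noisy. The 5 × 5 patch and noise value are illustrative, not radiograph data.

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood,
    which suppresses impulse (salt-and-pepper) noise while keeping edges."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Hypothetical radiograph patch with one impulse-noise pixel.
img = np.full((5, 5), 0.2)
img[2, 2] = 1.0          # salt noise
clean = median_filter(img)
```

After filtering, the image would be resized to 128 × 128 and passed to the Inception v3 classifier.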


Author(s):  
Oyeniran Oluwashina Akinloye ◽  
Oyebode Ebenezer Olukunle

Numerous works have been proposed and implemented for the computerization of various human languages; nevertheless, minuscule effort has been made to put Yorùbá handwritten characters on the map of Optical Character Recognition. This study presents a novel technique for developing a Yorùbá alphabet recognition system using deep learning. The model was implemented in the Matlab R2018a environment using the developed framework, with 10,500 dataset samples used for training and 2,100 samples used for testing. Training was conducted over 30 epochs at 164 iterations per epoch, for a total of 4,920 iterations, and the training period was estimated at 11,296 minutes 41 seconds. The model yielded a network accuracy of 100%, while the accuracy on the test set is 97.97%, with an F1 score of 0.9800, precision of 0.9803, and recall of 0.9797.
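The reported metrics are internally consistent, which can be checked directly: F1 is the harmonic mean of precision and recall, and 30 epochs at 164 iterations per epoch gives the stated iteration total.

```python
# Sanity-check of the reported figures, using only values from the abstract.
precision, recall = 0.9803, 0.9797
f1 = 2 * precision * recall / (precision + recall)

epochs, iters_per_epoch = 30, 164
total_iters = epochs * iters_per_epoch

print(round(f1, 4), total_iters)
```

Both results match the abstract's stated F1 of 0.9800 and total of 4,920 iterations.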


Author(s):  
Poonam Yerpude

Abstract: Communication is imperative for daily life. Hearing people use verbal language for communication, while people with hearing or speech disabilities use sign language, a way of communicating with hand gestures and body movements instead of speaking and listening. As not all people are familiar with sign language, a language barrier exists, and there has been much research aimed at removing it. There are mainly two ways to convert sign language into speech or text to close the gap: sensor-based techniques and image processing. In this paper we examine the image processing technique, for which we use a Convolutional Neural Network (CNN). We have built a sign detector that recognises the sign numbers from 1 to 10; it can easily be extended to recognise other hand gestures, including the alphabet (A–Z) and expressions. The model is based on Indian Sign Language (ISL). Keywords: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Indian Sign Language (ISL), Region of Interest (ROI), Artificial Neural Network (ANN), VGG16 (CNN vision architecture model), SGD (Stochastic Gradient Descent).
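The ROI-based input pipeline implied by the keywords (crop a region of interest from the webcam frame, resize it to the network's input size, scale pixel values) can be sketched in numpy. The ROI coordinates are hypothetical, and nearest-neighbour resizing stands in for a library call such as cv2.resize; 224 × 224 is VGG16's standard input size.

```python
import numpy as np

def crop_roi(frame, top, left, size):
    # Fixed square region of interest where the signer places the hand.
    return frame[top:top + size, left:left + size]

def resize_nn(img, out_h, out_w):
    # Nearest-neighbour resize (stand-in for cv2.resize).
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess(frame):
    """Crop the ROI, resize to the 224 x 224 input VGG16 expects,
    and scale pixel values to [0, 1]."""
    roi = crop_roi(frame, top=40, left=100, size=200)
    return resize_nn(roi, 224, 224).astype(np.float32) / 255.0

# Stand-in for one 480 x 640 webcam frame.
frame = np.random.default_rng(1).integers(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(frame)
```

The preprocessed tensor `x` is what a VGG16-style classifier trained with SGD would consume, one frame per prediction.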


Author(s):  
Suchetha N V ◽  
Tejashri P ◽  
Rohini A Sangogi ◽  
Swapna Kochrekar

Sign language is the primary method of communication for hearing-impaired and deaf-mute people. The proposed system recognizes signs exchanged between signers and non-signers and outputs their meaning, helping people with hearing difficulties who rely on sign language as a simple and effective means of communication. The system converts sign language to text using a CNN approach: an image-capture setup records the signs and displays them on the screen as text. Results show that the proposed sign-detection method is effective, with experiments indicating an accuracy of 80%.

