A Novel Framework for Sign Language to Text Conversion using Convolutional Neural Network

Author(s): Suchetha N V, Tejashri P, Rohini A Sangogi, Swapna Kochrekar

Sign language is the primary means of communication for people who are deaf or have hearing and speech impairments. The proposed system recognizes signs and conveys their meaning, bridging the gap between signers and non-signers. It converts sign language to text using a convolutional neural network (CNN): an image-capture system records the signs, and the recognized output is displayed on the screen as text. Experimental results show that the proposed sign-detection method is effective, recognizing signs with an accuracy of 80%.
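A minimal sketch of the kind of image classifier this abstract describes is given below, using Keras. The input resolution, layer sizes, and 26-sign class count are illustrative assumptions, not the authors' reported architecture.

```python
# Minimal sketch of a CNN sign classifier; all sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed: one class per static sign

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),              # assumed input resolution
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```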

2021, Vol 2021, pp. 1-12
Author(s): Longzhi Zhang, Dongmei Wu

Grasp detection based on convolutional neural networks has achieved promising results. However, multilayer convolutional neural networks still suffer from overfitting, which leads to poor detection precision. To achieve high detection accuracy, we propose a single-target grasp detection network, based on a convolutional neural network, that generalizes the fitting of angle and position. The proposed network takes an image as input and outputs the grasping parameters, including angle and position, in an end-to-end manner. In particular, the dataset is preprocessed so that it fully covers the model's input space, and transfer learning is used to avoid overfitting. A series of experiments shows that, for single-object grasping, the network delivers good detection results with high accuracy, demonstrating strong generalization across orientations and object categories.
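The sketch below illustrates one plausible shape of such an end-to-end grasp regressor with a pretrained backbone for transfer learning. The backbone choice, frozen-weights strategy, and angle encoding are assumptions for the example, not the authors' documented design.

```python
# Sketch of an end-to-end grasp regression network: image in, grasp
# position and angle out, with a pretrained backbone (transfer learning).
import tensorflow as tf
from tensorflow.keras import layers, models

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
backbone.trainable = False  # freeze pretrained weights to limit overfitting

x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dense(256, activation="relu")(x)
# Outputs: grasp center (x, y) plus angle encoded as (cos 2θ, sin 2θ) to
# avoid the wrap-around discontinuity of raw-angle regression (a common
# trick, not necessarily the authors' choice).
grasp = layers.Dense(4, name="grasp_params")(x)

model = models.Model(backbone.input, grasp)
model.compile(optimizer="adam", loss="mse")
```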


2021
Author(s): P. Golda Jeyasheeli, N. Indumathi

About 1 percent of the Indian population is deaf or mute. Deaf and mute people use gestures to interact with each other, but ordinary people often fail to grasp the significance of these gestures, which makes interaction between the two groups hard. To help ordinary citizens understand the signs, an automated sign language identification system is proposed. A smart wearable hand device is designed by attaching sensors to a glove; each gesture produces a unique set of sensor values, which are collected as spreadsheet data. The characteristics of the movements are extracted and categorized with the aid of a convolutional neural network (CNN), which then classifies the data from the test set. The objective of this system is to bridge the interaction gap between people who are deaf or hard of hearing and the rest of society.
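A rough sketch of this sensor-to-CNN pipeline is shown below. The file name, label column, sensor count, and 1-D convolutional architecture are all assumptions made for illustration; the abstract does not specify them.

```python
# Illustrative sketch: classify glove-sensor readings with a 1-D CNN.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, models

df = pd.read_excel("gestures.xlsx")        # hypothetical sensor log
labels = df.pop("gesture").astype("category").cat.codes.to_numpy()
X = df.to_numpy(dtype="float32")
X = X.reshape(len(X), -1, 1)               # (samples, sensors, channels)

model = models.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.Conv1D(32, 3, activation="relu"),   # assumes several sensor columns
    layers.GlobalMaxPooling1D(),
    layers.Dense(labels.max() + 1, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=20, validation_split=0.2)
```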


TEM Journal, 2020, pp. 937-943
Author(s): Rasha Amer Kadhim, Muntadher Khamees

In this paper, a real-time ASL recognition system was built with a ConvNet algorithm using real color images from a PC camera. The model is the first ASL recognition model to categorize all 26 letters, including J and Z, together with two new classes for space and delete, explored with new datasets. The datasets were built to contain a wide diversity of attributes, such as different lightings, skin tones, and backgrounds, across a wide variety of situations. The experimental results achieved high accuracy: about 98.53% for training and 98.84% for validation. The system also maintained high accuracy across all the datasets when new test data, not used during training, were introduced.
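The inference loop such a system implies might look like the sketch below: camera frames classified into 26 letters plus "space" and "delete" (28 classes). The model file name, input size, and preprocessing are assumptions, not details from the paper.

```python
# Sketch of a real-time 28-class ASL inference loop from a PC camera.
import cv2
import numpy as np
import tensorflow as tf

CLASSES = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["space", "delete"]
model = tf.keras.models.load_model("asl_convnet.h5")  # hypothetical weights

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.resize(frame, (64, 64)).astype("float32") / 255.0  # assumed size
    probs = model.predict(roi[np.newaxis], verbose=0)[0]
    cv2.putText(frame, CLASSES[int(probs.argmax())], (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```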


2020, Vol 31 (1), pp. 9-17

Recently, deep learning has been widely applied to speech and image recognition. The convolutional neural network (CNN) is one of the main approaches to image classification, achieving very high accuracy. In the field of Android malware classification, many works have tried to convert Android malware into "images" so that it matches the CNN input format and can take advantage of the CNN model. The performance, however, is not significantly improved, because simply converting malware into images may discard several important features of the malware. This paper proposes a method for improving the feature set of Android malware classification based on a co-occurrence matrix (co-matrix). The co-matrix is built from a list of raw features extracted from .apk files. The proposed feature can exploit the strengths of the CNN while retaining the important features of the Android malware. Experimental results of a CNN model on the very popular Drebin Android malware dataset demonstrate the feasibility of the proposed co-matrix feature.
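To make the co-matrix idea concrete, here is a minimal sketch of building a feature co-occurrence matrix from a sample's raw feature list. The tiny vocabulary and the upstream extraction step are assumptions; Drebin-style features are strings such as permissions and API calls.

```python
# Sketch: build a feature co-occurrence matrix ("co-matrix") per .apk sample.
import numpy as np

def co_matrix(apk_features, vocab):
    """Return a |vocab| x |vocab| matrix where entry (i, j) counts how often
    features i and j occur together in the same sample."""
    idx = {f: i for i, f in enumerate(vocab)}
    m = np.zeros((len(vocab), len(vocab)), dtype=np.float32)
    present = [idx[f] for f in apk_features if f in idx]
    for i in present:
        for j in present:
            m[i, j] += 1.0
    return m

# Hypothetical usage: one matrix per sample, fed to a CNN as a 1-channel image.
vocab = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "getDeviceId"]
img = co_matrix({"SEND_SMS", "INTERNET"}, vocab)[..., np.newaxis]
```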


2021, Vol 40, pp. 03004
Author(s): Rachana Patil, Vivek Patil, Abhishek Bahuguna, Gaurav Datkhile

Communicating with a person who has a hearing disability is always a major challenge. The work presented in this paper is an extension toward examining the difficulties in classifying characters of Indian Sign Language (ISL). Sign language alone is not enough for communication: the gestures made by people with a disability can appear mixed or disordered to someone who has never learned the language, and communication should work in both directions. In this paper, we introduce a sign language recognition system for Indian Sign Language. The user captures images of hand gestures with a web camera, and the system predicts and displays the name of the captured sign. The captured image undergoes a series of processing steps involving computer vision techniques such as gray-scale conversion, dilation, and a mask operation. A convolutional neural network (CNN) is used to train our model and identify the pictures. Our model achieved an accuracy of about 95%.
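A minimal sketch of the preprocessing chain named in the abstract (gray-scale conversion, dilation, and a mask operation) is shown below with OpenCV. The threshold value, kernel size, and blur step are illustrative assumptions.

```python
# Sketch: gray-scale conversion, dilation, and masking of a webcam frame.
import cv2
import numpy as np

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                   # gray-scale
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 120, 255, cv2.THRESH_BINARY_INV)   # mask
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=1)                 # dilation
    return cv2.bitwise_and(gray, gray, mask=dilated)                 # apply mask

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    hand = preprocess(frame)   # result would then be fed to the trained CNN
cap.release()
```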

