DOMAIN BOUNDED ENGLISH TO INDIAN SIGN LANGUAGE TRANSLATION MODEL

Author(s):  
SYED FARAZ ALI
GOURI SANKAR MISHRA
ASHOK KUMAR SAHOO

This is a proposal for an English-text-to-Indian-Sign-Language (ISL) translation model in which the system accepts input text and translates the given words in sequence, using an avatar to display the sign for each word. The translation is corpus based, with a direct mapping between English words and ISL signs. Since it is very inefficient to provide signs for every word, the domain is bounded by certain criteria within which the translator operates: the system we propose targets enquiries at railway reservation counters.
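A minimal Python sketch of the corpus-based, direct-mapping idea described above. The vocabulary, clip file names, and the skip-on-miss fallback are illustrative assumptions, not the authors' actual corpus or avatar interface.

```python
# Hypothetical domain-bounded corpus for a railway reservation counter:
# each English word maps to the avatar animation clip that signs it.
SIGN_CORPUS = {
    "train": "signs/train.mp4",
    "ticket": "signs/ticket.mp4",
    "platform": "signs/platform.mp4",
    "time": "signs/time.mp4",
    "fare": "signs/fare.mp4",
}

def translate_to_sign_sequence(text: str) -> list[str]:
    """Map each in-domain word to its sign clip, in input order."""
    clips = []
    for word in text.lower().split():
        clip = SIGN_CORPUS.get(word)
        if clip is not None:          # out-of-domain words are skipped
            clips.append(clip)
    return clips

if __name__ == "__main__":
    # An avatar player would render these clips one after another.
    print(translate_to_sign_sequence("What time is the train"))
    # -> ['signs/time.mp4', 'signs/train.mp4']
```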

2019
Vol 9 (13)
pp. 2683
Author(s):
Sang-Ki Ko
Chang Jo Kim
Hyedong Jung
Choongsang Cho

We propose a sign language translation system based on human keypoint estimation. It is well known that many problems in the field of computer vision require a massive dataset to train deep neural network models. The situation is even worse for the sign language translation problem, as it is far more difficult to collect high-quality training data. In this paper, we introduce the KETI (Korea Electronics Technology Institute) sign language dataset, which consists of 14,672 videos of high resolution and quality. Considering that each country has a different and unique sign language, the KETI sign language dataset can be the starting point for further research on Korean sign language translation. Using the KETI sign language dataset, we develop a neural network model for translating sign videos into natural language sentences by utilizing human keypoints extracted from the face, hands, and body. The obtained human keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model, which is based on the sequence-to-sequence architecture. As a result, we show that our approach is robust even when the size of the training data is not sufficient. Our translation model achieved 93.28% translation accuracy on the validation set and 55.28% on the test set for 105 sentences that can be used in emergency situations. We compared several variants of our neural sign translation model based on different attention mechanisms, using classical metrics for measuring translation performance.
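A minimal sketch of the keypoint normalization step described above: each frame's keypoint vector is standardized by its mean and standard deviation before being fed to the sequence-to-sequence model. The array shapes, the keypoint count, and the small epsilon are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def normalize_keypoints(keypoints: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standardize a (frames, num_keypoints * 2) array of (x, y) keypoints.

    Each frame is normalized independently by the mean and standard
    deviation of its own keypoint coordinates, which makes the features
    less sensitive to the signer's position and scale in the video.
    """
    mean = keypoints.mean(axis=1, keepdims=True)
    std = keypoints.std(axis=1, keepdims=True)
    return (keypoints - mean) / (std + eps)

# Example: 10 frames, an assumed 137 keypoints (face + hands + body),
# each with x and y coordinates.
frames = np.random.rand(10, 137 * 2)
features = normalize_keypoints(frames)   # input sequence for the encoder
print(features.shape)                    # (10, 274)
```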


2021
Vol 9 (1)
pp. 182-203
Author(s):
Muthu Mariappan H
Dr Gomathi V

Dynamic hand gesture recognition is a challenging task in Human-Computer Interaction (HCI) and computer vision. The potential application areas of gesture recognition include sign language translation, video gaming, video surveillance, robotics, and gesture-controlled home appliances. In the proposed research, gesture recognition is applied to recognize sign language words from real-time videos. Classifying actions from video sequences requires both spatial and temporal features. The proposed system handles the former with a Convolutional Neural Network (CNN), the core of many computer vision solutions, and the latter with a Recurrent Neural Network (RNN), which is more efficient at handling sequences of movements. Thus, the real-time Indian Sign Language (ISL) recognition system is developed using a hybrid CNN-RNN architecture. The system is trained with the proposed CasTalk-ISL dataset. The ultimate purpose of the presented research is to deploy a real-time sign language translator to break the hurdles in communication between hearing-impaired people and hearing people. The developed system achieves 95.99% top-1 accuracy and 99.46% top-3 accuracy on the test dataset. These results outperform existing approaches that use various deep models on different datasets.
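A minimal PyTorch sketch of the hybrid CNN-RNN idea described above: a small CNN extracts spatial features from each frame, and an RNN (here an LSTM) models the temporal sequence of those features. Layer sizes, input resolution, and the class count are illustrative assumptions, not the paper's actual CasTalk-ISL configuration.

```python
import torch
import torch.nn as nn

class CNNRNNClassifier(nn.Module):
    def __init__(self, num_classes: int = 50, hidden: int = 128):
        super().__init__()
        # Per-frame spatial feature extractor (the CNN half).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Temporal model over the per-frame features (the RNN half).
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, channels, height, width)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)
        return self.head(h[-1])          # logits over sign-word classes

logits = CNNRNNClassifier()(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 50])
```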


The hearing-challenged community all over the world faces difficulties in communicating with others. Machine translation has been one of the prominent technologies for facilitating two-way communication with the deaf and hard-of-hearing community worldwide. We have explored and formulated the fundamental rules of Indian Sign Language and implemented them as a mechanism for translating English text into Indian Sign Language glosses. The structure of the source text is identified and transferred to the target language according to the formulated rules and sub-rules. The intermediate phases of the transfer process are also described in this research work.
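A minimal sketch of the rule-based transfer described above, under the common assumption that ISL glosses follow subject-object-verb order and drop articles and forms of "be". The single SVO-to-SOV rule and the toy drop list are illustrative; the paper's formulated rule set is richer.

```python
# Function words commonly dropped when producing ISL glosses (assumed).
DROP = {"a", "an", "the", "is", "are", "am", "was", "were"}

def english_to_isl_gloss(subject: str, verb: str, obj: str) -> str:
    """Transfer an English SVO clause to an SOV gloss string."""
    words = [subject, obj, verb]                       # SVO -> SOV reordering
    glosses = [w.upper() for w in words if w.lower() not in DROP]
    return " ".join(glosses)

# "The boy eats an apple" -> subject="boy", verb="eat", object="apple"
print(english_to_isl_gloss("boy", "eat", "apple"))     # BOY APPLE EAT
```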


2021
Author(s):
Hemang Monga
Jatin Bhutani
Muskan Ahuja
Nikita Maid
Himangi Pande

Indian Sign Language is one of the most important and widely used forms of communication for people with speaking and hearing impairments. Many people and communities have attempted to create systems that read sign language symbols and convert them to text, but text- or audio-to-sign-language systems are still infrequent. This project focuses on developing a translation system consisting of several modules that take English audio, convert the input to English text, and parse it into a structured grammar representation to which the grammar rules of Indian Sign Language are applied. Stop words are removed from the reordered sentence. Since Indian Sign Language does not support conjugation, stemming and lemmatization transform each word into its root form. All individual words are then checked against a dictionary holding videos of each word; if the system does not find a word in the dictionary, the most suitable synonym replaces it. The proposed system is inventive in that current systems are bound to direct word-by-word conversion into Indian Sign Language, whereas our system converts sentences into Indian Sign Language grammar and effectively displays the result to the user.
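A minimal sketch of the text-processing stages described above: stop-word removal, lemmatization to root words, dictionary lookup, and a WordNet synonym fallback for out-of-vocabulary words. The sign-video dictionary is a hypothetical stand-in, and the grammar reordering is assumed to have happened upstream.

```python
# Requires: nltk.download("stopwords"); nltk.download("wordnet")
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer

SIGN_VIDEOS = {"go": "go.mp4", "school": "school.mp4", "child": "child.mp4"}
STOP = set(stopwords.words("english"))
lemmatize = WordNetLemmatizer().lemmatize

def words_to_sign_videos(sentence: str) -> list[str]:
    videos = []
    for word in sentence.lower().split():
        if word in STOP:                          # drop stop words
            continue
        # Reduce conjugated/plural forms to a root word.
        root = lemmatize(lemmatize(word, pos="v"), pos="n")
        if root in SIGN_VIDEOS:
            videos.append(SIGN_VIDEOS[root])
            continue
        # Fall back to the closest WordNet synonym in the dictionary.
        for syn in wordnet.synsets(root):
            for lemma in syn.lemma_names():
                if lemma in SIGN_VIDEOS:
                    videos.append(SIGN_VIDEOS[lemma])
                    break
            else:
                continue
            break
    return videos

print(words_to_sign_videos("the children are going to school"))
# -> ['child.mp4', 'go.mp4', 'school.mp4']
```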


2020
Author(s):
Yash Patil
Sahil Krishnadas
Adya Kastwar
Sujata Kulkarni

2019
Vol 7 (2)
pp. 43
Author(s):
MALHOTRA POOJA
K. MANIAR CHIRAG
V. SANKPAL NIKHIL
R. THAKKAR HARDIK
...

Author(s):
Sukhendra Singh
G. N. Rathna
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present our solution, which captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures for '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures for Indian Sign Language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ-Z2 board.
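A minimal PyTorch sketch of the depth-image classifier described above: a small CNN over single-channel depth frames with 36 output classes (26 letters A-Z plus 10 digits 0-9). The layer sizes and input resolution are illustrative assumptions, not the authors' deployed PYNQ-Z2 model.

```python
import torch
import torch.nn as nn

class DepthGestureCNN(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, num_classes),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, 64, 64) single-channel depth frames
        return self.classifier(self.features(depth))

model = DepthGestureCNN()
logits = model(torch.randn(8, 1, 64, 64))   # one batch of depth images
print(logits.shape)                          # torch.Size([8, 36])
```

An 80/20 train/test split as in the abstract would then be applied to the 46,339 depth images before fitting this model.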

