Sign language translation based on syntactic and semantic analysis

1994 ◽  
Vol 25 (6) ◽  
pp. 91-103
Author(s):  
Masahiro Abe ◽  
Hiroshi Sakou ◽  
Hirohiko Sagawa
Author(s):  
Mikhail G. Grif ◽  
Olga O. Korolkova ◽  
Yuliya S. Manueva

The paper analyses current computer Sign Language translation systems and identifies their advantages and disadvantages. The main drawback is the absence of a semantic analysis module for the source text capable of resolving ambiguity. A general scheme of a translation system from spoken Russian to Russian Sign Language that includes a semantic analysis module is presented. It contains a source-text analysis block, developed by the authors, responsible for handling the semantic component of the Russian language; the semantic module relies on Tuzov's dictionary. The semantic analysis algorithm is also described: analysis of a text is complete when each word retains exactly one semantic description, thereby resolving ambiguity. The most important developments of the semantic analysis module include an expanded collection of gestures, parsing of complex sentences, and support in the algorithm for the predicate classifier of Russian Sign Language. The algorithm has been tested. The article compares existing systems of computer translation from spoken language to sign language; the advantages and disadvantages of the considered systems are identified, and a conclusion is drawn about the need to take the semantic aspect of translation into account. A technology of semantic analysis is proposed, and a model for choosing the adequate meaning of a polysemous word or homonym on the basis of the automatic text processing system «Dialing» is described. Examples of the use of the software are given, and due attention is paid to testing the working capacity of the semantic analysis module. To enhance its efficiency, the semantic analysis system was added to the translation system «Surdophone». To verify the semantic module's operation, its choices of word meanings were compared with those of «Yandex Translator» and «Google Translator»; the present system showed its advantage in the more complex cases. In addition, RSL gestures whose names are homonyms or polysemous words of the Russian language were added to the gesture base, and the features of their performance were identified.
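The core idea of the disambiguation step — keeping exactly one semantic description per word by matching it against the sentence context — can be illustrated with a minimal Lesk-style sketch. The sense inventory, the marker sets, and the overlap scoring below are illustrative assumptions, not the actual Tuzov/«Dialing» mechanism described in the paper:

```python
# Hypothetical sense inventory: each sense of a homonym carries a set of
# context markers. "ключ" is a classic Russian homonym: "key" vs "spring".
SENSES = {
    "ключ": [
        {"gloss": "KEY", "markers": {"дверь", "замок", "открыть"}},
        {"gloss": "SPRING", "markers": {"вода", "лес", "ручей"}},
    ],
}

def disambiguate(word, context_words):
    """Keep the single sense whose markers overlap the context most."""
    senses = SENSES.get(word)
    if not senses:
        return None
    return max(senses, key=lambda s: len(s["markers"] & set(context_words)))

best = disambiguate("ключ", ["открыть", "дверь"])
print(best["gloss"])  # KEY
```

Once every polysemous word in the sentence is reduced to one such description, the corresponding RSL gesture can be selected unambiguously.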


Author(s):  
B. T. ZHUSSUPOVA ◽  
D. ZH. ALIPPAYEVA ◽  
S. A. KUDUBAYEVA ◽  
...  

This article discusses the development of a semantic dictionary of the Kazakh language for a computer translation system from Kazakh to Kazakh Sign Language, one that takes into account the semantics of both the Kazakh language and Kazakh Sign Language. The semantic dictionary serves as the basis of the computer translation technology from Kazakh to Kazakh Sign Language and will, in the future, allow semantic analysis of the source text. The authors analyzed and selected the available dictionaries of the Kazakh language used in building the semantic dictionary database; these dictionaries make computer-based sign language translation into Kazakh Sign Language possible. The article also presents the possibility of using L. S. Dimskis' notation to develop a dictionary of the structure of Kazakh Sign Language gestures, and the prospect of including it in the semantic dictionary database is shown. Finally, the need for a gesture dictionary in the development of an automated sign language translation system as a whole is demonstrated, taking into account its effectiveness and the possibility of full practical use.
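A semantic-dictionary entry of the kind described — a lemma linked to its semantic descriptions and to a structural gesture code — might be modelled as below. The field names, the example word, and the Dimskis-style notation string are all hypothetical placeholders, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    lemma: str                  # Kazakh word
    senses: list                # semantic descriptions of the lemma
    gesture_notation: str = ""  # structural gesture code (Dimskis-style)

lexicon = {}

def add_entry(lemma, senses, notation=""):
    """Register one semantic-dictionary record keyed by its lemma."""
    lexicon[lemma] = Entry(lemma, senses, notation)

# Hypothetical record: "кілт" (key) with one sense and a made-up notation.
add_entry("кілт", ["instrument-for-locking"], "D:hand-config-05/motion-arc")
print(lexicon["кілт"].senses[0])  # instrument-for-locking
```

Keying entries by lemma keeps lookup during translation a single dictionary access, while the notation field leaves room for the gesture-structure dictionary to be merged in later.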


Author(s):  
Marion Kaczmarek ◽  
Michael Filhol

Abstract: Professional Sign Language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software, which is meant to ease the translator's task. No prior study has been conducted on this topic, and we aim at specifying such software. To do so, we base our study on professional Sign Language translators' practices and needs. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals for both objective and subjective data, we build a list of tasks and examine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist those tasks, how existing tools could be adapted to Sign Language, and what must be added to fit the needs of Sign Language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.


Author(s):  
Anjali Kanvinde ◽  
Abhishek Revadekar ◽  
Mahesh Tamse ◽  
Dhananjay R. Kalbande ◽  
Nida Bakereywala

Author(s):  
HyeonJung Park ◽  
Youngki Lee ◽  
JeongGil Ko

In this work we present SUGO, a depth video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only videos offers benefits such as being less privacy-invasive than using RGB videos, it introduces new challenges, including low video resolutions and the sensors' sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset via data augmentation so that it is robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO is based on a 3-dimensional convolutional neural network (3DCNN) that classifies a sequence of video frames as a pre-trained word. Furthermore, the overall operations are designed to be lightweight so that sign language translation takes place in real time using only the resources available on a smartphone, with no help from cloud servers or external sensing components. Specifically, to train and test SUGO, we collect sign language data from 20 individuals for 50 Korean Sign Language words, summing up to a dataset of ~5,000 sign gestures, and collect additional in-the-wild data to evaluate the performance of SUGO in real-world usage scenarios with different lighting conditions and daily activities. Comprehensively, our extensive evaluations show that SUGO can properly classify sign words with an accuracy of up to 91% and also suggest that the system is suitable (in terms of resource usage, latency, and environmental robustness) to enable a fully mobile solution for sign language translation.
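The key operation behind the 3DCNN inference engine — convolving a kernel over time as well as space so that motion across frames is captured — can be sketched with a naive NumPy implementation. This is a toy illustration of the 3-D convolution primitive, not SUGO's actual network; the clip size and kernel are arbitrary assumptions:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3-D convolution over a (frames, height, width) volume."""
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value pools a small spatio-temporal patch,
                # so hand motion between frames contributes to the feature.
                out[i, j, k] = np.sum(volume[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# Toy depth clip: 8 frames of 16x16 depth values, averaged by a 3x3x3 kernel.
clip = np.random.rand(8, 16, 16)
feat = conv3d_valid(clip, np.ones((3, 3, 3)) / 27.0)
print(feat.shape)  # (6, 14, 14)
```

A real 3DCNN stacks many such kernels with learned weights and nonlinearities, and ends in a softmax over the 50 word classes; on-device frameworks apply the same primitive far more efficiently than this triple loop.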


Author(s):  
Necati Cihan Camgoz ◽  
Simon Hadfield ◽  
Oscar Koller ◽  
Hermann Ney ◽  
Richard Bowden
