Computer-assisted sign language translation: a study of translators’ practice to specify CAT software

Author(s):  
Marion Kaczmarek ◽  
Michael Filhol

Professional Sign Language translators, unlike their text-to-text counterparts, are not equipped with computer-assisted translation (CAT) software, which is meant to ease the translator's task. No prior study has been conducted on this topic, and we aim to specify such software. To do so, we based our study on the practices and needs of professional Sign Language translators. The aim of this paper is to identify the necessary steps in the text-to-sign translation process. By filming and interviewing professionals for both objective and subjective data, we build a list of tasks and examine whether they are systematic and performed in a definite order. Finally, we reflect on how CAT tools could assist these tasks, how existing tools could be adapted to Sign Language, and what would need to be added to fit the needs of Sign Language translation. In the long term, we plan to develop a first prototype of CAT software for sign languages.

2020 ◽  
Vol 61 (12) ◽  
pp. 32-36
Author(s):  
Hafiza Mobil Abdinova ◽  
Vusala Mazahir Huseynova

The article is devoted to the main functions of the English translation process. The translation process is carried out through many functions, and the main purpose of this article is to study and analyze them. In this regard, types of translation such as written and oral translation, sequential translation, simultaneous translation, whispered translation, and sign language translation are of great interest. The article mainly examines these types of translation and discusses their application in written and oral form. Key words: translation, oral translation, sequential translation, simultaneous translation


2006 ◽  
Vol 9 (1-2) ◽  
pp. 33-70 ◽  
Author(s):  
Ninoslava Šarac Kuhn ◽  
Tamara Alibašić Ciciliani ◽  
Ronnie B. Wilbur

We present an initial description of the sign parameters in Croatian Sign Language (HZJ). We show that HZJ has a phonological structure comparable to that of other known sign languages, including basic sign parts such as location, handshape, movement, orientation, and nonmanual characteristics. Our discussion follows the Prosodic Model (Brentari 1998), in which sign structure is separated into those characteristics which do not change during sign formation (inherent features) and those that do (prosodic features). We present the model, along with a discussion of the notion of constraints on sign formation, and apply it to HZJ to the extent that we are able to do so. We identify an inventory of the relevant handshapes, orientations, locations, and movements in HZJ, and a partial inventory of nonmanuals. One interesting feature of the HZJ environment is the existence of two fingerspelling alphabets: a one-handed and a two-handed system. We also outline additional analytical steps that can be taken after the initial inventory has been constructed. Both minimal pairs and constraints on sign formation are especially useful for demonstrating the linguistic systematicity of sign languages and separating them from gesture and mime.
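The minimal-pair test mentioned above can be made concrete with a small sketch. The parameter set and sign records below are invented for illustration (the Prosodic Model's feature geometry is far richer than a flat record); the point is only the test itself: two signs form a minimal pair when they differ in exactly one parameter.

```python
from dataclasses import dataclass

# Hypothetical flat record of the four manual parameters discussed above.
@dataclass(frozen=True)
class Sign:
    gloss: str
    handshape: str
    location: str
    movement: str
    orientation: str

PARAMS = ("handshape", "location", "movement", "orientation")

def minimal_pair(a: Sign, b: Sign) -> bool:
    """True if the two signs differ in exactly one parameter."""
    return sum(getattr(a, p) != getattr(b, p) for p in PARAMS) == 1

# Invented example signs: same handshape, movement and orientation,
# different location, hence a minimal pair for location.
s1 = Sign("SIGN-A", "B", "chin", "contact", "palm-in")
s2 = Sign("SIGN-B", "B", "forehead", "contact", "palm-in")
print(minimal_pair(s1, s2))  # True
```

A constraint on sign formation could be expressed the same way, as a predicate over `Sign` records that an inventory must satisfy.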


2021 ◽  
pp. 61-73
Author(s):  
Tommi Jantunen ◽  
Rebekah Rousi ◽  
Päivi Rainò ◽  
Markku Turunen ◽  
Mohammad Moeen Valipoor ◽  
...  

This article discusses the prerequisites for the machine translation of sign languages. The topic is complex, involving questions of technology, interaction design, linguistics and culture. At the moment, despite the affordances provided by the technology, automated translation between signed and spoken languages – or between sign languages – is not possible. The very need for such translation and its associated technology can also be questioned. Yet, we believe that contributing to the improvement of sign language detection, processing and even sign language translation to spoken languages in the future is a matter that should not be abandoned. However, we argue that this work should attend to all necessary aspects of sign languages and sign language user communities. Thus, a more diverse and critical perspective on these issues is needed in order to avoid the generalisations and biases that are often manifested within dominant research paradigms, particularly in the fields of spoken language research and speech community studies.


Author(s):  
Dmitry Ryumin ◽  
Ildar Kagirov ◽  
Alexander Axyonov ◽  
Alexey Karpov

Introduction: Currently, the recognition of gestures and sign languages is one of the most intensively developing areas in computer vision and applied linguistics. The results of current investigations are applied in a wide range of areas, from sign language translation to gesture-based interfaces. In that regard, various systems and methods for the analysis of gestural data are being developed. Purpose: A detailed review of methods and a comparative analysis of current approaches in automatic recognition of gestures and sign languages. Results: The main gesture recognition problems are the following: detection of articulators (mainly hands), pose estimation, and segmentation of gestures in the flow of speech. The authors conclude that the use of two-stream convolutional and recurrent neural network architectures is generally promising for the efficient extraction and processing of spatial and temporal features, thus addressing the problems of dynamic gestures and coarticulation. This solution, however, depends heavily on the quality and availability of datasets. Practical relevance: This review can be considered a contribution to the study of rapidly developing sign language recognition, irrespective of any particular natural sign language. The results of the work can be used in the development of software systems for automatic gesture and sign language recognition.
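The two-stream idea the authors highlight can be sketched in miniature: one stream scores individual frames (spatial appearance), the other scores frame-to-frame differences (motion), and the class scores are fused late. Everything below, including the linear "streams", the dimensions, and the fusion by averaging, is a toy assumption, not any published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_stream(frames, W):
    # Per-frame class scores from appearance alone: (T, D) @ (D, C) -> (T, C)
    return frames @ W

def temporal_stream(frames, W):
    # Frame-to-frame differences stand in for motion features: (T-1, D)
    return np.diff(frames, axis=0) @ W

def two_stream_predict(frames, Ws, Wt):
    s = spatial_stream(frames, Ws).mean(axis=0)   # pool scores over time
    t = temporal_stream(frames, Wt).mean(axis=0)
    return int(np.argmax((s + t) / 2))            # late fusion by averaging

T, D, C = 16, 8, 5                 # frames, feature dim, gesture classes
frames = rng.normal(size=(T, D))   # a fake "video" of flattened frames
Ws = rng.normal(size=(D, C))
Wt = rng.normal(size=(D, C))
pred = two_stream_predict(frames, Ws, Wt)
print(pred)
```

In a real system the two `W` matrices would be deep convolutional and recurrent networks; the late-fusion structure is the part the review's conclusion refers to.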


Author(s):  
Dan Guo ◽  
Shuo Wang ◽  
Qi Tian ◽  
Meng Wang

Sign language translation (SLT), which aims at translating a sign language video into natural language, is a weakly supervised task, given that there is no exact mapping between visual actions and the textual words in a sentence label. To align the sign language actions and translate them into the respective words automatically, this paper proposes a dense temporal convolution network, termed DenseTCN, which captures the actions in hierarchical views. Within this network, a temporal convolution (TC) is designed to learn the short-term correlation among adjacent features and is further extended to a dense hierarchical structure. In the k-th TC layer, we integrate the outputs of all preceding layers: (1) the TC in a deeper layer essentially has a larger receptive field, which captures long-term temporal context through the hierarchical content transition; (2) the integration addresses the SLT problem from different views, including embedded short-term and extended long-term sequential learning. Finally, we adopt the CTC loss and a fusion strategy to learn the feature-wise classification and generate the translated sentence. The experimental results on two popular sign language benchmarks, i.e., PHOENIX and USTCConSents, demonstrate the effectiveness of our proposed method in terms of various measurements.
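The dense connectivity described in the abstract can be illustrated with a toy version: each temporal convolution layer takes the concatenation of the input and all preceding layers' outputs, so deeper layers see progressively longer temporal context. The shapes, the causal padding, and the ReLU below are assumptions for illustration, not the published DenseTCN.

```python
import numpy as np

def temporal_conv(x, kernel):
    # Causal 1-D convolution over time: x is (T, D), kernel is (k, D, D_out).
    k = kernel.shape[0]
    pad = np.pad(x, ((k - 1, 0), (0, 0)))
    out = np.zeros((x.shape[0], kernel.shape[2]))
    for t in range(x.shape[0]):
        out[t] = np.einsum("kd,kdo->o", pad[t:t + k], kernel)
    return np.maximum(out, 0)  # ReLU

def dense_tcn(x, kernels):
    feats = [x]
    for kernel in kernels:
        inp = np.concatenate(feats, axis=1)       # dense connectivity: all
        feats.append(temporal_conv(inp, kernel))  # preceding outputs feed in
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(0)
T, D, H, k = 10, 4, 3, 3           # time steps, input dim, growth, kernel size
x = rng.normal(size=(T, D))
# Each layer's input width grows as features accumulate: D, D+H, D+2H, ...
kernels = [rng.normal(size=(k, D + i * H, H)) * 0.1 for i in range(3)]
y = dense_tcn(x, kernels)
print(y.shape)  # (10, 13): 4 input dims plus 3 layers of 3 dims each
```

The growing input width is the mechanism behind point (1) in the abstract: a layer three levels deep effectively sees features computed over a much longer time span than its own kernel size.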


2020 ◽  
Vol 37 (4) ◽  
pp. 571-608
Author(s):  
Diane Brentari ◽  
Laura Horton ◽  
Susan Goldin-Meadow

Abstract Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
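The constraint-ranking argument can be illustrated with a minimal Optimality Theory evaluator: each candidate receives a tuple of violation counts ordered by the constraint ranking, and the winner is the candidate with the lexicographically smallest profile. The constraints and candidates below are invented stand-ins, not the paper's actual constraint set; the point is only that reranking the same two constraints flips the winner.

```python
def ot_winner(candidates, ranked_constraints):
    # Violation profile ordered by ranking; lexicographic minimum wins.
    def profile(cand):
        return tuple(c(cand) for c in ranked_constraints)
    return min(candidates, key=profile)

# Invented constraints over toy verb-phrase encodings:
one_verb = lambda vp: len(vp["verbs"]) - 1    # penalise multi-verb predicates
no_redund = lambda vp: vp["redundant_marks"]  # penalise redundant marking

candidates = [
    {"name": "single-verb", "verbs": ["V1"], "redundant_marks": 1},
    {"name": "two-verb", "verbs": ["V1", "V2"], "redundant_marks": 0},
]

# Ranking A: *MULTI-VERB >> *REDUNDANCY, the single-verb candidate wins.
print(ot_winner(candidates, [one_verb, no_redund])["name"])  # single-verb
# Ranking B: *REDUNDANCY >> *MULTI-VERB, the two-verb candidate wins.
print(ot_winner(candidates, [no_redund, one_verb])["name"])  # two-verb
```

This is the formal sense in which crosslinguistic variation can be "captured in Optimality Theory": the languages share the constraints but differ in how they rank them.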


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Gustaf Halvardsson ◽  
Johanna Peterson ◽  
César Soto-Valero ◽  
Benoit Baudry

The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems for accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model consists of a pre-trained InceptionV3 network trained with the mini-batch gradient descent optimization algorithm. We rely on transfer learning during the pre-training of the model and its data. The final accuracy of the model, based on 8 study subjects and 9,400 images, is 85%. Our results indicate that the usage of CNNs is a promising approach to interpreting sign languages, and that transfer learning can be used to achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model to interpret signs as a user-friendly web application.
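The mini-batch gradient descent optimiser mentioned above can be sketched on a toy problem. The code trains a linear (logistic) classifier rather than InceptionV3, and the data and hyperparameters are invented; it shows only the optimisation loop itself: shuffle, slice into mini-batches, and take a gradient step per batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_sgd(X, y, lr=0.1, batch=8, epochs=50):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)             # reshuffle every epoch
        for start in range(0, n, batch):
            b = idx[start:start + batch]     # one mini-batch of indices
            p = 1 / (1 + np.exp(-X[b] @ w))  # sigmoid predictions
            w -= lr * X[b].T @ (p - y[b]) / len(b)  # logistic-loss step
    return w

# Invented, linearly separable toy data (label = sign of x0 + x1).
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = minibatch_sgd(X, y)
acc = float(np.mean(((X @ w) > 0) == y))
print(acc)
```

In the transfer-learning setting of the paper, the frozen InceptionV3 backbone would supply the feature vectors `X`, and only the new classification head would be updated by a loop of this shape.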


Author(s):  
Anjali Kanvinde ◽  
Abhishek Revadekar ◽  
Mahesh Tamse ◽  
Dhananjay R. Kalbande ◽  
Nida Bakereywala

2021 ◽  
Vol 14 (2) ◽  
pp. 1-45
Author(s):  
Danielle Bragg ◽  
Naomi Caselli ◽  
Julie A. Hochgesang ◽  
Matt Huenerfauth ◽  
Leah Katz-Hernandez ◽  
...  

Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex, involving individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf cultural identity, coupled with a history of oppression, makes its usage by technologists particularly sensitive. This piece presents many of the issues that characterize working with sign language AI datasets, based on the authors’ experiences living, working, and studying in this space.

