Phonological Proximity in Costa Rican Sign Language

Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1302
Author(s):  
Luis Naranjo-Zeledón ◽  
Mario Chacón-Rivas ◽  
Jesús Peral ◽  
Antonio Ferrández

The study of phonological proximity makes it possible to establish a basis for future decision-making in the treatment of sign languages. Knowing how close a set of signs is allows interested parties to more easily decide how to study them through clustering, as well as how to teach the language to third parties based on similarities. In addition, it lays the foundation for strengthening disambiguation modules in automatic recognition systems. To the best of our knowledge, this is the first study of its kind for Costa Rican Sign Language (LESCO, for its Spanish acronym), and it forms the basis for one of the modules of the already operational sign and speech editing system called the International Platform for Sign Language Edition (PIELS). A database of 2665 signs, grouped into eight contexts, is used, and a comparison of similarity measures is made, using standard statistical formulas to measure their degree of correlation. This corpus will be especially useful in machine learning approaches. In this work, we have proposed an analysis of different similarity measures between signs in order to find out the phonological proximity between them. After analyzing the results obtained, we can conclude that LESCO is a sign language with high levels of phonological proximity, particularly in the orientation and location components, while levels are noticeably lower in the form component. A further noteworthy contribution of our research is the conclusion that automatic recognition systems can base their first prototypes on the contexts or sign domains that map to clusters with lower levels of similarity. As mentioned, the results obtained have multiple applications, such as in the teaching area or in Natural Language Processing for automatic recognition tasks.
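As a rough illustration of the kind of comparison the abstract describes, the sketch below computes two set-based similarity measures (Jaccard and overlap, chosen here for simplicity) over a toy inventory of signs described by form, orientation and location features, and correlates the two measures with Pearson's r. The feature encoding and the sign entries are invented for illustration and do not reflect the LESCO/PIELS representation.

```python
# Hypothetical sketch: comparing two similarity measures over a toy sign inventory.
# The feature encoding below is invented and does not reproduce the paper's data.
from itertools import combinations
from scipy.stats import pearsonr

# Each sign is described by phonological components (toy values).
signs = {
    "HOUSE":  {"form": {"flat_hand"}, "orientation": {"palms_in"}, "location": {"neutral_space"}},
    "SCHOOL": {"form": {"flat_hand"}, "orientation": {"palms_in"}, "location": {"neutral_space"}},
    "MOTHER": {"form": {"index_ext"}, "orientation": {"palm_left"}, "location": {"chin"}},
}

def jaccard(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def overlap(a, b):
    """Overlap coefficient: intersection over the smaller set."""
    return len(a & b) / min(len(a), len(b)) if a and b else 1.0

def sign_similarity(s1, s2, measure):
    """Average a set-based measure over the three phonological components."""
    comps = ("form", "orientation", "location")
    return sum(measure(s1[c], s2[c]) for c in comps) / len(comps)

# Pairwise similarities under both measures, then their Pearson correlation.
pairs = list(combinations(signs, 2))
jac = [sign_similarity(signs[a], signs[b], jaccard) for a, b in pairs]
ove = [sign_similarity(signs[a], signs[b], overlap) for a, b in pairs]
r, p = pearsonr(jac, ove)
print(f"Pearson r between measures: {r:.3f} (p={p:.3f})")
```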

Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 1047 ◽  
Author(s):  
Luis Naranjo-Zeledón ◽  
Jesús Peral ◽  
Antonio Ferrández ◽  
Mario Chacón-Rivas

Sign languages (SL) are the first language for most deaf people. Consequently, bidirectional communication between deaf and non-deaf people has always been a challenging issue. Sign language usage has increased due to inclusion policies and general public agreement, which must then become evident in information technologies, in the many facets that comprise sign language understanding and its computational treatment. In this study, we conduct a thorough systematic mapping of translation-enabling technologies for sign languages. This mapping has considered the most recommended guidelines for systematic reviews, i.e., those pertaining to software engineering, since there is a need to account for the interdisciplinary areas of accessibility, human-computer interaction, natural language processing, and education, all of them part of the ACM (Association for Computing Machinery) computing classification system and directly related to software engineering. The ongoing development of a software tool called SYMPLE (SYstematic Mapping and Parallel Loading Engine) facilitated the querying and construction of a base set of candidate studies. A great diversity of topics has been studied over the last 25 years or so, but this systematic mapping allows for comfortable visualization of predominant areas, venues, top authors, and different measures of concentration and dispersion. The systematic review clearly shows a large number of classifications and subclassifications interspersed over time. This is an area of study in which there is much interest, with a basically steady level of scientific publications over the last decade, concentrated mainly in Europe. The publications by country, nevertheless, usually favor the local sign language.


2021 ◽  
Vol 6 ◽  
Author(s):  
Karen Emmorey

The first 40 years of research on the neurobiology of sign languages (1960–2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15–20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.


Author(s):  
Dmitry Ryumin ◽  
Ildar Kagirov ◽  
Alexander Axyonov ◽  
Alexey Karpov

Introduction: Currently, the recognition of gestures and sign languages is one of the most intensively developing areas in computer vision and applied linguistics. The results of current investigations are applied in a wide range of areas, from sign language translation to gesture-based interfaces. In this regard, various systems and methods for the analysis of gestural data are being developed. Purpose: A detailed review of methods and a comparative analysis of current approaches to the automatic recognition of gestures and sign languages. Results: The main gesture recognition problems are the following: detection of articulators (mainly hands), pose estimation, and segmentation of gestures in the flow of speech. The authors conclude that the use of two-stream convolutional and recurrent neural network architectures is generally promising for efficient extraction and processing of spatial and temporal features, thus solving the problem of dynamic gestures and coarticulations. This solution, however, heavily depends on the quality and availability of datasets. Practical relevance: This review can be considered a contribution to the study of rapidly developing sign language recognition, irrespective of particular natural sign languages. The results of the work can be used in the development of software systems for automatic gesture and sign language recognition.
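The two-stream convolutional-plus-recurrent design mentioned in the conclusions can be sketched roughly as follows: one stream encodes RGB frames, the other optical-flow frames, and a recurrent layer aggregates the fused per-frame features over time. All layer sizes and the PyTorch implementation below are illustrative assumptions, not a reconstruction of any specific system from the review.

```python
# Minimal PyTorch sketch of a two-stream CNN + recurrent classifier for dynamic
# gestures, assuming RGB frames and precomputed optical-flow frames as inputs.
# Layer sizes are illustrative only.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Small per-frame CNN shared across the time steps of one stream."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x):                      # x: (batch, time, C, H, W)
        b, t = x.shape[:2]
        f = self.net(x.flatten(0, 1))          # (batch*time, feat_dim)
        return f.view(b, t, -1)                # (batch, time, feat_dim)

class TwoStreamGestureNet(nn.Module):
    """Spatial (RGB) and temporal (optical flow) streams fused before a GRU."""
    def __init__(self, num_classes, feat_dim=128, hidden=256):
        super().__init__()
        self.rgb_enc = FrameEncoder(3, feat_dim)
        self.flow_enc = FrameEncoder(2, feat_dim)     # 2-channel flow (dx, dy)
        self.gru = nn.GRU(2 * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, rgb, flow):
        feats = torch.cat([self.rgb_enc(rgb), self.flow_enc(flow)], dim=-1)
        _, h = self.gru(feats)                 # h: (1, batch, hidden)
        return self.head(h[-1])                # gesture logits per sequence

# Toy usage: batch of 4 clips, 16 frames each, 20 gesture classes.
model = TwoStreamGestureNet(num_classes=20)
rgb = torch.randn(4, 16, 3, 112, 112)
flow = torch.randn(4, 16, 2, 112, 112)
print(model(rgb, flow).shape)                  # torch.Size([4, 20])
```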


Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 310
Author(s):  
Valentin Belissen ◽  
Annelies Braffort ◽  
Michèle Gouiffès

Sign Languages (SLs) are visual–gestural languages that have developed naturally in deaf communities. They are based on the use of lexical signs, that is, conventionalized units, as well as highly iconic structures, i.e., structures in which the form of an utterance and the meaning it carries are not independent. Although most research in automatic Sign Language Recognition (SLR) has focused on lexical signs, we wish to broaden this perspective and consider the recognition of non-conventionalized iconic and syntactic elements. We propose the use of corpora made by linguists, such as the finely and consistently annotated dialogue corpus Dicta-Sign-LSF-v2. We then redefine the problem of automatic SLR as the recognition of linguistic descriptors, with carefully thought-out performance metrics. Moreover, we develop a compact and generalizable representation of signers in videos by parallel processing of the hands, face, and upper body, followed by an adapted learning architecture based on a Recurrent Convolutional Neural Network (RCNN). Through a study focused on the recognition of four linguistic descriptors, we show the soundness of the proposed approach and pave the way for a wider understanding of Continuous Sign Language Recognition (CSLR).
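A hedged sketch of the general idea, under the assumption that hand, face and upper-body features have already been extracted per frame: parallel branches embed each channel, a temporal convolution plus a recurrent layer model the sequence, and a sigmoid head emits frame-wise logits for a handful of linguistic descriptors. All dimensions and the four-descriptor output are placeholders, not the paper's actual configuration.

```python
# Hedged sketch: frame-wise recognition of linguistic descriptors from a compact
# signer representation (separate hand, face and upper-body feature vectors).
# Feature dimensions and the 4 descriptor labels are placeholders.
import torch
import torch.nn as nn

class DescriptorRCNN(nn.Module):
    def __init__(self, hand_dim=42, face_dim=20, body_dim=16,
                 hidden=128, num_descriptors=4):
        super().__init__()
        # Parallel branches embed each body part independently.
        self.hand_fc = nn.Linear(hand_dim, 64)
        self.face_fc = nn.Linear(face_dim, 32)
        self.body_fc = nn.Linear(body_dim, 32)
        # Temporal convolution + bidirectional recurrence over the fused features.
        self.tconv = nn.Conv1d(64 + 32 + 32, hidden, kernel_size=5, padding=2)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_descriptors)  # one logit per descriptor

    def forward(self, hands, face, body):      # each: (batch, time, dim)
        fused = torch.cat([torch.relu(self.hand_fc(hands)),
                           torch.relu(self.face_fc(face)),
                           torch.relu(self.body_fc(body))], dim=-1)
        x = torch.relu(self.tconv(fused.transpose(1, 2))).transpose(1, 2)
        x, _ = self.rnn(x)
        return self.head(x)                    # (batch, time, num_descriptors) logits

# Frame-level multi-label training uses a sigmoid/BCE objective.
model = DescriptorRCNN()
hands, face, body = (torch.randn(2, 100, d) for d in (42, 20, 16))
logits = model(hands, face, body)
targets = torch.randint(0, 2, logits.shape).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)
print(logits.shape, loss.item())
```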


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1739
Author(s):  
Hamzah Luqman ◽  
El-Sayed M. El-Alfy

Sign languages are the main visual communication medium between hard-of-hearing people and their societies. Similar to spoken languages, they are not universal and vary from region to region, but they are relatively under-resourced. Arabic Sign Language (ArSL) is one of these languages and has attracted increasing attention in the research community. However, most of the existing and available work on sign language recognition systems focuses on manual gestures, ignoring non-manual information, such as facial expressions, that is needed for other language signals. One of the main challenges of not considering these modalities is the lack of suitable datasets. In this paper, we propose a new multi-modality ArSL dataset that integrates various types of modalities. It consists of 6748 video samples of fifty signs performed by four signers and collected using Kinect V2 sensors. This dataset will be freely available for researchers to develop and benchmark their techniques for further advancement of the field. In addition, we evaluated the fusion of spatial and temporal features of different modalities, manual and non-manual, for sign language recognition using state-of-the-art deep learning techniques. This fusion boosted the accuracy of the recognition system in the signer-independent mode by 3.6% compared with manual gestures alone.
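As a simplified stand-in for the feature-level fusion evaluated in the paper, the toy sketch below shows score-level (late) fusion of two hypothetical single-modality classifiers, one for manual gestures and one for non-manual cues such as facial expressions; all probabilities and the fusion weight are fabricated for illustration.

```python
# Toy sketch of late (score-level) fusion of modality-specific classifiers.
# The per-class probabilities below are fabricated placeholders.
import numpy as np

num_signs = 50
rng = np.random.default_rng(0)

# Per-class probabilities from two hypothetical single-modality models
# for one test sample (rows would be samples in a real evaluation).
p_manual = rng.dirichlet(np.ones(num_signs))
p_nonmanual = rng.dirichlet(np.ones(num_signs))

# Weighted-sum fusion; the weight would be tuned on validation data.
w = 0.7
p_fused = w * p_manual + (1 - w) * p_nonmanual

print("manual-only prediction:", int(p_manual.argmax()))
print("fused prediction:      ", int(p_fused.argmax()))
```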


2015 ◽  
Vol 112 (37) ◽  
pp. 11684-11689 ◽  
Author(s):  
Aaron J. Newman ◽  
Ted Supalla ◽  
Nina Fernandez ◽  
Elissa L. Newport ◽  
Daphne Bavelier

Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual–manual modality with a nonlinguistic symbolic communicative system—gesture—further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages—supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network—demonstrating an influence of experience on the perception of nonlinguistic stimuli.


2021 ◽  
Vol 105 ◽  
pp. 263-271
Author(s):  
Muhammad Yasir ◽  
Chen Li ◽  
Muhammad Amir Malik

Sign languages display the same linguistic characteristics as oral languages and utilize the same language services. Sign language processing solutions provide a communication link between persons with hearing impairments and hearing persons. Without the ability to understand these signs, deaf children experience several challenges in learning social norms and cannot interact with adults to exchange knowledge. Parents find it challenging to convey their messages to their deaf children and to understand their children in turn. This paper focuses on establishing Urdu Sign Language recognition to reduce the communication barrier between hearing people and hearing-impaired people. The present study observed Urdu Sign Language in deaf children. In this paper, a process for detecting Urdu Sign Language alphabets is proposed. All 37 alphabets are identified using KNN, ANN, and SVM classifiers. Through these alphabets, teachers at schools and parents at home can communicate efficiently with their deaf children. The Histogram of Oriented Gradients (HOG) technique is used for feature extraction. The maximum accuracy, 99%, is obtained with the KNN classifier, which is a significant contribution. Our results are comparable to state-of-the-art techniques.
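A minimal sketch of the HOG-plus-KNN pipeline described above, run on synthetic images because the Urdu alphabet dataset itself is not reproduced here; the image size, HOG parameters, and neighbour count are assumptions.

```python
# Hedged sketch of a HOG + KNN alphabet classifier on synthetic stand-in data.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in data: 37 "alphabet" classes, 20 random 64x64 grayscale images each.
images = rng.random((37 * 20, 64, 64))
labels = np.repeat(np.arange(37), 20)

# HOG descriptors as the feature representation.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, stratify=labels, random_state=0)

# K-nearest-neighbours classification of the alphabet signs.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(f"accuracy on synthetic data: {knn.score(X_test, y_test):.2f}")
```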


2010 ◽  
Vol 13 (2) ◽  
pp. 183-199 ◽  
Author(s):  
Evie Malaia ◽  
Ronnie B. Wilbur

Early acquisition of a natural language, signed or spoken, has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life (Mayberry 2007, 2009). This review summarizes a number of recent neuroimaging studies in order to detail the neural bases of sign language acquisition. The logic of this review is to present research reports that contribute to the bigger picture showing that people who acquire a natural language, spoken or signed, in the normal way possess specialized linguistic abilities and brain functions that are missing or deficient in people whose exposure to natural language is delayed or absent. Comparing the function of each brain region with regard to the processing of spoken and sign languages, we attempt to clarify the role each region plays in language processing in general, and to outline the challenges and remaining questions in understanding language processing in the brain.


Author(s):  
D. Ryumin ◽  
A. A. Karpov

In this article, we propose a new method for parametric representation of the human lip region. The functional diagram of the method is described, and implementation details are given with an explanation of its key stages and features. The results of automatic detection of the regions of interest are illustrated. The speed of the method on several computers with different performance levels is reported. This universal method allows applying a parametric representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, sign language elements, and audio-visual speech, including lip-reading.
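The article's own detection pipeline is not detailed in this abstract; as a generic illustration only, the sketch below locates a face with an OpenCV Haar cascade and crops the lower third of the face box as a rough lip region of interest. The file paths and the crop heuristic are placeholders.

```python
# Rough, generic sketch of lip ROI extraction; not the method described above.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lip_roi(frame_bgr):
    """Return the cropped lip region of the first detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Heuristic: lips occupy roughly the lower third of the face bounding box.
    return frame_bgr[y + 2 * h // 3 : y + h, x : x + w]

# Example usage on an image file (the path is a placeholder).
frame = cv2.imread("speaker_frame.jpg")
if frame is not None:
    roi = lip_roi(frame)
    if roi is not None:
        cv2.imwrite("lip_roi.jpg", roi)
```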


2017 ◽  
Author(s):  
Sabrina Jaeger ◽  
Simone Fulle ◽  
Samo Turk

Inspired by natural language processing techniques, we here introduce Mol2vec, an unsupervised machine learning approach to learn vector representations of molecular substructures. Similarly to the Word2vec models, where vectors of closely related words are in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing up the vectors of the individual substructures and, for instance, fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can be easily combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment-independent and can thus also be easily used for proteins with low sequence similarities.
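A toy, hedged sketch of the underlying idea: treat Morgan substructure identifiers as words and molecules as sentences, train a skip-gram Word2vec model, and embed a compound as the sum of its substructure vectors. This is not the published Mol2vec pipeline (which uses ordered atom-wise sentences and a large pretrained corpus); the vector size, radius, and tiny SMILES list are placeholders, and a gensim >= 4 API is assumed.

```python
# Toy Mol2vec-style sketch: Word2vec over Morgan substructure identifiers.
import numpy as np
from gensim.models import Word2Vec
from rdkit import Chem
from rdkit.Chem import AllChem

def substructure_sentence(smiles, radius=1):
    """Morgan substructure identifiers of a molecule as a list of string tokens."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprint(mol, radius)
    return [str(idx) for idx in fp.GetNonzeroElements()]

# Tiny toy corpus; a real corpus would span millions of compounds.
corpus_smiles = ["CCO", "CCN", "c1ccccc1O", "CC(=O)O", "c1ccccc1N"]
sentences = [substructure_sentence(s) for s in corpus_smiles]

# Skip-gram Word2vec over the substructure corpus.
model = Word2Vec(sentences, vector_size=32, window=5, min_count=1, sg=1, epochs=50)

def embed_compound(smiles):
    """Compound vector = sum of its substructure vectors (unknown tokens skipped)."""
    tokens = substructure_sentence(smiles)
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.sum(vecs, axis=0) if vecs else np.zeros(model.vector_size)

print(embed_compound("CCO").shape)   # (32,)
```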

