Phonological priming in German Sign Language: An eye tracking study using the Visual World Paradigm

2019
Author(s):
Anne Wienholz
Derya Nuhbalaoglu
Markus Steinbach
Annika Herrmann
Nivedita Mani

Various studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters, i.e., handshape, location, and movement. In addition, some of these studies show that the individual phonological parameters influence this effect to different degrees. The current eye tracking study on German Sign Language examined the presence of a phonological priming effect at the sentence level, depending on the phonological relation between prime-target sign pairs. We recorded participants’ eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, together with pictures of the target and a distractor. The data provided evidence for a phonological priming effect for sign pairs sharing handshape and movement while differing in location. Moreover, the parameters appeared to differ in their contribution to sign recognition: recognition was facilitated for signs sharing handshape but inhibited for signs sharing location. These findings show that sub-lexical features influence sign language processing.

Author(s):  
Anne Wienholz
Derya Nuhbalaoglu
Markus Steinbach
Annika Herrmann
Nivedita Mani

Abstract: A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters, and suggest that the specific parameters manipulated can influence the robustness of this effect. This eye tracking study on German Sign Language examined phonological priming effects at the sentence level while varying the phonological relationship between prime-target sign pairs. We recorded participants’ eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in location. Taken together, the data suggest a difference in the contribution of the individual sign parameters to sign recognition and show that sub-lexical features influence sign language processing.
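As a purely hypothetical illustration of how such a sentence-level priming effect can be quantified (this is not the authors' analysis pipeline, and the file name and column names are assumptions), one can compare the proportion of gaze samples on the target picture between related and unrelated prime conditions:

```python
# Hypothetical illustration only (not the authors' analysis pipeline):
# quantify priming as the proportion of gaze samples on the target picture,
# compared between related and unrelated prime conditions.
import pandas as pd

fixations = pd.read_csv("gaze_samples.csv")  # one row per gaze sample (assumed format)

# restrict to an assumed analysis window after target-sign onset
window = fixations[(fixations["time_ms"] >= 200) & (fixations["time_ms"] <= 1000)]

# per participant and condition, proportion of samples on the target picture
props = (
    window.assign(on_target=(window["aoi"] == "target").astype(float))
          .groupby(["participant", "condition"])["on_target"]
          .mean()
          .unstack("condition")
)

# a positive mean difference indicates more target looks after related primes
print((props["related"] - props["unrelated"]).describe())
```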


2014
Vol 15 (1)
Author(s):
Barbara Hänel-Faulhaber
Nils Skotara
Monique Kügow
Uta Salden
Davide Bottari
...  

Author(s):  
Shadman A. Khan
Zulfikar Ali Ansari
Riya Singh
Mohit Singh Rawat
Fiza Zafar Khan
...  

Artificial Intelligence (AI) technologies are new technologies whose complex capabilities are emerging quickly. Their adoption has benefited many general-purpose models, including those behind voice user-interface assistants such as Alexa, Cortana, and Siri. Voice assistants are easy to use, and millions of household devices now incorporate them. The primary purpose of the sign language translator prototype is to reduce interaction barriers between deaf or mute people and those around them. To address this problem, we have proposed a prototype, named Sign Language Translator with Sign Recognition Intelligence, which takes the user's input in sign language, processes it, and returns the output to the end user as speech read out loud.
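A minimal sketch of such a sign-to-speech pipeline is shown below, assuming a trained sign-recognition model is available; `classify_frame` is a hypothetical placeholder and none of this is the prototype's actual code.

```python
# Minimal sketch of a sign-to-speech pipeline (not the prototype's code):
# capture a frame, classify the sign, and speak the label out loud.
import cv2       # video capture (OpenCV)
import pyttsx3   # offline text-to-speech

def classify_frame(frame) -> str:
    """Hypothetical stand-in: map one video frame to a sign label."""
    raise NotImplementedError("plug in a trained sign-recognition model here")

def sign_to_speech(camera_index: int = 0) -> None:
    engine = pyttsx3.init()
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    if ok:
        label = classify_frame(frame)   # e.g. "HELLO"
        engine.say(label)               # speak the recognized sign out loud
        engine.runAndWait()
    cap.release()
```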


2020
Author(s):
Patrick C. Trettenbrein
Nina-Kristin Pendzich
Jens-Michael Cramer
Markus Steinbach
Emiliano Zaccarella

Sign language offers a unique perspective on the human faculty of language by illustrating that linguistic abilities are not bound to speech and writing. In studies of spoken and written language processing, lexical variables such as, for example, age of acquisition have been found to play an important role, but such information is not as yet available for German Sign Language (Deutsche Gebärdensprache, DGS). Here, we present a set of norms for frequency, age of acquisition, and iconicity for more than 300 lexical DGS signs, derived from subjective ratings by 32 deaf signers. We also provide additional norms for iconicity and transparency for the same set of signs derived from ratings by 30 hearing non-signers. In addition to empirical norming data, the dataset includes machine-readable information about a sign’s correspondence in German and English, as well as annotations of lexico-semantic and phonological properties: one-handed vs. two-handed, place of articulation, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Finally, we include information about sign onset and offset for all stimulus clips from automated motion-tracking data. All norms, stimulus clips, data, as well as code used for analysis are made available through the Open Science Framework in the hope that they may prove to be useful to other researchers: https://osf.io/mz8j4/
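The sketch below illustrates how the published norms might be queried once downloaded from the OSF repository (https://osf.io/mz8j4/); the file name and column names are assumptions for illustration, not the dataset's actual schema.

```python
# Sketch of querying the DGS norms; file name and column names are assumed.
import pandas as pd

norms = pd.read_csv("dgs_sign_norms.csv")

# e.g. one-handed signs that deaf signers rated as highly iconic
iconic_one_handed = norms[
    (norms["handedness"] == "one-handed") & (norms["iconicity_deaf"] >= 5.0)
]
print(iconic_one_handed[["gloss_german", "gloss_english", "iconicity_deaf"]])
```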




2011
Vol 14 (1)
pp. 76-93
Author(s):  
Jana Hosemann

Eye gaze as a nonmanual component of sign languages has not yet been investigated in much detail. The idea that eye gaze may function as an agreement marker was brought forward by Bahan (1996) and Neidle et al. (2000), who argued that eye gaze is an independent agreement marker occurring with all three verb types (plain verbs, spatial verbs, and agreeing verbs) in American Sign Language (ASL). Thompson et al. (2006) conducted an eye-tracking experiment to investigate the interdependency between eye gaze and ASL verb agreement in depth. Their results indicate that eye gaze in ASL functions as an agreement marker only when accompanying manual agreement, marking the object in agreeing verbs and the locative argument in spatial verbs. They conclude that eye gaze is part of an agreement circumfix. Subsequently, I conducted an eye-tracking experiment to investigate the correlation of eye gaze and manual agreement for verbs in German Sign Language (DGS). The results differ from Thompson et al.’s, since eye gaze with agreeing verbs in the DGS data did not occur as systematically as in ASL. Nevertheless, an analysis of verb duration and the spreading of a correlating eye gaze suggests that there is a dependency relation between eye gaze and manual agreement.


2020
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role in language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing, and such data have been widely used to investigate the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, the corpora, and the statistical tools they applied. Although various computational models have been proposed for simulating contextual word prediction, past studies have usually relied on a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study uses a large, natural, and coherent discourse as the stimulus for collecting reading-time data. It trains two state-of-the-art computational models, surprisal and semantic (dis)similarity derived from word vectors via linear discriminative learning (LDL), which measure knowledge of the syntagmatic and paradigmatic structure of language, respectively. We develop a `dynamic approach' to computing semantic (dis)similarity; this is the first time these two computational models have been combined. The models are evaluated using advanced statistical methods. To test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is used as a comparison to our `dynamic' approach. The two computational models and fixed-effect statistical models are used to cross-verify the findings, ensuring that the results are reliable. All results support the view that surprisal and semantic similarity make opposing contributions to predicting word reading times, although both are good predictors. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore significant for acquiring a better understanding of how humans process words in real-world contexts and how they make predictions in language cognition and processing.
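For concreteness, the sketch below shows the two kinds of word-level predictors the study contrasts: per-token surprisal from a language model and cosine (dis)similarity between word vectors. This is not the paper's LDL pipeline; the choice of GPT-2 and its static embeddings is an assumption for illustration.

```python
# Minimal sketch (not the paper's LDL pipeline): per-token surprisal from a
# pretrained language model and cosine (dis)similarity between word vectors.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(sentence: str):
    """Surprisal of token t is -log P(token_t | preceding tokens)."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    surprisals = -logprobs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    return list(zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()),
                    surprisals.tolist()))

def cosine_dissimilarity(v1: torch.Tensor, v2: torch.Tensor) -> float:
    """1 - cosine similarity, a simple stand-in for semantic (dis)similarity."""
    return 1.0 - torch.nn.functional.cosine_similarity(v1, v2, dim=0).item()

print(token_surprisals("The linguist recorded the participants."))
emb = lm.transformer.wte.weight.detach()      # static token embeddings
dog, cat = emb[tok.encode(" dog")[0]], emb[tok.encode(" cat")[0]]
print(cosine_dissimilarity(dog, cat))
```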


2020
Vol 14
Author(s):
Vasu Mehra
Dhiraj Pandey
Aayush Rastogi
Aditya Singh
Harsh Preet Singh

Background: People with hearing and speaking disabilities have only a few ways of communicating with others; one of these is sign language. Objective: Developing a sign language recognition system is therefore essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able-bodied person, removing hindrances in the exchange of ideas. Most existing systems are poorly designed and offer limited support for day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech components are included to further assist affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. Results: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances make it both possible and necessary to develop reliable solutions that can be deployed to help deaf and mute people adjust to everyday life. Instead of focusing on a standalone technology, this work brings several of them together. The proposed sign recognition system is based on feature extraction and classification; the trained model helps identify different gestures.
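As a rough illustration of the kind of TensorFlow/Keras classifier described in the Methods, a minimal sketch follows; the layer sizes, input shape, and number of gesture classes are assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a Keras gesture classifier (layer sizes and class count assumed).
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed: one class per static gesture

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # one RGB frame per gesture
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_frames, train_labels, epochs=10, validation_split=0.1)
```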


2021
Vol 11 (8)
pp. 3439
Author(s):
Debashis Das Chakladar
Pradeep Kumar
Shubham Mandal
Partha Pratim Roy
Masakazu Iwamura
...  

Sign language is a visual language used by hearing-impaired people to communicate with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard means of communication for hearing-impaired people living in India. However, people who use spoken language face difficulty when communicating with a hearing-impaired person due to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech or text into the corresponding ISL sign movements. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a Sign Error Rate (SER) of 10.50.
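Sign Error Rate is typically computed analogously to word error rate, i.e., as the edit distance between the reference gloss sequence and the produced one, normalised by the reference length; the paper's exact definition may differ, and the gloss example below is purely illustrative.

```python
# Hedged sketch: SER computed like word error rate (Levenshtein distance
# between reference and produced gloss sequences over reference length).
from typing import List

def sign_error_rate(reference: List[str], hypothesis: List[str]) -> float:
    n, m = len(reference), len(hypothesis)
    # d[i][j]: minimum edits turning reference[:i] into hypothesis[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[n][m] / max(n, 1)

# one sign missing out of three reference signs -> SER of about 33.3
print(sign_error_rate(["YOU", "NAME", "WHAT"], ["YOU", "WHAT"]))
```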


Author(s):  
Dang Van Thin
Ngan Luu-Thuy Nguyen
Tri Minh Truong
Lac Si Le
Duy Tin Vo

Aspect-based sentiment analysis has been studied in both the research and industrial communities in recent years. For low-resource languages, standard benchmark corpora play an important role in the development of methods. In this article, we introduce the two largest sentence-level benchmark corpora for two Vietnamese tasks: Aspect Category Detection and Aspect Polarity Classification. Our corpora, covering the restaurant and hotel domains, are annotated with high inter-annotator agreement. Their release should help push forward work on low-resource language processing. In addition, we deploy and compare the effectiveness of supervised learning methods using single-task and multi-task approaches based on deep learning architectures. Experimental results on our corpora show that the multi-task approach based on the BERT architecture outperforms the other neural network architectures and the single-task approach. Our corpora and source code are published on the site referenced in the footnote.
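The sketch below illustrates the multi-task idea in its simplest form: a shared BERT encoder with one head for aspect category detection and one for polarity classification. The model name, label counts, and the sentence-level simplification of polarity are assumptions for illustration, not the article's exact architecture.

```python
# Minimal multi-task sketch: shared BERT encoder, two task heads.
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # assumed encoder

class MultiTaskABSA(nn.Module):
    def __init__(self, num_aspects: int = 12, num_polarities: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        hidden = self.encoder.config.hidden_size
        self.aspect_head = nn.Linear(hidden, num_aspects)       # aspect categories
        self.polarity_head = nn.Linear(hidden, num_polarities)  # sentiment polarity

    def forward(self, **inputs):
        cls = self.encoder(**inputs).last_hidden_state[:, 0]    # [CLS] vector
        return self.aspect_head(cls), self.polarity_head(cls)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
batch = tok(["Phục vụ rất tốt"], return_tensors="pt")  # "The service is very good"
aspect_logits, polarity_logits = MultiTaskABSA()(**batch)
```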

