Phonological priming in German Sign Language

Author(s):  
Anne Wienholz ◽  
Derya Nuhbalaoglu ◽  
Markus Steinbach ◽  
Annika Herrmann ◽  
Nivedita Mani

Abstract A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters, and show that the specific parameters shared by prime and target can influence the robustness of this effect. This eye-tracking study on German Sign Language examined phonological priming effects at the sentence level while varying the phonological relationship between prime-target sign pairs. We recorded participants’ eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, together with pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in the location parameter. Taken together, the data suggest that the parameters of a sign differ in their contribution to sign recognition and that sub-lexical features influence sign language processing.

2019 ◽  
Author(s):  
Anne Wienholz ◽  
Derya Nuhbalaoglu ◽  
Markus Steinbach ◽  
Annika Herrmann ◽  
Nivedita Mani

Various studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters, i.e., handshape, location and movement. In addition, some of these studies show that these parameters influence the effect differently. The current eye-tracking study on German Sign Language examined the presence of a phonological priming effect at the sentence level depending on the phonological relation of prime-target sign pairs. We recorded participants’ eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, together with pictures of the target and a distractor. The data provided evidence for a phonological priming effect for sign pairs sharing handshape and movement while differing in location. Moreover, the parameters appeared to differ in their contribution to sign recognition: recognition was facilitated for signs sharing handshape but inhibited for signs sharing location, showing that sub-lexical features influence sign language processing.


2014 ◽  
Vol 15 (1) ◽  
Author(s):  
Barbara Hänel-Faulhaber ◽  
Nils Skotara ◽  
Monique Kügow ◽  
Uta Salden ◽  
Davide Bottari ◽  
...  

Author(s):  
Shadman A. Khan ◽  
Zulfikar Ali Ansari ◽  
Riya Singh ◽  
Mohit Singh Rawat ◽  
Fiza Zafar Khan ◽  
...  

Artificial Intelligence (AI) technologies are evolving rapidly, with new and increasingly sophisticated capabilities emerging all the time. Their adoption has benefited many general-purpose models, including those that power voice user-interface assistants (Alexa, Cortana, Siri). Voice assistants are easy to use, and millions of household devices now incorporate them. The primary purpose of the sign language translator prototype is to reduce interaction barriers between deaf and mute people and hearing people. To address this problem, we propose a prototype named Sign Language Translator with Sign Recognition Intelligence, which takes user input in sign language, processes it, and returns the output as speech spoken out loud to the end user.
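A minimal sketch of how such a sign-to-speech loop could look in Python, assuming a pretrained Keras gesture classifier and the off-the-shelf pyttsx3 text-to-speech engine; the model file, input size, and label vocabulary below are hypothetical placeholders, not artifacts of the prototype itself.

```python
# Hypothetical sign-to-speech sketch: classify a sign from one camera frame,
# then speak the predicted label out loud. "sign_classifier.h5" and the
# label list are placeholders, not components of the actual prototype.
import cv2                      # OpenCV for camera capture
import numpy as np
import pyttsx3                  # offline text-to-speech engine
from tensorflow.keras.models import load_model

LABELS = ["hello", "thanks", "yes", "no"]   # assumed gesture vocabulary
model = load_model("sign_classifier.h5")    # assumed pretrained classifier

engine = pyttsx3.init()
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # Preprocess the frame to the input shape the classifier expects.
    x = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...])[0]
    word = LABELS[int(np.argmax(probs))]
    engine.say(word)            # voice the recognized sign out loud
    engine.runAndWait()
cap.release()
```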


2020 ◽  
Author(s):  
Patrick C. Trettenbrein ◽  
Nina-Kristin Pendzich ◽  
Jens-Michael Cramer ◽  
Markus Steinbach ◽  
Emiliano Zaccarella

Sign language offers a unique perspective on the human faculty of language by illustrating that linguistic abilities are not bound to speech and writing. In studies of spoken and written language processing, lexical variables such as age of acquisition have been found to play an important role, but such information is not as yet available for German Sign Language (Deutsche Gebärdensprache, DGS). Here, we present a set of norms for frequency, age of acquisition, and iconicity for more than 300 lexical DGS signs, derived from subjective ratings by 32 deaf signers. We also provide additional norms for iconicity and transparency for the same set of signs derived from ratings by 30 hearing non-signers. In addition to empirical norming data, the dataset includes machine-readable information about a sign’s correspondence in German and English, as well as annotations of lexico-semantic and phonological properties: one-handed vs. two-handed, place of articulation, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Finally, we include information about sign onset and offset for all stimulus clips from automated motion-tracking data. All norms, stimulus clips, data, as well as code used for analysis are made available through the Open Science Framework in the hope that they may prove to be useful to other researchers: https://osf.io/mz8j4/
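For illustration, a short pandas snippet showing how a norms table of this kind could be queried; the file name and column names below are assumptions made for the sake of the example, not the dataset's documented schema, which is described in the OSF repository.

```python
# Illustrative only: querying a hypothetical export of the norming data.
# File name and columns are assumed; consult the OSF repository for the
# dataset's real structure.
import pandas as pd

norms = pd.read_csv("dgs_norms.csv")  # assumed CSV export of the norms

# E.g., select one-handed signs rated as highly iconic by deaf signers.
iconic_one_handed = norms[
    (norms["handedness"] == "one-handed") & (norms["iconicity_deaf"] >= 5.0)
]
print(iconic_one_handed[["gloss_german", "gloss_english", "age_of_acquisition"]])
```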




2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have few ways of communicating with other people. One of these is sign language. Objective: A sign language recognition system is essential for deaf as well as mute people. The recognition system acts as a translator between a disabled and an able-bodied person, removing hindrances in the exchange of ideas. Most existing systems are poorly designed and offer limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech components are introduced to further assist the affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries, as sketched below. Result: The proposed architecture works better than gesture recognition techniques such as background elimination and conversion to HSV because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf and mute person needs for day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to everyday life. Instead of focusing on a standalone technology, this work brings together a plethora of them. The proposed sign recognition system is based on feature extraction and classification; the trained model identifies different gestures.
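The abstract names TensorFlow and Keras but does not specify an architecture, so the following is only a generic sketch of the feature-extraction-plus-classification setup described, written as a small Keras CNN; the input shape and number of gesture classes are assumptions.

```python
# Generic sketch of a CNN gesture classifier in Keras, in the spirit of the
# feature-extraction-plus-classification setup described above. Frame size
# and class count are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumed: one class per manual alphabet letter

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed input frame size
    layers.Conv2D(32, 3, activation="relu"),  # low-level feature extraction
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # higher-level features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # gesture classification
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```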


2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language used by hearing-impaired people to communicate through hand and finger movements. Indian Sign Language (ISL) is a well-developed and standardized way of communicating for hearing-impaired people living in India. However, people who use spoken language often face difficulty when communicating with a hearing-impaired person due to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech/text into the corresponding sign movements for ISL. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a Sign Error Rate (SER) of 10.50.
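A hypothetical outline of the three-module pipeline in Python; every function here is a stub with an assumed name, since the system's actual components (speech recognizer, NLP-based ISL converter, avatar engine) are not public APIs, and the grammar rule shown is a deliberately naive illustration.

```python
# Hypothetical outline of the three-module pipeline described above.
# All function bodies are stubs; none of this is the authors' code.

def speech_to_english(audio_path: str) -> str:
    """Module 1: transcribe input speech into an English sentence (stub)."""
    return "the boy eats an apple"

def english_to_isl(sentence: str) -> list[str]:
    """Module 2: reorder into an ISL-style gloss sequence (toy rule only).

    ISL broadly follows subject-object-verb order and drops articles; the
    real system uses NLP parsing rather than this naive heuristic.
    """
    words = [w for w in sentence.lower().split() if w not in {"the", "a", "an"}]
    # Naive SVO -> SOV: move the verb (assumed to be the second word) to the end.
    return words[:1] + words[2:] + words[1:2]

def animate_avatar(glosses: list[str]) -> None:
    """Module 3: map each gloss onto a 3D avatar motion (stub)."""
    for g in glosses:
        print(f"avatar plays sign: {g.upper()}")

animate_avatar(english_to_isl(speech_to_english("input.wav")))
# Prints: BOY, APPLE, EATS -- an SOV gloss order for the SVO input sentence.
```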


Author(s):  
Dang Van Thin ◽  
Ngan Luu-Thuy Nguyen ◽  
Tri Minh Truong ◽  
Lac Si Le ◽  
Duy Tin Vo

Aspect-based sentiment analysis has been studied in both the research and industrial communities over recent years. For low-resource languages, standard benchmark corpora play an important role in the development of methods. In this article, we introduce the two largest sentence-level benchmark corpora for two tasks in Vietnamese: Aspect Category Detection and Aspect Polarity Classification. Our corpora are annotated with high inter-annotator agreement for the restaurant and hotel domains. Their release should help push forward work on low-resource language processing. In addition, we deploy and compare the effectiveness of supervised learning methods with single-task and multi-task approaches based on deep learning architectures. Experimental results on our corpora show that the multi-task approach based on the BERT architecture outperforms the other neural network architectures and the single-task approach. Our corpora and source code are published online.
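As a sketch of the multi-task idea, the snippet below puts two task-specific heads (aspect category detection and polarity classification) on a shared BERT encoder using the Hugging Face transformers library; the checkpoint name and label counts are assumptions, not the authors' configuration.

```python
# Sketch of a multi-task head over a shared BERT encoder, in the spirit of
# the multi-task approach the abstract reports. Checkpoint and label counts
# are assumed for illustration.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskABSA(nn.Module):
    def __init__(self, num_aspects=12, num_polarities=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.encoder.config.hidden_size
        # Two task-specific heads share one sentence representation.
        self.aspect_head = nn.Linear(hidden, num_aspects)       # aspect category detection
        self.polarity_head = nn.Linear(hidden, num_polarities)  # aspect polarity classification

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.aspect_head(cls), self.polarity_head(cls)

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tok(["Phục vụ nhanh, món ăn ngon."],  # "Fast service, tasty food."
            return_tensors="pt")
aspect_logits, polarity_logits = MultiTaskABSA()(batch["input_ids"],
                                                 batch["attention_mask"])
```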


2012 ◽  
Vol 50 (7) ◽  
pp. 1335-1346 ◽  
Author(s):  
Eva Gutiérrez ◽  
Oliver Müller ◽  
Cristina Baus ◽  
Manuel Carreiras

2006 ◽  
Vol 1 (2) ◽  
pp. 277-297 ◽  
Author(s):  
Elsa Spinelli ◽  
Fanny Meunier ◽  
Alix Seigneuric

In a cross-modal (auditory-visual) fragment priming study in French, we tested the hypothesis that gender information given by a gender-marked article (e.g., un, masculine, or une, feminine) is used early in the recognition of the following word to discard gender-incongruent competitors. In four experiments, we compared lexical decision performance on targets primed by phonological information only (e.g., /kRa/-CRAPAUD /kRapo/ 'toad'; cf. English /to/-TOAD) or by phonological plus gender information given by a gender-marked article (e.g., un (masculine) /kRa/-CRAPAUD; cf. English a /to/-TOAD). In all experiments, we found a phonological priming effect that was not modulated by the presence of a gender context, whether the gender-marked articles were congruent (Experiments 1, 2, and 3) or incongruent (Experiment 4) with the target's gender. Nor was phonological facilitation modulated by whether the gender-marked articles allowed the exclusion of less frequent competitors (Experiment 1) or more frequent ones (Experiments 2 and 3). We conclude that gender information extracted from a preceding gender-marked determiner is not used early in the process of spoken word recognition, and that it may instead be used in a later selection process.

