SPATIAL, TEMPORAL, AND SEMANTIC MODELS FOR AMERICAN SIGN LANGUAGE GENERATION: IMPLICATIONS FOR GESTURE GENERATION

2008 ◽  
Vol 02 (01) ◽  
pp. 21-45 ◽  
Author(s):  
MATT HUENERFAUTH

Software to generate animations of American Sign Language (ASL) has important accessibility benefits for the significant number of deaf adults with low levels of written language literacy. We have implemented a prototype software system to generate an important subset of ASL phenomena called "classifier predicates," complex and spatially descriptive types of sentences. The output of this prototype system has been evaluated by native ASL signers. Our generator includes several novel models of 3D space, spatial semantics, and temporal coordination motivated by linguistic properties of ASL. These classifier predicates have several similarities to iconic gestures that often co-occur with spoken language; these two phenomena will be compared. This article explores implications of the design of our system for research in multimodal gesture generation systems. A conceptual model of multimodal communication signals is introduced to show how computational linguistic research on ASL relates to the field of multimodal natural language processing.

2021 ◽  
Author(s):  
R. D. Rusiru Sewwantha ◽  
T. N. D. S. Ginige

Sign language is the use of gestures and symbols for communication. It is primarily used by people with speech or hearing impairments. Because most spoken-language users have little knowledge of sign language, a communication gap exists between sign language users and spoken-language speakers. Sign language also differs from country to country: while American Sign Language is the most widely used, Sri Lanka uses Sri Lankan (Sinhala) sign language. In this research, the authors propose a mobile solution that uses a Region-Based Convolutional Neural Network (R-CNN) for object detection to reduce this communication gap by identifying Sinhala sign language gestures and interpreting them into Sinhala text using Natural Language Processing (NLP). Using the trained model, the system identifies and interprets still gesture signs in real time.
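The pipeline the abstract describes — detect a still gesture with an object detector, then map the detected sign class to Sinhala text — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector is stubbed out (a real system would run R-CNN inference on each camera frame), and the sign labels and their Sinhala translations are invented for the example.

```python
# Hypothetical sketch of a detect-then-translate pipeline for still signs.
# detect_sign is a stub standing in for R-CNN inference; the label set and
# the romanized Sinhala translations below are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    label: str         # sign class predicted by the object detector
    confidence: float  # detector confidence score in [0, 1]


# Illustrative mapping from detected sign classes to (romanized) Sinhala text.
SIGN_TO_TEXT = {
    "sign_hello": "ayubowan",
    "sign_thanks": "istuti",
}


def detect_sign(frame) -> Detection:
    """Stub detector: returns the top-scoring sign class for a frame.

    In a real system this would run the trained R-CNN model on the frame.
    """
    return Detection(label="sign_hello", confidence=0.92)


def interpret(frame, threshold: float = 0.5) -> Optional[str]:
    """Detect a still gesture in a frame and translate it to Sinhala text.

    Returns None when no detection clears the confidence threshold or the
    detected class has no known translation.
    """
    det = detect_sign(frame)
    if det.confidence < threshold:
        return None
    return SIGN_TO_TEXT.get(det.label)
```

With the stubbed detector, `interpret(frame)` yields the translation for the top detection ("ayubowan" here), and raising the threshold above the stub's confidence yields None, modeling a frame with no confident sign.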


2014 ◽  
Vol 26 (3) ◽  
pp. 1015-1026 ◽  
Author(s):  
Naja Ferjan Ramirez ◽  
Matthew K. Leonard ◽  
Tristan S. Davenport ◽  
Christina Torres ◽  
Eric Halgren ◽  
...  

1981 ◽  
Vol 46 (4) ◽  
pp. 388-397 ◽  
Author(s):  
Penny L. Griffith ◽  
Jacques H. Robinson ◽  
John M. Panagos

Three groups of subjects differing in age, language experience, and familiarity with American Sign Language were compared on three tasks regarding the perception of iconicity in signs from American Sign Language. Subjects were asked to guess the meaning of signs, to rate signs for iconicity, and to state connections between signs and their meaning in English. Results showed that hearing college students, deaf adults, and hearing first-grade children perform similarly on tasks regarding iconicity. Results suggest a psycholinguistic definition of iconicity based on association values, rather than physical resemblances between signs and real-world referents.


10.1038/nn775 ◽  
2001 ◽  
Vol 5 (1) ◽  
pp. 76-80 ◽  
Author(s):  
Aaron J. Newman ◽  
Daphne Bavelier ◽  
David Corina ◽  
Peter Jezzard ◽  
Helen J. Neville

2018 ◽  
Vol 31 (11) ◽  
pp. 1215-1220 ◽  
Author(s):  
Abbi N Simons ◽  
Christopher J Moreland ◽  
Poorna Kushalnagar

2011 ◽  
Vol 4 (3) ◽  
pp. 192-197 ◽  
Author(s):  
Michael McKee ◽  
Deirdre Schlehofer ◽  
Jessica Cuculick ◽  
Matthew Starr ◽  
Scott Smith ◽  
...  
