The neural correlates for spatial language: Perspective-dependent and -independent relationships in American Sign Language and spoken English

2021, Vol 223, 105044
Author(s): Karen Emmorey, Chris Brozdowski, Stephen McCullough
NeuroImage, 2005, Vol 24 (3), pp. 832–840
Author(s): Karen Emmorey, Thomas Grabowski, Stephen McCullough, Laura L.B. Ponto, Richard D. Hichwa, et al.

1999, Vol 26 (2), pp. 321–338
Author(s): E. Daylene Richmond-Welty, Patricia Siple

Signed languages make unique demands on gaze during communication. Bilingual children acquiring both a spoken and a signed language must learn to differentiate gaze use for their two languages. Gaze during utterances was examined for a set of bilingual-bimodal twins acquiring spoken English and American Sign Language (ASL) and a set of monolingual twins acquiring ASL when the twins were aged 2;0, 3;0 and 4;0. The bilingual-bimodal twins differentiated their languages by age 3;0. Like the monolingual ASL twins, the bilingual-bimodal twins established mutual gaze at the beginning of their ASL utterances and either maintained gaze to the end or alternated gaze to include a terminal look. In contrast, like children acquiring spoken English monolingually, the bilingual-bimodal twins established mutual gaze infrequently for their spoken English utterances. When they did establish mutual gaze, it occurred later in their spoken utterances and they tended to look away before the end.


1974, Vol 39 (3), pp. 1151–1158
Author(s): Loreli Bode

A picture-description task was used to compare the effectiveness of communication in pairs of deaf subjects using American Sign Language with that in pairs of hearing subjects using spoken English. Participants were 16 deaf, native users of American Sign Language and 16 hearing, native users of English, all university undergraduates. Within the two groups, paired subjects alternately described pictures to each other. The pictures illustrated three different characters assuming in turn the roles of agent, object, and indirect object. Following a description by Subject 1, Subject 2 selected the picture he or she thought Subject 1 had described from a set of six pictures that included the described picture. The frequency of errors did not differ significantly between signing and speaking subjects.


NeuroImage, 2002, Vol 17 (2), pp. 812–824
Author(s): Karen Emmorey, Hanna Damasio, Stephen McCullough, Thomas Grabowski, Laura L.B. Ponto, et al.

2021
Author(s): Zed Sevcikova Sehyr, Karen Emmorey

Picture naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical and phonological factors and of stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Higher lexical frequency, greater iconicity, better name agreement, and lower phonological complexity were each associated with faster naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture naming dataset for ASL serves as an important, first-of-its-kind resource for researchers, educators, and clinicians for a variety of research, instructional, and assessment purposes.


1995, Vol 1088 (1), pp. 255–288
Author(s): Karen Emmorey, Shannon Casey

1979, Vol 44 (2), pp. 196–208
Author(s): Michael L. Jones, Stephen P. Quigley

This longitudinal study investigated the acquisition of question formation in spoken English and American Sign Language (ASL) by two young hearing children of deaf parents. The linguistic environment of the children included varying amounts of exposure to and interaction with normal speech and with the nonstandard speech of their deaf parents. This atypical speech environment did not impede the children’s acquisition of English question forms. The two children also acquired question forms in ASL similar to those produced by deaf children of deaf parents. The two languages, ASL and English, developed in parallel in the two children, and the two systems did not interfere with each other. This dual language development is illustrated by utterances in which the children communicated a sentence in spoken English and ASL simultaneously, with normal English structure in the spoken version and sign language structure in the ASL version.


2011
Author(s): M. Leonard, N. Ferjan Ramirez, C. Torres, M. Hatrak, R. Mayberry, et al.
