Communication of Agent, Object, and Indirect Object in Signed and Spoken Languages

1974 ◽  
Vol 39 (3) ◽  
pp. 1151-1158 ◽  
Author(s):  
Loreli Bode

A picture-description task was used to compare the effectiveness of communication between deaf subjects using American Sign Language and hearing subjects using spoken English: 16 deaf, native users of American Sign Language and 16 hearing, native users of English, all of whom were university undergraduates. Within each group, paired subjects alternately described pictures to each other. The pictures illustrated three characters assuming in turn the roles of agent, object, and indirect object. Following a description by Subject 1, Subject 2 selected the picture he or she thought Subject 1 had described from a set of six pictures containing the described picture. The frequency of errors did not differ significantly between signing and speaking subjects.

2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to the illusion when they use their hands not to act on objects but to describe them, either in spontaneous co-speech gestures or in conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of the illusion in the description task was smaller than the magnitude of the illusion in the estimation task and did not differ from the magnitude of the illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


1999 ◽  
Vol 26 (2) ◽  
pp. 321-338 ◽  
Author(s):  
E. DAYLENE RICHMOND-WELTY ◽  
PATRICIA SIPLE

Signed languages make unique demands on gaze during communication. Bilingual children acquiring both a spoken and a signed language must learn to differentiate gaze use for their two languages. Gaze during utterances was examined for a set of bilingual-bimodal twins acquiring spoken English and American Sign Language (ASL) and a set of monolingual twins acquiring ASL, when the twins were aged 2;0, 3;0, and 4;0. The bilingual-bimodal twins differentiated their languages by age 3;0. Like the monolingual ASL twins, the bilingual-bimodal twins established mutual gaze at the beginning of their ASL utterances and either maintained gaze to the end or alternated gaze to include a terminal look. In contrast, like children acquiring spoken English monolingually, the bilingual-bimodal twins established mutual gaze infrequently during their spoken English utterances. When they did establish mutual gaze, it occurred later in their spoken utterances, and they tended to look away before the end.


Author(s):  
Simon Hooper ◽  
Charles Miller ◽  
Susan Rose ◽  
Michael M. Rook

In this paper, the authors examine how instructors used an online assessment environment designed to evaluate the performance of undergraduate students enrolled in American Sign Language (ASL) courses. A total of 640 undergraduate ASL students at a large Midwestern university participated in the study. The findings suggest that instructors varied greatly in how they used the e-assessment system, both in the amount of time spent evaluating student assessments and in the proportion of total assessments scored. Furthermore, students' responses to an open-ended survey on their experiences with the system generated useful insights to guide future design. Finally, implications for the design and integration of world-language e-assessment environments are discussed.


2021 ◽  
Author(s):  
Zed Sevcikova Sehyr ◽  
Karen Emmorey

Picture naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical and phonological factors and of stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Higher lexical frequency, greater iconicity, better name agreement, and lower phonological complexity each speeded naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated between ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture naming dataset for ASL serves as an important, first-of-its-kind resource for researchers, educators, and clinicians for a variety of research, instructional, and assessment purposes.


1979 ◽  
Vol 44 (2) ◽  
pp. 196-208 ◽  
Author(s):  
Michael L. Jones ◽  
Stephen P. Quigley

This longitudinal study investigated the acquisition of question formation in spoken English and American Sign Language (ASL) by two young hearing children of deaf parents. The children's linguistic environment included varying amounts of exposure to, and interaction with, normal speech and the nonstandard speech of their deaf parents. This atypical speech environment did not impede the children's acquisition of English question forms. The two children also acquired question forms in ASL that are similar to those produced by deaf children of deaf parents. The two languages, ASL and English, developed in parallel fashion in the two children, and the two systems did not interfere with each other. This dual language development is illustrated by utterances in which the children communicated a sentence in spoken English and ASL simultaneously, with normal English structure in the spoken version and sign language structure in the ASL version.


2011 ◽  
Author(s):  
M. Leonard ◽  
N. Ferjan Ramirez ◽  
C. Torres ◽  
M. Hatrak ◽  
R. Mayberry ◽  
...  

2018 ◽  
Author(s):  
Leslie Pertz ◽  
Missy Plegue ◽  
Kathleen Diehl ◽  
Philip Zazove ◽  
Michael McKee
