Differentiating the use of gaze in bilingual-bimodal language acquisition: a comparison of two sets of twins with deaf parents

1999 ◽  
Vol 26 (2) ◽  
pp. 321-338 ◽  
Author(s):  
E. DAYLENE RICHMOND-WELTY ◽  
PATRICIA SIPLE

Signed languages make unique demands on gaze during communication. Bilingual children acquiring both a spoken and a signed language must learn to differentiate gaze use for their two languages. Gaze during utterances was examined for a set of bilingual-bimodal twins acquiring spoken English and American Sign Language (ASL) and a set of monolingual twins acquiring ASL when the twins were aged 2;0, 3;0 and 4;0. The bilingual-bimodal twins differentiated their languages by age 3;0. Like the monolingual ASL twins, the bilingual-bimodal twins established mutual gaze at the beginning of their ASL utterances and either maintained gaze to the end or alternated gaze to include a terminal look. In contrast, like children acquiring spoken English monolingually, the bilingual-bimodal twins established mutual gaze infrequently for their spoken English utterances. When they did establish mutual gaze, it occurred later in their spoken utterances and they tended to look away before the end.

2020 ◽  
Vol 40 (5-6) ◽  
pp. 585-591
Author(s):  
Lynn Hou ◽  
Jill P. Morford

The visual-manual modality of sign languages renders them a unique test case for language acquisition and processing theories. In this commentary the authors describe evidence from signed languages, and ask whether it is consistent with Ambridge’s proposal. The evidence includes recent research on collocations in American Sign Language that reveals collocational frequency effects and patterns that do not constitute syntactic constituents. While these collocations appear to resist fully abstract schematization, further consideration is warranted of how speakers create exemplars, how they link exemplar clouds based on tokens, and how much abstraction is involved in their creation.


Target ◽  
1995 ◽  
Vol 7 (1) ◽  
pp. 135-149 ◽  
Author(s):  
William P. Isham

Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.


2021 ◽  
pp. 026765832110376
Author(s):  
Emily Saunders ◽  
David Quinto-Pozos

Studies have shown that iconicity can provide a benefit to non-signers during the learning of single signs, but other aspects of signed messages that might also be beneficial have received less attention. In particular, do other features of signed languages help support comprehension of a message during the process of language learning? The following exploratory study investigates the comprehension of sentences in two signed and two spoken languages by non-signers and by learners of American Sign Language (ASL). The design allows for the examination of message comprehension, with a comparison of unknown spoken and signed languages. Details of the stimulus sentences are provided in order to contextualize features of the signing that might be providing benefits for comprehension. Included in this analysis are aspects of the sentences that are iconic and spatially deictic, some of which resemble common gestural forms of communication. The results indicate that iconicity and referential points in signed language likely assist with comprehension of sentences, even for non-signers and for a signed language that the ASL learners have not studied.


1979 ◽  
Vol 44 (2) ◽  
pp. 196-208 ◽  
Author(s):  
Michael L. Jones ◽  
Stephen P. Quigley

This longitudinal study investigated the acquisition of question formation in spoken English and American Sign Language (ASL) by two young hearing children of deaf parents. The linguistic environment of the children included varying amounts of exposure and interaction with normal speech and with the nonstandard speech of their deaf parents. This atypical speech environment did not impede the children’s acquisition of English question forms. The two children also acquired question forms in ASL that are similar to those produced by deaf children of deaf parents. The two languages, ASL and English, developed in parallel fashion in the two children, and the two systems did not interfere with each other. This dual language development is illustrated by utterances in which the children communicated a sentence in spoken English and ASL simultaneously, with normal English structure in the spoken version and sign language structure in the ASL version.


2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo-names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers. We calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, language experience in ASL will be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than Pseudo names and for stimuli with High rather than Low markers. We also discovered a significant interaction between Age and Realness, which shows that as people age, they can better use outside world knowledge to inform their fingerspelling success. Finally, we examined the accuracy and confidence in fingerspelling perception in sub-groups of people who had learned ASL before the age of four. 
Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency levels, and age of language acquisition affect the core abilities of understanding fingerspelling.


Languages ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 90
Author(s):  
Kim B. Kurz ◽  
Kellie Mullaney ◽  
Corrine Occhino

Constructed action is a cover term used in signed language linguistics to describe multi-functional constructions which encode perspective-taking and viewpoint. Within constructed action, viewpoint constructions serve to create discourse coherence by allowing signers to share perspectives and psychological states. Character, observer, and blended viewpoint constructions have been well documented in signed language literature in Deaf signers. However, little is known about hearing second language learners’ use of constructed action or about the acquisition and use of viewpoint constructions. We investigate the acquisition of viewpoint constructions in 11 college students acquiring American Sign Language (ASL) as a second language in a second modality (M2L2). Participants viewed video clips from the cartoon Canary Row and were asked to “retell the story as if you were telling it to a deaf friend”. We analyzed the signed narratives for time spent in character, observer, and blended viewpoints. Our results show that despite predictions of an overall increase in use of all types of viewpoint constructions, students varied in their time spent in observer and character viewpoints, while blended viewpoint was rarely observed. We frame our preliminary findings within the context of M2L2 learning, briefly discussing how gestural strategies used in multimodal speech-gesture constructions may influence learning trajectories.


Gesture ◽  
2013 ◽  
Vol 13 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Rachel Sutton-Spence ◽  
Donna Jo Napoli

Sign language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the ‘classifier system’ of their languages. Signed languages, as they create visually-motivated messages, utilise both categoricity (more traditionally considered ‘language’) and analogy (more traditionally considered extra-linguistic and the domain of ‘gesture’). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual, thus going beyond categorisation and into new areas of analogue.


Gesture ◽  
2001 ◽  
Vol 1 (1) ◽  
pp. 51-72 ◽  
Author(s):  
Evelyn McClave

This paper presents evidence of non-manual gestures in American Sign Language (ASL). The types of gestures identified are identical to non-manual, spontaneous gestures used by hearing non-signers, which suggests that the gestures co-occurring with ASL signs are borrowings from hearing culture. A comparison of direct quotes in ASL with spontaneous movements of hearing non-signers suggests a history of borrowing and eventual grammaticization in ASL of features previously thought to be unique to signed languages. The electronic edition of this article includes audio-visual data.


Author(s):  
David Quinto-Pozos ◽  
Robert Adam

Language contact of various kinds is the norm in Deaf communities throughout the world, and this allows for exploration of the role of modality (be it spoken, signed or written, or a combination of these) and the channel of communication in language contact. Drawing its evidence largely from instances of American Sign Language (ASL), this chapter addresses and illustrates several of these themes: sign-speech contact, sign-writing contact, and sign-sign contact. It examines instances of borrowing and bilingualism between some of these modalities and compares them to contact between hearing users of spoken languages, specifically American English.
