Second Language Working Memory Deficits and Plasticity in Hearing Bimodal Learners of Sign Language

2015 ◽  
Vol 19 (2) ◽  
pp. 128-148 ◽  
Author(s):  
Joshua Williams ◽  
Isabelle Darcy ◽  
Sharlene Newman

Abstract Little is known about how acquiring another language modality affects second language (L2) working memory (WM) capacity. Differential indexing within the WM system based on language modality may explain differences in performance on WM tasks in sign and spoken language. We investigated the effect of language modality (sign versus spoken) on L2 WM capacity. Results indicated reduced L2 WM span relative to first language (L1) span for both L2 learners of Spanish and L2 learners of American Sign Language (ASL). Importantly, ASL learners had lower L2 WM spans than Spanish learners. Additionally, ASL learners increased their L2 WM spans as a function of proficiency, whereas Spanish learners did not. This pattern of results demonstrated that acquiring another language modality disadvantages ASL learners. We posited that this disadvantage arises from an inability to correctly and efficiently allocate linguistic information to the visuospatial sketchpad due to an L1-related indexing bias.

Languages ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 90
Author(s):  
Kim B. Kurz ◽  
Kellie Mullaney ◽  
Corrine Occhino

Constructed action is a cover term used in signed language linguistics to describe multi-functional constructions that encode perspective-taking and viewpoint. Within constructed action, viewpoint constructions serve to create discourse coherence by allowing signers to share perspectives and psychological states. Character, observer, and blended viewpoint constructions have been well documented in the signed language literature on Deaf signers. However, little is known about hearing second language learners' use of constructed action, or about the acquisition and use of viewpoint constructions. We investigate the acquisition of viewpoint constructions in 11 college students acquiring American Sign Language (ASL) as a second language in a second modality (M2L2). Participants viewed video clips from the cartoon Canary Row and were asked to "retell the story as if you were telling it to a deaf friend". We analyzed the signed narratives for time spent in character, observer, and blended viewpoints. Our results show that, despite predictions of an overall increase in the use of all types of viewpoint constructions, students varied in their time spent in observer and character viewpoints, while blended viewpoint was rarely observed. We frame our preliminary findings within the context of M2L2 learning, briefly discussing how gestural strategies used in multimodal speech-gesture constructions may influence learning trajectories.


1998 ◽  
Vol 20 (1) ◽  
pp. 124-125
Author(s):  
Timothy Reagan

American Sign Language (ASL), both as the focus of scholarly study and as an increasingly popular foreign-language option for many secondary- and university-level students, has made remarkable strides in recent years. With respect to the linguistics of ASL, there has been a veritable revolution in our understanding of the nature, structure, and complexity of the language since the publication of William Stokoe's landmark Sign Language Structure in 1960. Works on both theoretical aspects of the linguistics of ASL and the sociolinguistics of the Deaf community now abound, and the overall quality of such works is impressively high. Also widely available now are textbooks designed to teach ASL as a second language. Such textbooks vary dramatically in quality, ranging from phrasebooks and lexical guides to very thorough and up-to-date works focusing on communicative competence in ASL.


2014 ◽  
Vol 26 (3) ◽  
pp. 1015-1026 ◽  
Author(s):  
Naja Ferjan Ramirez ◽  
Matthew K. Leonard ◽  
Tristan S. Davenport ◽  
Christina Torres ◽  
Eric Halgren ◽  
...  

2012 ◽  
Vol 15 (2) ◽  
pp. 402-412 ◽  
Author(s):  
DIANE BRENTARI ◽  
MARIE A. NADOLSKE ◽  
GEORGE WOLFORD

In this paper the prosodic structure of American Sign Language (ASL) narratives is analyzed in deaf native signers (L1-D), hearing native signers (L1-H), and highly proficient hearing second language signers (L2-H). The results of this study show that the prosodic patterns used by these groups are associated both with their ASL language experience (L1 or L2) and with their hearing status (deaf or hearing), suggesting that experience using co-speech gesture (i.e. gesturing while speaking) may have some effect on the prosodic cues used by hearing signers, similar to the effects of the prosodic structure of an L1 on an L2.


2017 ◽  
Vol 2 ◽  
pp. 14 ◽  
Author(s):  
Marjorie Herbert ◽  
Acrisio Pires

The audiologically deaf members of the American Deaf community display bilingual competence in American Sign Language (ASL) and English, although their language acquisition trajectories often involve delayed exposure to one or both languages. There is a great deal of variation in production among these signers, ranging from very ASL-typical productions to productions that display heavy English influence. The latter, mixed productions, termed "Contact Signing" by Lucas & Valli (1992), could represent a type of codeswitching, referred to as 'code-blending' in sign language-spoken language contexts (e.g., Baker & Van den Bogaerde 2008), in which bilinguals invoke knowledge of their two grammars in concert; alternatively, these productions could be more like a mixed language, constrained by a third grammar distinct from both ASL and English. Based on the analysis of our corpus of naturalistic data collected in an all-deaf sociolinguistic environment, we argue that Contact Signing provides evidence for code-blending, given the distribution of English-based vs. ASL-based language properties in the production data from the participants in our study.


Target ◽  
1995 ◽  
Vol 7 (1) ◽  
pp. 135-149 ◽  
Author(s):  
William P. Isham

Abstract Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.

