Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

2016 · Vol 46 (1) · pp. 211-225 · Author(s): Joshua T. Williams, Sharlene D. Newman

2015 · Vol 19 (2) · pp. 128-148 · Author(s): Joshua Williams, Isabelle Darcy, Sharlene Newman

Abstract: Little is known about the effect of acquiring another language modality on second language (L2) working memory (WM) capacity. Differential indexing within the WM system based on language modality may explain differences in performance on WM tasks in sign and spoken language. We investigated the effect of language modality (sign versus spoken) on L2 WM capacity. Results indicated reduced L2 WM span relative to first language span for both L2 learners of Spanish and L2 learners of American Sign Language (ASL). Importantly, ASL learners had lower L2 WM spans than Spanish learners. Additionally, ASL learners increased their L2 WM spans as a function of proficiency, whereas Spanish learners did not. This pattern of results demonstrated that acquiring another language modality disadvantages ASL learners. We posited that this disadvantage arises out of an inability to correctly and efficiently allocate linguistic information to the visuospatial sketchpad, due to an L1-related indexing bias.


2017 · Vol 2 · pp. 14 · Author(s): Marjorie Herbert, Acrisio Pires

The audiologically deaf members of the American Deaf community display bilingual competence in American Sign Language (ASL) and English, although their language acquisition trajectories often involve delayed exposure to one or both languages. There is a great deal of variation in production among these signers, ranging from very ASL-typical productions to productions that display heavy English influence. The latter, mixed productions, termed “Contact Signing” by Lucas & Valli (1992), could represent a type of codeswitching referred to as ‘code-blending’ in sign language-spoken language contexts (e.g., Baker & Van den Bogaerde 2008), in which bilinguals invoke knowledge of their two grammars in concert. Alternatively, these productions could be more like a mixed language, constrained by a third grammar distinct from both ASL and English. We argue, based on the analysis of our corpus of naturalistic data collected in an all-deaf sociolinguistic environment, that Contact Signing provides evidence for code-blending, given the distribution of English-based vs. ASL-based language properties in the production data from the participants in our study.


Target · 1995 · Vol 7 (1) · pp. 135-149 · Author(s): William P. Isham

Abstract: Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.


Gesture · 2004 · Vol 4 (1) · pp. 75-89 · Author(s): David MacGregor

In analyzing the use of space in American Sign Language (ASL), Liddell (2003) argues convincingly that no account of ASL can be complete without a discussion of how linguistic signs, non-linguistic gestures, and gradient phenomena work together to create meaning. This represents a departure from the assumptions of much of linguistic theory, which has attempted to describe purely linguistic phenomena as part of an autonomous system. It also raises the question of whether these phenomena are peculiar to ASL and other sign languages, or whether they also apply to spoken language. In this paper, I show how Liddell’s approach can be applied to English data to provide a fuller explanation of how speakers create meaning. Specifically, I analyze Jack Lemmon’s use of space, gesture, and voice in a scene from the movie “Mister Roberts”.


2000 · Vol 3 (1) · pp. 3-58 · Author(s): Theodore B. Fernald, Donna Jo Napoli

American Sign Language (ASL) shares with spoken languages derivational and inflectional morphological processes, including compounding, reduplication, incorporation, and, arguably, templates. Like spoken languages, ASL also has an extensive nonderivational, noninflectional morphology involving phonological alternation, although such morphology is typically more limited in spoken languages. Additionally, ASL frequently associates meaning with individual phonological parameters, an association that is atypical of spoken languages. We account for these phenomena by positing “ion-morphs,” phonologically incomplete lexical items that bond with other compatible ion-morphs. These ion-morphs draw lexical items into “families” of related signs. In contrast, ASL makes little, if any, use of concatenative affixation, a morphological mechanism common among spoken languages. We propose that this difference results from the slowness of the manual articulators relative to the speech articulators, together with the perceptual robustness of the manual articulators to the visual system. The slowness of the manual articulators disfavors concatenative affixation; their perceptual robustness allows ASL to exploit morphological potential that spoken language can use only at considerable cost.


2018 · Vol 21 (2) · pp. 380-390 · Author(s): Philippe Schlenker

Abstract: Theories of pronominal strength (e.g., Cardinaletti & Starke 1999) lead one to expect that sign language, just like spoken language, can have morphologically distinct strong pronominals. We suggest that American Sign Language (ASL) and French Sign Language (LSF) might have such pronominals, characterized here by the fact that they may associate with ‘only’ even in the absence of prosodically marked focus.


1988 · Vol 1061 (1) · pp. 351-375 · Author(s): Marina L. McIntire, Judy Snitzer Reilly

2019 · Vol 24 (4) · pp. 356-365 · Author(s): Jill P. Morford, Corrine Occhino, Megan Zirnstein, Judith F. Kroll, Erin Wilkinson, ...

Abstract: When deaf bilinguals are asked to make semantic similarity judgments of two written words, their responses are influenced by the sublexical relationship of the signed language translations of the target words. This study investigated whether the observed effects of American Sign Language (ASL) activation on English print depend on (a) an overlap in the syllabic structure of the signed translations or (b) initialization, an effect of contact between ASL and English that has resulted in a direct representation of English orthographic features in ASL sublexical form. Results demonstrate that neither of these conditions is required for, or enhances, effects of cross-language activation. The experimental outcomes indicate that deaf bilinguals discover the optimal mapping between their two languages in a manner that is not constrained by privileged sublexical associations.

