The Cortical Organization of Syntactic Processing Is Supramodal: Evidence from American Sign Language

2021, pp. 1-12
Author(s): William Matchin, Deniz İlkbaşaran, Marla Hatrak, Austin Roth, Agnes Villwock, ...

Abstract: Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are also active for sign language, but whether they respond specifically to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally: greater structure improved accuracy and response time on the picture-probe recognition task and elicited a left-lateralized activation pattern in the anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.

2021, Vol. 7 (2), pp. 156-171
Author(s): Ilaria Berteletti, SaraBeth J. Sullivan, Lucas Lancaster

With two simple experiments we investigate the overlooked influence of handshape similarity on the processing of numerical information conveyed on the hands. In most finger-counting sequences there is a tight relationship between the number of fingers raised and the numerical value represented. This creates a possible confound: numbers that are closer to each other are also represented by handshapes that are more similar. By using American Sign Language (ASL) number signs, we are able to manipulate the two variables orthogonally and dissociate them. First, we test the effect of handshape similarity in a same/different judgment task with a group of hearing non-signers, and then we test the interference of handshape similarity in a number judgment task with a group of native ASL signers. Our results show an effect of handshape similarity and its interaction with numerical value even in the group of native signers, for whom these handshapes are linguistic symbols rather than a learning tool for acquiring numerical concepts. Because prior studies have never considered handshape similarity, these results open new directions for understanding the relationship between finger-based counting, internal hand representations, and numerical proficiency.


Gesture, 2016, Vol. 15 (3), pp. 291-305
Author(s): David P. Corina, Eva Gutierrez

Little is known about how individual signs that occur in naturally produced signed languages are recognized. Here we examine whether sign understanding may be grounded in sensorimotor properties by evaluating signers' ability to make lexical decisions to American Sign Language (ASL) signs that are articulated either congruently or incongruently with the observer's own handedness. Our results show little evidence for handedness congruency effects in native signers' perception of ASL; however, handedness congruency effects were seen in non-native late learners of ASL and in hearing ASL-English bilinguals. The data are compatible with a theory of sign recognition that makes reference to internally simulated articulatory control signals, that is, a forward model based upon the sensorimotor properties of one's own body. The data suggest that sign recognition may rely upon an internal body schema when processing is non-optimal as a result of having learned ASL later in life. Native signers, however, may have developed representations of signs that are less bound to the hand with which they are performed, suggesting a different engagement of an internal forward model for rapid lexical decisions.


2012, Vol. 15 (2), pp. 402-412
Author(s): Diane Brentari, Marie A. Nadolske, George Wolford

In this paper, the prosodic structure of American Sign Language (ASL) narratives is analyzed in deaf native signers (L1-D), hearing native signers (L1-H), and highly proficient hearing second language signers (L2-H). The results of this study show that the prosodic patterns used by these groups are associated both with their ASL language experience (L1 or L2) and with their hearing status (deaf or hearing), suggesting that experience using co-speech gesture (i.e., gesturing while speaking) may have some effect on the prosodic cues used by hearing signers, similar to the effects of the prosodic structure of an L1 on an L2.


2011, Vol. 14 (1), pp. 94-114
Author(s): Donna Lewin, Adam C. Schembri

This article investigates the claim that tongue protrusion ('th') acts as a nonmanual adverbial morpheme in British Sign Language (BSL) (Brennan 1992; Sutton-Spence & Woll 1999), drawing on narrative data produced by two deaf native signers as part of the European Cultural Heritage Online (ECHO) corpus. Data from ten BSL narratives have been analysed to observe the frequency and form of tongue protrusion. The results from this preliminary investigation indicate that tongue protrusion occurs as part of the phonological formation of lexical signs (i.e., 'echo phonology'; see Woll 2001), as well as functioning as a separate meaningful unit that co-occurs (sometimes as part of constructed action) with classifier constructions and lexical verb signs. In the latter cases, the results suggest that 'th' sometimes functions as an adverbial morpheme in BSL, but with a greater variety of meanings than previously suggested in the BSL literature. One use of the adverbial appears similar to a nonmanual signal in American Sign Language described by Liddell (1980), although the form of the mouth gesture in our BSL data differs from what is reported in Liddell's work. These findings thus suggest that the mouth gesture 'th' in BSL has a broad range of functions. Some uses of tongue protrusion, however, remain difficult to categorise, and further research with a larger dataset is needed.


2019
Author(s): Karen Emmorey

We investigated linguistic codability for sensory information (colour, shape, touch, taste, smell, and sound) and the use of iconic labels in American Sign Language (ASL) by deaf native signers. Colour was highly codable in ASL, but few iconic labels were produced. Shape labels were highly iconic (lexical signs and classifier constructions), and touch descriptions relied on iconic classifier constructions that depicted the shape of the tactile source object. Lexical taste-specific signs also exhibited iconic properties (they are articulated near the mouth), but taste codability was relatively low. No smell-specific lexical signs were elicited (all descriptions were source-based). Sound stimuli, presented as tactile vibrations, were often described using classifier constructions that visually depicted different sound qualities. Results indicated that the iconicity of linguistic forms was not constant across the senses; rather, iconicity was observed most frequently for shape, touch, and sound stimuli, and least frequently for colour and smell.


2013, Vol. 25 (4), pp. 517-533
Author(s): Karen Emmorey, Stephen McCullough, Sonya Mehta, Laura L. B. Ponto, Thomas J. Grabowski

Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.

