The Biology of Linguistic Expression Impacts Neural Correlates for Spatial Language

2013 ◽  
Vol 25 (4) ◽  
pp. 517-533 ◽  
Author(s):  
Karen Emmorey ◽  
Stephen McCullough ◽  
Sonya Mehta ◽  
Laura L. B. Ponto ◽  
Thomas J. Grabowski

Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.

2021 ◽  
Author(s):  
Norma-Jane E. Thompson

Currently, the World Wide Web allows web pages to be produced in most written languages. Many deaf people, however, use a visual-spatial language with no written equivalent (e.g., American Sign Language). SignLink Studio, a software tool for designing sign language web pages, allows hyperlinking within video clips so that sign-language-only web pages can be created. However, the tool does not support other interactive elements such as online forms. In this thesis, a model for an online sign language form is proposed and evaluated. A study with 22 participants examined whether there were differences in performance or preferences between sign language forms and text forms, and between two presentation styles (all-at-once versus one-at-a-time). The results showed no clear performance advantage for either sign language or text; however, participants were interested in having online questions presented in sign language. There were also no performance or preference advantages between the presentation styles.
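
As a rough illustration of the design space evaluated in this thesis, the sketch below models a form whose questions are presented as sign language video clips and toggles between the two presentation styles. The interfaces, field names, and file paths are hypothetical; this is not SignLink Studio's API or the thesis's actual implementation.

```typescript
// Hypothetical data model for an online sign language form; names and paths
// are invented for illustration and are not SignLink Studio's actual API.
interface SignQuestion {
  id: string;
  videoUrl: string; // clip of the question signed in ASL
}

type PresentationStyle = "all-at-once" | "one-at-a-time";

interface SignForm {
  style: PresentationStyle;
  questions: SignQuestion[];
}

// Returns the questions visible at a given step: the whole form at once,
// or only the current question.
function visibleQuestions(form: SignForm, step: number): SignQuestion[] {
  return form.style === "all-at-once"
    ? form.questions
    : form.questions.slice(step, step + 1);
}

// Example: a three-question form shown one question at a time.
const surveyForm: SignForm = {
  style: "one-at-a-time",
  questions: [
    { id: "q1", videoUrl: "videos/q1-asl.mp4" },
    { id: "q2", videoUrl: "videos/q2-asl.mp4" },
    { id: "q3", videoUrl: "videos/q3-asl.mp4" },
  ],
};

console.log(visibleQuestions(surveyForm, 1).map((q) => q.id)); // ["q2"]
```

Switching the style to "all-at-once" would make every question's video available on a single page, corresponding to the other presentation condition compared in the study.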


2021 ◽  
pp. 1-12
Author(s):  
William Matchin ◽  
Deniz İlkbaşaran ◽  
Marla Hatrak ◽  
Austin Roth ◽  
Agnes Villwock ◽  
...  

Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they are specifically responsive to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) during a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation pattern in the anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of the anterior STS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.


2018 ◽  
Vol 39 (5) ◽  
pp. 961-987 ◽  
Author(s):  
Zed Sevcikova Sehyr ◽  
Brenda Nicodemus ◽  
Jennifer Petrich ◽  
Karen Emmorey

American Sign Language (ASL) and English differ in the linguistic resources available to express visual–spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times decreased over iterations, and references to the figures’ geometric properties (“shape-based reference”) declined over time in favor of expressions describing the figures’ resemblance to nameable objects (“analogy-based reference”). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time-efficient (associated with shorter round description times). Round completion times were longer for ASL than for English, possibly due to the gaze demands of the task and/or to more shape-based descriptions. Signers’ referring expressions remained unaffected by figure complexity, while speakers preferred analogy-based expressions for complex figures and shape-based expressions for simple figures. Like speech, co-speech gestures decreased over iterations. Gestures primarily accompanied shape-based references, but listeners rarely looked at these gestures, suggesting that they were recruited to aid the speaker rather than the addressee. Overall, different linguistic resources (classifier constructions vs. geometric vocabulary) imposed distinct demands on referring strategies in ASL and English.


2011 ◽  
Author(s):  
M. Leonard ◽  
N. Ferjan Ramirez ◽  
C. Torres ◽  
M. Hatrak ◽  
R. Mayberry ◽  
...  

2021 ◽  
Vol 7 (2) ◽  
pp. 156-171
Author(s):  
Ilaria Berteletti ◽  
SaraBeth J. Sullivan ◽  
Lucas Lancaster

With two simple experiments we investigate the overlooked influence of handshape similarity on the processing of numerical information conveyed on the hands. In most finger-counting sequences there is a tight relationship between the number of fingers raised and the numerical value represented. This creates a possible confound in which numbers closer to each other are also represented by handshapes that are more similar. By using American Sign Language (ASL) number signs, we are able to dissociate the two variables orthogonally. First, we test the effect of handshape similarity in a same/different judgment task in a group of hearing non-signers, and then we test the interference of handshape in a number judgment task in a group of native ASL signers. Our results show an effect of handshape similarity and its interaction with numerical value even in the group of native signers, for whom these handshapes are linguistic symbols rather than a learning tool for acquiring numerical concepts. Because prior studies have never considered handshape similarity, these results open new directions for understanding the relationship between finger-based counting, internal hand representations, and numerical proficiency.
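
To make the confound concrete, the sketch below compares a canonical finger-counting code, in which handshapes for nearby numbers share more raised fingers, with a toy approximation of ASL number signs, in which handshape overlap no longer tracks numerical distance. The finger sets are illustrative simplifications rather than phonetically accurate descriptions of the signs, and the script is independent of the reported experiments.

```typescript
// Handshapes reduced to the set of extended fingers (a deliberate simplification).
type Finger = "thumb" | "index" | "middle" | "ring" | "pinky";
const FINGERS: Finger[] = ["thumb", "index", "middle", "ring", "pinky"];

// Canonical finger counting: the value equals the number of raised fingers,
// so handshapes for nearby numbers necessarily overlap.
const fingerCounting: Record<number, Finger[]> = {
  1: ["index"],
  2: ["index", "middle"],
  3: ["index", "middle", "ring"],
  4: ["index", "middle", "ring", "pinky"],
  5: ["thumb", "index", "middle", "ring", "pinky"],
};

// Toy approximation of some ASL number signs (1 through 6, plus 9). In 6 and 9
// the thumb contacts a different finger while the remaining fingers stay
// extended, so similarity and numerical distance come apart.
const aslNumbers: Record<number, Finger[]> = {
  1: ["index"],
  2: ["index", "middle"],
  3: ["thumb", "index", "middle"],
  4: ["index", "middle", "ring", "pinky"],
  5: ["thumb", "index", "middle", "ring", "pinky"],
  6: ["index", "middle", "ring"], // thumb and pinky in contact
  9: ["middle", "ring", "pinky"], // thumb and index in contact
};

// Similarity = proportion of fingers whose state (raised vs. folded) matches.
function similarity(a: Finger[], b: Finger[]): number {
  const matches = FINGERS.filter((f) => a.includes(f) === b.includes(f)).length;
  return matches / FINGERS.length;
}

// Finger counting: similarity falls as numerical distance grows.
console.log(similarity(fingerCounting[1], fingerCounting[2])); // 0.8, distance 1
console.log(similarity(fingerCounting[1], fingerCounting[5])); // 0.2, distance 4

// ASL approximation: the two variables dissociate.
console.log(similarity(aslNumbers[6], aslNumbers[9])); // 0.6, despite distance 3
console.log(similarity(aslNumbers[3], aslNumbers[4])); // 0.4, despite distance 1
```

Because the ASL handshapes break the link between overlap and numerical distance, any residual effect of handshape similarity in signers' number judgments can be attributed to the handshapes themselves rather than to numerical distance, which is the dissociation the study exploits.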


Gesture ◽  
2016 ◽  
Vol 15 (3) ◽  
pp. 291-305 ◽  
Author(s):  
David P. Corina ◽  
Eva Gutierrez

Little is known about how individual signs that occur in naturally produced signed languages are recognized. Here we examine whether sign understanding may be grounded in sensorimotor properties by evaluating a signer's ability to make lexical decisions to American Sign Language (ASL) signs that are articulated either congruently or incongruently with the observer's own handedness. Our results show little evidence for handedness congruency effects in native signers' perception of ASL; however, handedness congruency effects were seen in non-native late learners of ASL and in hearing ASL-English bilinguals. The data are compatible with a theory of sign recognition that makes reference to internally simulated articulatory control signals: a forward model based upon the sensorimotor properties of one's own body. The data suggest that sign recognition may rely upon an internal body schema when processing is non-optimal as a result of having learned ASL later in life. Native signers, however, may have developed representations of signs that are less bound to the hand with which they are performed, suggesting a different engagement of an internal forward model for rapid lexical decisions.


2012 ◽  
Vol 15 (2) ◽  
pp. 402-412 ◽  
Author(s):  
Diane Brentari ◽  
Marie A. Nadolske ◽  
George Wolford

In this paper, the prosodic structure of American Sign Language (ASL) narratives is analyzed in deaf native signers (L1-D), hearing native signers (L1-H), and highly proficient hearing second language signers (L2-H). The results of this study show that the prosodic patterns used by these groups are associated both with their ASL language experience (L1 or L2) and with their hearing status (deaf or hearing), suggesting that experience using co-speech gesture (i.e., gesturing while speaking) may have some effect on the prosodic cues used by hearing signers, similar to the effects of the prosodic structure of an L1 on an L2.


2011 ◽  
Vol 14 (1) ◽  
pp. 94-114 ◽  
Author(s):  
Donna Lewin ◽  
Adam C. Schembri

This article investigates the claim that tongue protrusion (‘th’) acts as a nonmanual adverbial morpheme in British Sign Language (BSL) (Brennan 1992; Sutton-Spence & Woll 1999), drawing on narrative data produced by two deaf native signers as part of the European Cultural Heritage Online (ECHO) corpus. Data from ten BSL narratives have been analysed to observe the frequency and form of tongue protrusion. The results from this preliminary investigation indicate that tongue protrusion occurs as part of the phonological formation of lexical signs (i.e., ‘echo phonology’, see Woll 2001), as well as functioning as a separate meaningful unit that co-occurs (sometimes as part of constructed action) with classifier constructions and lexical verb signs. In the latter cases, the results suggest that ‘th’ sometimes functions as an adverbial morpheme in BSL, but with a greater variety of meanings than previously suggested in the BSL literature. One use of the adverbial appears similar to a nonmanual signal in American Sign Language described by Liddell (1980), although the form of the mouth gesture in our BSL data differs from what is reported in Liddell’s work. Thus, these findings suggest that the mouth gesture ‘th’ in BSL has a broad range of functions. Some uses of tongue protrusion, however, remain difficult to categorise, and further research with a larger dataset is needed.


2001 ◽  
Vol 13 (6) ◽  
pp. 754-765 ◽  
Author(s):  
A. L. Giraud ◽  
C. J. Price

Several previous functional imaging experiments have demonstrated that auditory presentation of speech, relative to tones or scrambled speech, activates the superior temporal sulci (STS) bilaterally. In this study, we attempted to segregate the neural responses to phonological, lexical, and semantic input by contrasting activation elicited by heard words, meaningless syllables, and environmental sounds. Inevitable differences between the duration and amplitude of each stimulus type were controlled with auditory noise bursts matched to each activation stimulus. Half the subjects were instructed to say “okay” in response to all stimuli. The other half repeated back the words and syllables, named the source of the sounds, and said “okay” to the control stimuli (noise bursts). We looked for stimulus effects that were consistent across tasks. The results revealed that central regions of the STS were equally responsive to speech (words and syllables) and familiar sounds, whereas the posterior and anterior regions of the left superior temporal gyrus were more active for speech. The effect of semantic input was small but revealed more activation in the inferior temporal cortex for words and familiar sounds than for syllables and noise. In addition, words (relative to syllables, sounds, and noise) enhanced activation in temporo-parietal areas that have previously been linked to modality-independent semantic processing. Thus, in cognitive terms, we dissociate phonological (speech) and semantic responses and propose that word specificity arises from functional integration among shared phonological and semantic areas.

