Hand and Mouth: Cortical Correlates of Lexical Processing in British Sign Language and Speechreading English

2008 · Vol 20 (7) · pp. 1220-1234
Author(s): Cheryl M. Capek, Dafydd Waters, Bencie Woll, Mairéad MacSweeney, Michael J. Brammer, et al.

Spoken languages use a single set of articulators, the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators being observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used functional magnetic resonance imaging to compare speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that the languages use? Common perisylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation reflecting language form was also observed. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL did, whereas BSL processing generated greater activation at the temporo-parieto-occipital junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different types of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are thus not only differentially sensitive to the distinctive articulators of speech and of sign but are also sensitive to the different articulators used within the (signed) language.

2007 · Vol 10 (2) · pp. 177-200
Author(s): Jordan Fenlon, Tanya Denmark, Ruth Campbell, Bencie Woll

Linguists have suggested that non-manual and manual markers are used in sign languages to indicate prosodic and syntactic boundaries. However, little is known about how native signers interpret non-manual and manual cues with respect to sentence boundaries. Six native signers of British Sign Language (BSL) were asked to mark sentence boundaries in two narratives: one presented in BSL and one in Swedish Sign Language (SSL). For comparative analysis, non-signers undertook the same tasks. Results indicated that both native signers and non-signers used visual cues effectively in segmentation and that their decisions did not depend on knowledge of the signed language. Signed narratives thus contain visible cues to their prosodic structure that are available to signers and non-signers alike.


2016 · Vol 28 (1) · pp. 20-40
Author(s): Velia Cardin, Eleni Orfanidou, Lena Kästner, Jerker Rönnberg, Bencie Woll, et al.

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions that process the sensorimotor characteristics of different phonological parameters of sign languages are also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of non-occurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did: nonsigns were associated with longer response times and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


2021 · pp. 1-12
Author(s): William Matchin, Deniz İlkbaşaran, Marla Hatrak, Austin Roth, Agnes Villwock, et al.

Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they respond specifically to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally: it improved accuracy and response time on the picture-probe recognition task and elicited a left-lateralized activation pattern in the anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipito-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. They further suggest that the neurolinguistic processing of ASL is characterized by overlapping yet separable neural systems for syntactic and lexical processing.


2020 · pp. 026553222092459
Author(s): Justyna Kotowicz, Bencie Woll, Rosalind Herman

The evaluation of sign language proficiency needs to be based on measures with well-established psychometric properties. To date, no valid and reliable test is available to assess Polish Sign Language (Polski Język Migowy, PJM) skills in deaf children. Hence, our aim in this study was to adapt the British Sign Language Receptive Skills Test (the first standardized test of sign language proficiency in children) into PJM, a less researched sign language. In this paper, we present the first steps in the adaptation process and highlight linguistic and cultural similarities and differences between the British Sign Language Receptive Skills Test and the PJM adaptation. We collected data from 20 deaf children who were native signers (age range: 6 to 12) and 30 deaf children who were late learners of PJM (age range: 6 to 13). Preliminary analyses showed that the PJM Receptive Skills Test has acceptable psychometric characteristics (item analysis, validity, reliability, and sensitivity to age). Our long-term goal is to include younger children (age range: 3 to 6) and to standardize the PJM Receptive Skills Test so that it can be used in educational settings and in scientific research.
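As a rough illustration of the kind of psychometric checks reported above, the sketch below computes internal-consistency reliability (Cronbach's alpha) and corrected item-total correlations for a hypothetical 0/1 item-response matrix. This is a minimal sketch under assumed data; the variable names, sample sizes, and values are invented and do not reproduce the study's actual analysis.

```python
import numpy as np

# Hypothetical response matrix: rows = children, columns = test items (1 = correct).
# Simulated from a latent-skill model purely for illustration.
rng = np.random.default_rng(0)
ability = rng.normal(0, 1, size=(50, 1))       # latent skill per child
difficulty = rng.normal(0, 1, size=(1, 40))    # difficulty per item
p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((50, 40)) < p_correct).astype(float)

def cronbach_alpha(x: np.ndarray) -> float:
    """Internal-consistency reliability for an items matrix (rows = subjects)."""
    k = x.shape[1]                              # number of items
    item_vars = x.var(axis=0, ddof=1)           # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(x: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score excluding that item."""
    totals = x.sum(axis=1)
    return np.array([
        np.corrcoef(x[:, i], totals - x[:, i])[0, 1] for i in range(x.shape[1])
    ])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
print("Weakest items:", np.argsort(corrected_item_total(responses))[:5])
```

In an adaptation of this kind, items with low corrected item-total correlations would be candidates for revision or replacement before standardization.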


2002 · Vol 14 (7) · pp. 1064-1075
Author(s): Mairéad MacSweeney, Bencie Woll, Ruth Campbell, Gemma A. Calvert, Philip K. McGuire, et al.

In all signed languages used by deaf people, signs are executed in “sign space” in front of the body. Some signed sentences use this space to map detailed “real-world” spatial relationships directly. Such sentences can be considered to exploit sign space “topographically.” Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than for nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of the hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features unique to each broadens our understanding of the systems involved in language comprehension.


2011 · Vol 14 (1) · pp. 94-114
Author(s): Donna Lewin, Adam C. Schembri

This article investigates the claim that tongue protrusion (‘th’) acts as a nonmanual adverbial morpheme in British Sign Language (BSL) (Brennan 1992; Sutton-Spence & Woll 1999), drawing on narrative data produced by two deaf native signers as part of the European Cultural Heritage Online (ECHO) corpus. Data from ten BSL narratives were analysed to observe the frequency and form of tongue protrusion. The results of this preliminary investigation indicate that tongue protrusion occurs as part of the phonological formation of lexical signs (i.e., ‘echo phonology’; see Woll 2001) and as a separate meaningful unit that co-occurs (sometimes as part of constructed action) with classifier constructions and lexical verb signs. In the latter cases, the results suggest that ‘th’ sometimes functions as an adverbial morpheme in BSL, but with a greater variety of meanings than previously suggested in the BSL literature. One use of the adverbial appears similar to a nonmanual signal in American Sign Language described by Liddell (1980), although the form of the mouth gesture in our BSL data differs from what is reported in Liddell’s work. These findings suggest that the mouth gesture ‘th’ in BSL has a broad range of functions. Some uses of tongue protrusion, however, remain difficult to categorise, and further research with a larger dataset is needed.


2013 · Vol 5 (4) · pp. 313-343
Author(s): Helen Earis, Kearsy Cormier

This paper discusses how point of view (POV) is expressed in British Sign Language (BSL) and spoken English narrative discourse. Spoken languages can mark changes in POV using strategies such as direct/indirect discourse, whereas signed languages can mark changes in POV in a unique way using “role shift”, in which the signer “becomes” a referent by taking on attributes of that referent, e.g. facial expression. In this study, two native BSL users and two native British English speakers were asked to tell the story “The Tortoise and the Hare”. The data were then compared to see how point of view is expressed and maintained in both languages. The results indicated that the spoken English users preferred the narrator's perspective, whereas the BSL users preferred a character's perspective. This suggests that spoken and signed language users may structure stories in different ways. However, some co-speech gestures and facial expressions used in the spoken English stories to denote characters' thoughts and feelings resemble the hand movements and facial expressions used by the BSL storytellers. This suggests that while approaches to storytelling may differ, both languages share gestural resources that manifest themselves in different ways across different modalities.


2020 · Vol 37 (4) · pp. 571-608
Author(s): Diane Brentari, Laura Horton, Susan Goldin-Meadow

Two differences between signed and spoken languages that have been widely discussed in the literature are the degree to which morphology is expressed simultaneously (rather than sequentially) and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses: one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs), particularly multiple-verb predicates, reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints; the rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
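The constraint-ranking account can be made concrete with a toy Optimality Theory evaluator: each candidate form receives a violation profile over the ranked constraints, and the winner is the candidate whose profile is lexicographically smallest. The sketch below is a minimal illustration under invented constraints and candidates (including a stand-in for the “be redundant” constraint mentioned above); it is not the paper's actual analysis.

```python
# Toy Optimality Theory evaluator: the candidate with the lexicographically
# smallest violation profile under the ranked constraints wins.
# The constraints and candidates below are invented for illustration only.

def evaluate(candidates, ranked_constraints):
    """Return the candidate whose violation counts, in constraint-rank order, are minimal."""
    def profile(cand):
        return tuple(c(cand) for c in ranked_constraints)
    return min(candidates, key=profile)

def economy(cand):
    # Penalize expressing the same marking more than once (hypothetical constraint).
    return max(0, cand["markings"] - 1)

def be_redundant(cand):
    # Penalize any verb left unmarked for agency/number (hypothetical stand-in
    # for the 'be redundant' constraint mentioned in the abstract).
    return cand["verbs"] - cand["marked_verbs"]

# Two hypothetical ways to express agency/number in a two-verb predicate:
candidates = [
    {"name": "marked on both verbs", "verbs": 2, "marked_verbs": 2, "markings": 2},
    {"name": "marked on one verb",   "verbs": 2, "marked_verbs": 1, "markings": 1},
]

# "Language A" ranks economy over redundancy; "Language B" ranks the reverse.
print(evaluate(candidates, [economy, be_redundant])["name"])  # marked on one verb
print(evaluate(candidates, [be_redundant, economy])["name"])  # marked on both verbs
```

Reversing the ranking flips the winning form, which is the sense in which crosslinguistic variation can be captured by constraint re-ranking rather than by different constraint inventories.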

