Sociolinguistic Variation in the Nativisation of BSL Fingerspelling

2017 · Vol 3 (1) · pp. 115–144
Author(s): Matt Brown, Kearsy Cormier

Abstract: British Sign Language (BSL) is a visual-gestural language distinct from the spoken languages used in the United Kingdom but in contact with them. One product of this contact is the use of fingerspelling to represent English words via their orthography. Fingerspelled loans can become "nativised", with manual production adapting to conform more closely to the phonological constraints of the native lexicon. Much of the previous literature on fingerspelling has focused on one-handed systems, but, unlike the majority of sign languages, BSL uses a two-handed manual alphabet. What is the nature of nativisation in BSL, and does it exhibit sociolinguistic variation? We apply a cross-linguistic model of nativisation to BSL Corpus conversation and narrative data (http://bslcorpusproject.org) from 150 signers in six UK regions. Mixed effects modelling is employed to determine the influence of social factors. Results show that the participants' home region is the most significant factor, with London and Birmingham signers significantly favouring fully nativised fingerspelled forms. Non-nativised sequences are increasingly favoured with age among signers in Glasgow and Belfast. Gender and parental language background are not found to be significant factors in nativisation. The findings also suggest a form of reduction specific to London and Birmingham.
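
As a rough illustration of the kind of analysis this abstract reports, the sketch below fits a mixed-effects model of nativisation with region, age, and gender as fixed effects and signer as a random effect. It is a minimal sketch with simulated data: the column names, the simulated effect sizes, and the use of statsmodels' linear mixedlm (rather than the mixed-effects logistic regression a binary outcome would normally call for, e.g. glmer in R's lme4) are all assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of a mixed-effects analysis of fingerspelling
# nativisation. Data are simulated; nothing here comes from the BSL Corpus.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions = ["London", "Birmingham", "Glasgow", "Belfast"]
rows = []
for s in range(40):                       # 40 hypothetical signers
    region = regions[s % 4]
    age = int(rng.integers(18, 80))
    gender = "F" if s % 2 else "M"
    for _ in range(10):                   # 10 fingerspelled tokens per signer
        # Simulated tendency: London/Birmingham favour nativised forms;
        # in Glasgow/Belfast, nativisation declines with age.
        p = 0.7 if region in ("London", "Birmingham") else 0.4 - 0.003 * (age - 18)
        rows.append(dict(nativised=int(rng.random() < p),
                         region=region, age=age, gender=gender,
                         signer=f"s{s:02d}"))
tokens = pd.DataFrame(rows)

# Fixed effects for the social factors, random intercept per signer.
model = smf.mixedlm("nativised ~ region + age + gender",
                    data=tokens, groups=tokens["signer"])
print(model.fit().summary())
```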

2020 · Vol 37 (4) · pp. 571–608
Author(s): Diane Brentari, Laura Horton, Susan Goldin-Meadow

Abstract: Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses: one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs), particularly multiple-verb predicates, reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
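
To make the Optimality-Theoretic mechanism concrete: in OT, all languages share a set of violable constraints, and a language selects the candidate form whose violation profile is smallest under that language's ranking of those constraints. The toy sketch below shows how two different rankings of the same two constraints select different verb-phrase shapes. The constraint names, candidate notation, and violation counts are invented placeholders, not the paper's analysis.

```python
# Minimal sketch of Optimality-Theoretic evaluation with hypothetical
# constraints on how agency and number are packaged into verbs.
from typing import Callable

# A constraint maps a candidate form to a count of violations.
Constraint = Callable[[str], int]

def star_multiverb(form: str) -> int:
    # One violation per verb beyond the first (prefer single-verb predicates).
    return form.count("V") - 1

def one_feature_per_verb(form: str) -> int:
    # One violation per verb bundling more than one feature simultaneously.
    return sum(1 for verb in form.split() if "," in verb)

def evaluate(candidates: list[str], ranking: list[Constraint]) -> str:
    # The winner has the lexicographically smallest violation profile
    # under the language-specific ranking of the shared constraints.
    return min(candidates, key=lambda c: tuple(con(c) for con in ranking))

# Two ways to express agency and number: bundled in one verb, or spread
# over a two-verb predicate.
candidates = ["V[agent,plural]", "V[agent] V[plural]"]

print(evaluate(candidates, [star_multiverb, one_feature_per_verb]))
# -> V[agent,plural]    (a language that prioritises single-verb predicates)
print(evaluate(candidates, [one_feature_per_verb, star_multiverb]))
# -> V[agent] V[plural] (a language that limits simultaneous marking)
```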


2016 · Vol 28 (1) · pp. 20–40
Author(s): Velia Cardin, Eleni Orfanidou, Lena Kästner, Jerker Rönnberg, Bencie Woll, …

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer response times (RTs) and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


Gesture · 2013 · Vol 13 (1) · pp. 1–27
Author(s): Rachel Sutton-Spence, Donna Jo Napoli

Sign language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the 'classifier system' of their languages. Signed languages, as they create visually-motivated messages, utilise categoricity (more traditionally considered 'language') and analogy (more traditionally considered extra-linguistic and the domain of 'gesture'). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual — so going beyond categorisation and into new areas of analogue.


2008 · Vol 11 (1) · pp. 45–67
Author(s): Onno A. Crasborn, Els van der Kooij, Dafydd Waters, Bencie Woll, Johanna Mesch

In this paper, we present a comparative study of mouth actions in three European sign languages: British Sign Language (BSL), Nederlandse Gebarentaal (Sign Language of the Netherlands, NGT), and Swedish Sign Language (SSL). We propose a typology for, and report the frequency distribution of, the different types of mouth actions observed. In accordance with previous studies, we find the three languages remarkably similar — both in the types of mouth actions they use, and in how these mouth actions are distributed. We then describe how mouth actions can extend over more than one manual sign. This spreading of mouth actions is the primary focus of this paper. Based on an analysis of comparable narrative material in the three languages, we demonstrate that the direction as well as the source and goal of spreading may be language-specific.


2015 · Vol 15 (2) · pp. 151–181
Author(s): Rose Stamp, Adam Schembri, Jordan Fenlon, Ramas Rentelis

2007 · Vol 10 (2) · pp. 177–200
Author(s): Jordan Fenlon, Tanya Denmark, Ruth Campbell, Bencie Woll

Linguists have suggested that non-manual and manual markers are used in sign languages to indicate prosodic and syntactic boundaries. However, little is known about how native signers interpret non-manual and manual cues with respect to sentence boundaries. Six native signers of British Sign Language (BSL) were asked to mark sentence boundaries in two narratives: one presented in BSL and one in Swedish Sign Language (SSL). For comparative analysis, non-signers undertook the same tasks. Results indicated that both native signers and non-signers were able to use visual cues effectively in segmentation and that their decisions were not dependent on knowledge of the signed language. Signed narratives contain visible cues to their prosodic structure which are available to signers and non-signers alike.


2015 · Vol 1 (1)
Author(s): Kearsy Cormier, Jordan Fenlon, Adam Schembri

Abstract: Sign languages have traditionally been described as having a distinction between (1) arbitrary (referential or syntactic) space, considered to be a purely grammatical use of space in which locations arbitrarily represent concrete or abstract subject and/or object arguments using pronouns or indicating verbs, for example, and (2) motivated (topographic or surrogate) space, involving mapping of locations of concrete referents onto the signing space via classifier constructions. Some linguists have suggested that it may be misleading to see the two uses of space as being completely distinct from one another. In this study, we use conversational data from the British Sign Language Corpus (www.bslcorpusproject.org) to look at the use of space with modified indicating verbs, specifically the directions in which these verbs are used as well as the co-occurrence of eyegaze shifts and constructed action. Our findings suggest that indicating verbs are frequently produced in contexts that use space in a motivated way and are rarely modified using arbitrary space. This contrasts with previous claims that indicating verbs in BSL prototypically use arbitrary space. We discuss the implications of this for theories about grammaticalisation and the role of gesture in sign languages, and for sign language teaching.


2019
Author(s): Samuel Evans, Cathy Price, Jörn Diedrichsen, Eva Gutierrez-Sigut, Mairéad MacSweeney

Abstract: Do different languages evoke different conceptual representations? If so, the greatest divergence might be expected between languages that differ most in structure, such as sign and speech. Unlike speech bilinguals, hearing sign-speech bilinguals use languages conveyed in different modalities. We used functional magnetic resonance imaging and representational similarity analysis (RSA) to quantify the similarity of semantic representations elicited by the same concepts presented in spoken British English and British Sign Language in hearing, early sign-speech bilinguals. We found shared representations for semantic categories in left posterior middle and inferior temporal cortex. Despite shared category representations, the same spoken words and signs did not elicit similar neural patterns. Thus, contrary to previous univariate activation-based analyses of speech and sign perception, we show that semantic representations evoked by speech and sign are only partially shared. This demonstrates the unique perspective that sign languages and RSA provide in understanding how language influences conceptual representation.
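
For readers unfamiliar with RSA, the sketch below shows its core logic: build a representational dissimilarity matrix (RDM) over the same set of concepts for each modality, then ask how well the two representational geometries agree. The random arrays stand in for voxel patterns; the array shapes, the correlation-distance metric, and the Spearman comparison are generic RSA defaults assumed here, not the paper's exact parameters.

```python
# Minimal sketch of representational similarity analysis (RSA).
# Data are random placeholders, not fMRI patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_concepts, n_voxels = 20, 100

# Voxel activity patterns for the same 20 concepts in each modality.
speech_patterns = rng.standard_normal((n_concepts, n_voxels))
sign_patterns = rng.standard_normal((n_concepts, n_voxels))

# RDM: pairwise correlation distance between concept patterns
# (pdist returns the condensed upper triangle directly).
speech_rdm = pdist(speech_patterns, metric="correlation")
sign_rdm = pdist(sign_patterns, metric="correlation")

# Second-order comparison: Spearman correlation of the two RDMs.
# High rho would indicate a shared representational geometry.
rho, p = spearmanr(speech_rdm, sign_rdm)
print(f"RDM similarity: rho={rho:.2f}, p={p:.3f}")
```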


2020 · pp. 026765832090685
Author(s): Sannah Gulamani, Chloë Marshall, Gary Morgan

Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, with a particular focus on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency and duration of the different viewpoints used, and the number of articulators used simultaneously. We found that even though learners' and deaf signers' narratives did not differ in overall duration, learners' narratives had less content. Learners used character viewpoint less frequently than deaf signers, and although they spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers at using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.

