Children’s encoding of simultaneity in British Sign Language narratives

2002, Vol 5 (2), pp. 131-165
Author(s): Gary Morgan

Narrative discourse in BSL is first analyzed in an adult signer by describing how fixed and shifted sign space is used for reference and the encoding of simultaneity. Although children as young as 4 years old use parts of these sign spaces in isolation, their combined use in encoding simultaneity in narrative is a major hurdle to achieving full mastery of British Sign Language (BSL). The paper describes the developmental trends in encoding simultaneity in BSL ‘frog story’ narratives from a group of 12 signing children, aged 4;3 to 13;4. We focus on the gradual control of reference in sign space. A transcription framework for recording this aspect of sign discourse is also outlined. The results point away from a role for iconicity and instead toward general patterns in narrative development as driving the organization of sign space and reference.

2015, Vol 1 (1)
Author(s): Kearsy Cormier, Jordan Fenlon, Adam Schembri

Abstract Sign languages have traditionally been described as having a distinction between (1) arbitrary (referential or syntactic) space, considered to be a purely grammatical use of space in which locations arbitrarily represent concrete or abstract subject and/or object arguments using pronouns or indicating verbs, for example, and (2) motivated (topographic or surrogate) space, involving mapping of locations of concrete referents onto the signing space via classifier constructions. Some linguists have suggested that it may be misleading to see the two uses of space as being completely distinct from one another. In this study, we use conversational data from the British Sign Language Corpus (www.bslcorpusproject.org) to look at the use of space with modified indicating verbs – specifically the directions in which these verbs are used, as well as the co-occurrence of eyegaze shifts and constructed action. Our findings suggest that indicating verbs are frequently produced in conditions that use space in a motivated way and are rarely modified using arbitrary space. This contrasts with previous claims that indicating verbs in BSL prototypically use arbitrary space. We discuss the implications of this for theories about grammaticalisation and the role of gesture in sign languages, and for sign language teaching.


2013, Vol 5 (4), pp. 313-343
Author(s): Helen Earis, Kearsy Cormier

Abstract This paper discusses how point of view (POV) is expressed in British Sign Language (BSL) and spoken English narrative discourse. Spoken languages can mark changes in POV using strategies such as direct/indirect discourse, whereas signed languages can mark changes in POV in a unique way using “role shift”. Role shift is where the signer “becomes” a referent by taking on attributes of that referent, e.g. facial expression. In this study, two native BSL users and two native British English speakers were asked to tell the story “The Tortoise and the Hare”. The data were then compared to see how point of view is expressed and maintained in both languages. The results indicated that the spoken English users preferred the narrator's perspective, whereas the BSL users preferred a character's perspective. This suggests that spoken and signed language users may structure stories in different ways. However, some co-speech gestures and facial expressions used in the spoken English stories to denote characters' thoughts and feelings bear resemblance to the hand movements and facial expressions used by the BSL storytellers. This suggests that while approaches to storytelling may differ, both languages share some gestural resources which manifest themselves in different ways across different modalities.


2020, Vol 37 (4), pp. 571-608
Author(s): Diane Brentari, Laura Horton, Susan Goldin-Meadow

Abstract Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.


1982, Vol 1031 (1), pp. 155-178
Author(s): James G. Kyle, Bencie Woll, Peter Llewellyn-Jones

2016, Vol 28 (1), pp. 20-40
Author(s): Velia Cardin, Eleni Orfanidou, Lena Kästner, Jerker Rönnberg, Bencie Woll, ...

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer reaction times (RTs) and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


Author(s): Joanna Atkinson, Tanya Denmark, Jane Marshall, Cath Mummery, Bencie Woll
