Frequency distribution and spreading behavior of different types of mouth actions in three sign languages

2008 · Vol 11 (1) · pp. 45-67
Author(s): Onno A. Crasborn, Els van der Kooij, Dafydd Waters, Bencie Woll, Johanna Mesch

In this paper, we present a comparative study of mouth actions in three European sign languages: British Sign Language (BSL), Nederlandse Gebarentaal (Sign Language of the Netherlands, NGT), and Swedish Sign Language (SSL). We propose a typology for, and report the frequency distribution of, the different types of mouth actions observed. In accordance with previous studies, we find the three languages remarkably similar — both in the types of mouth actions they use, and in how these mouth actions are distributed. We then describe how mouth actions can extend over more than one manual sign. This spreading of mouth actions is the primary focus of this paper. Based on an analysis of comparable narrative material in the three languages, we demonstrate that the direction as well as the source and goal of spreading may be language-specific.

2020 · Vol 37 (4) · pp. 571-608
Author(s): Diane Brentari, Laura Horton, Susan Goldin-Meadow

Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses: one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs), particularly multiple-verb predicates, reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
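The constraint-ranking idea lends itself to a small computational illustration. Below is a minimal Python sketch of Optimality-Theoretic evaluation, assuming each candidate form is scored by per-constraint violation counts; the constraint names (BeRedundant, *ComplexVerb) and the two candidates are illustrative placeholders, not the paper's actual constraints or data. The winner is the candidate whose violation profile is smallest when constraints are compared in ranked (lexicographic) order.

```python
# Minimal sketch of OT evaluation: hypothetical constraints and candidates.
from typing import Dict, List

def ot_winner(candidates: Dict[str, Dict[str, int]],
              ranking: List[str]) -> str:
    """Return the candidate whose violation profile is smallest when
    constraints are compared in ranked (lexicographic) order."""
    return min(candidates,
               key=lambda c: [candidates[c].get(k, 0) for k in ranking])

# Two hypothetical ways to express agency and number in a classifier VP:
# packed into one verb, or spread across a multiple-verb predicate.
candidates = {
    "one-verb":  {"*ComplexVerb": 1, "BeRedundant": 0},
    "two-verbs": {"*ComplexVerb": 0, "BeRedundant": 1},
}

# A language ranking BeRedundant above *ComplexVerb prefers the single
# verb; the reverse ranking prefers spreading the marking over two verbs.
print(ot_winner(candidates, ["BeRedundant", "*ComplexVerb"]))  # one-verb
print(ot_winner(candidates, ["*ComplexVerb", "BeRedundant"]))  # two-verbs
```

The point of the sketch is only that the same candidate set yields different winners under different rankings, which is how reranked constraints can model the crosslinguistic variation the paper reports.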


2016 · Vol 28 (1) · pp. 20-40
Author(s): Velia Cardin, Eleni Orfanidou, Lena Kästner, Jerker Rönnberg, Bencie Woll, ...

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


Gesture · 2013 · Vol 13 (1) · pp. 1-27
Author(s): Rachel Sutton-Spence, Donna Jo Napoli

Sign Language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the ‘classifier system’ of their languages. Signed languages, as they create visually-motivated messages, utilise categoricity (more traditionally considered ‘language’) and analogy (more traditionally considered extra-linguistic and the domain of ‘gesture’). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual — so going beyond categorisation and into new areas of analogue.


2007 · Vol 10 (2) · pp. 177-200
Author(s): Jordan Fenlon, Tanya Denmark, Ruth Campbell, Bencie Woll

Linguists have suggested that non-manual and manual markers are used in sign languages to indicate prosodic and syntactic boundaries. However, little is known about how native signers interpret non-manual and manual cues with respect to sentence boundaries. Six native signers of British Sign Language (BSL) were asked to mark sentence boundaries in two narratives: one presented in BSL and one in Swedish Sign Language (SSL). For comparative analysis, non-signers undertook the same tasks. Results indicated that both native signers and non-signers were able to use visual cues effectively in segmentation and that their decisions were not dependent on knowledge of the signed language. Signed narratives contain visible cues to their prosodic structure which are available to signers and non-signers alike.


2015 · Vol 1 (1)
Author(s): Kearsy Cormier, Jordan Fenlon, Adam Schembri

Sign languages have traditionally been described as having a distinction between (1) arbitrary (referential or syntactic) space, considered to be a purely grammatical use of space in which locations arbitrarily represent concrete or abstract subject and/or object arguments using pronouns or indicating verbs, for example, and (2) motivated (topographic or surrogate) space, involving mapping of locations of concrete referents onto the signing space via classifier constructions. Some linguists have suggested that it may be misleading to see the two uses of space as being completely distinct from one another. In this study, we use conversational data from the British Sign Language Corpus (www.bslcorpusproject.org) to look at the use of space with modified indicating verbs – specifically the directions in which these verbs are used as well as the co-occurrence of eyegaze shifts and constructed action. Our findings suggest that indicating verbs are frequently produced in conditions that use space in a motivated way and are rarely modified using arbitrary space. This contrasts with previous claims that indicating verbs in BSL prototypically use arbitrary space. We discuss the implications of this for theories about grammaticalisation and the role of gesture in sign languages and for sign language teaching.


2020 · pp. 026765832090685
Author(s): Sannah Gulamani, Chloë Marshall, Gary Morgan

Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and ten deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency and duration of the different viewpoints used, and the number of articulators used simultaneously. We found that even though learners’ and deaf signers’ narratives did not differ in overall duration, learners’ narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers in using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.


Sign languages are visual languages that use hand, facial, and body movements as a means of communication. There are over 135 different sign languages around the world, including American Sign Language (ASL), Indian Sign Language (ISL), and British Sign Language (BSL). Sign language is the main form of communication for many people who are deaf or hard of hearing, but sign languages also have much to offer everyone. Our proposed system is a web application with two modules. The first module accepts information in natural language (input text) and displays the corresponding information as sign language images (GIF format). The second module accepts information in sign language (an input hand gesture for any ASL letter), detects the letter, and displays it as text output. The system is built to bridge the communication gap between deaf and hearing people: those who do not know American Sign Language can use it either to learn the language or to communicate with someone who signs. This approach enables quick communication without waiting for a human interpreter to translate the sign language. The application is developed using the Django and Flask frameworks and incorporates natural language processing and a neural network. We focus on improving the living standards of hearing-impaired people, for whom everyday tasks can be difficult when the people around them do not know sign language. The application can also serve as a teaching tool for relatives and friends of deaf people, as well as for anyone interested in learning sign language.
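To make the first module concrete, here is a minimal Flask sketch of a text-to-sign endpoint. Everything in it is an assumption for illustration, not the paper's actual code: the /translate route, the GIF_DIR folder of word-level GIFs named after lowercase English words, and the fallback to per-letter fingerspelling GIFs when no word-level clip exists.

```python
# Minimal sketch of the text-to-GIF module (hypothetical names throughout).
import os
import string

from flask import Flask, jsonify, request

app = Flask(__name__)
GIF_DIR = "static/gifs"  # assumed folder: "hello.gif", "a.gif", "b.gif", ...

def text_to_gifs(text: str) -> list:
    """Map each word to a word-level sign GIF if one exists; otherwise
    fall back to one fingerspelling GIF per letter (an assumed strategy)."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    clips = []
    for word in cleaned.split():
        word_gif = os.path.join(GIF_DIR, word + ".gif")
        if os.path.exists(word_gif):
            clips.append(word_gif)
        else:
            clips.extend(os.path.join(GIF_DIR, ch + ".gif")
                         for ch in word if ch.isalpha())
    return clips

@app.route("/translate", methods=["POST"])
def translate():
    # Expects JSON like {"text": "hello world"}; returns an ordered GIF list
    # that the front end would play back in sequence.
    text = request.get_json(force=True).get("text", "")
    return jsonify({"gifs": text_to_gifs(text)})

if __name__ == "__main__":
    app.run(debug=True)
```

The second module would sit behind a similar route, passing the uploaded gesture image through the trained letter classifier and returning the predicted character as text.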


1982 · Vol 1031 (1) · pp. 155-178
Author(s): James G. Kyle, Bencie Woll, Peter Llewellyn-Jones
