Sign Language—Spoken Language Bilingualism: Code Mixing and Mode Mixing by ASL-English Bilinguals

2008 ◽  
pp. 312-335 ◽  
Author(s):  
Gerald P. Berent
2016 ◽  
Vol 21 (1) ◽  
pp. 104-120 ◽  
Author(s):  
Richard Bank ◽  
Onno Crasborn ◽  
Roeland van Hout

Mouthings, the spoken language elements in sign language discourse, are typically analysed as having a redundant, one-to-one relationship with manual signs, both semantically and temporally. We explore exceptions to this presupposed semantic and temporal congruency in a corpus of spontaneous signed conversation by deaf users of Sign Language of the Netherlands (NGT). We identify specifying mouthings (words with a different meaning than the co-occurring sign), solo mouthings (uttered while the hands are inactive) and added mouthings (words added to a signing stream without their corresponding sign), and make a sentence-level analysis of their occurrences. These non-redundant mouthings occurred in 12% of all utterances and were made by almost all signers. We argue for the presence of a code-blending continuum for NGT, where NGT is the matrix language and spoken Dutch is blended in to various degrees. We suggest expanding existing code-mixing models to allow for the description of bimodal mixing.


2016 ◽  
Vol 20 (5) ◽  
pp. 947-964 ◽  
Author(s):  
Laura Kanto ◽  
Marja-Leena Laakso ◽  
Kerttu Huttunen

In this study we followed the characteristics and use of code-mixing by eight KODAs – hearing children of Deaf parents – between the ages of 12 and 36 months. The children's interaction was video-recorded twice a year during three different play sessions: with their Deaf parent, with the Deaf parent and a hearing adult, and with the hearing adult alone. Additionally, data were collected on the children's overall language development in both sign language and spoken language. Our results showed that the children preferred to produce code-blends – simultaneous production of semantically congruent signs and words – in a way that accorded with the morphosyntactic structure of both languages being acquired. A Deaf parent as the interlocutor increased the number and affected the type of code-blended utterances. These findings suggest that code-mixing in young bimodal bilingual KODA children can be highly systematic and synchronised in nature and can indicate pragmatic development.


Linguistics ◽  
2016 ◽  
Vol 54 (6) ◽  
Author(s):  
Richard Bank ◽  
Onno Crasborn ◽  
Roeland van Hout

Abstract: Signed utterances can consist of simultaneously articulated manual signs and spoken language words. These "mouthings" (typically silent articulations) have been observed for many different sign languages. The present study aims to investigate the extent of such bimodal code-mixing in sign languages by investigating the frequency of mouthings produced by deaf users of Sign Language of the Netherlands (NGT), their co-occurrence with pointing signs, and whether any differences can be explained by sociolinguistic variables such as regional origin and age of the signer. We investigated over 10,000 mouth actions from 70 signers, and found that the mouth and the hands are equally active during signing. Moreover, around 80% of all mouth actions are mouthings, while the remaining 20% are unrelated to Dutch. We found frequency differences between individual signers and a small effect for level of education, but not for other sociolinguistic variables. Our results provide clear evidence that mouthings form an inextricable component of signed interaction. Rather than displaying effects of competition between languages or spoken language suppression, NGT signers demonstrate the potential of the visual modality to conjoin parallel information streams.


2017 ◽  
Vol 2 (12) ◽  
pp. 81-88
Author(s):  
Sandy K. Bowen ◽  
Silvia M. Correa-Torres

America's population is more diverse than ever before. The prevalence of students who are culturally and/or linguistically diverse (CLD) has been steadily increasing over the past decade. The changes in America's demographics require teachers who provide services to students with deafblindness to have an increased awareness of different cultures and diversity in today's classrooms, particularly regarding communication choices. Children who are deafblind may use spoken language with appropriate amplification, sign language or modified sign language, and/or some form of augmentative and alternative communication (AAC).


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term "superarticulation" is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the "superarticulatory arrays" of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

Abstract: While it is now accepted that sign languages should inform and constrain theories of 'Universal Grammar', their role in 'Universal Semantics' has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language 'loci' are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of 'context shift', which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


Author(s):  
Marc Marschark ◽  
Harry Knoors ◽  
Shirin Antia

This chapter discusses similarities and differences among the co-enrollment programs described in this volume. In doing so, it emphasizes the diversity among deaf learners and the concomitant difficulty of a “one size fits all” approach to co-enrollment programs as well as to deaf education at large. The programs described in this book thus understandably are also diverse in their approach to programming and to communication, in particular. For example, many encourage flexible use of spoken and sign modalities to encourage communication between DHH students, their hearing peers, and their classroom teachers. Others emphasize spoken language or sign language. Several programs include multi-grade classrooms, allowing DHH students to benefit socially and academically from active engagement in the classroom, and some report positive social and academic outcomes. Most programs follow a general education curriculum; all emphasize collaboration among staff as the key to success.


Author(s):  
Johannes Hennies ◽  
Kristin Hennies

In 2016, the first German bimodal bilingual co-enrollment program for deaf and hard-of-hearing (DHH) students, CODAs, and other hearing children was established in Erfurt, Thuringia. There is a tradition of different models of co-enrollment for DHH children in a spoken language setting in Germany, but there has been no permanent program for co-enrollment of DHH children who use sign language so far. This program draws from the experience of an existing model in Austria to enroll a group of DHH children using sign language in a regular school and from two well-documented bimodal bilingual programs in German schools for the deaf. The chapter describes the preconditions for the project, the political circumstances of the establishment of bimodal bilingual co-enrollment, and the factors that seem crucial for successful realization.


2019 ◽  
Vol 39 (4) ◽  
pp. 367-395 ◽  
Author(s):  
Matthew L. Hall ◽  
Wyatte C. Hall ◽  
Naomi K. Caselli

Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article 'Early Sign Language Exposure and Cochlear Implants' (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute claims (1) that there are harmful effects of sign language and (2) that listening and spoken language are necessary for the optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling in light of natural sign languages providing a host of benefits for DHH children – especially in the prevention and reduction of language deprivation.
