Morphological Accuracy in the Speech of Bimodal Bilingual Children with CIs

2019, Vol 24 (4), pp. 435-447
Author(s): Corina Goodwin, Diane Lillo-Martin

Abstract Sign language use in the (re)habilitation of children with cochlear implants (CIs) remains a controversial issue. Concerns that signing impedes spoken language development are based on research comparing children exposed to spoken and signed language (bilinguals) with children exposed only to speech (monolinguals), even though abundant research demonstrates that bilinguals and monolinguals differ in language development. We control for bilingualism effects by comparing bimodal bilingual (signing-speaking) children with CIs (BB-CI) to those with typical hearing (BB-TH). Each child had at least one Deaf parent and was exposed to ASL from birth. The BB-THs were exposed to English from birth by hearing family members, while the BB-CIs began English exposure after cochlear implantation around 22 months of age. Elicited speech samples were analyzed for accuracy of English grammatical morpheme production. Although there was a trend toward lower overall accuracy in the BB-CIs, this seemed driven by increased omission of the plural -s, suggesting an exaggerated role of perceptual salience in this group. Errors of commission were rare in both groups. Because both groups were bimodal bilinguals, trends toward group differences were likely caused by delayed exposure to spoken language or hearing through a CI, rather than by sign language exposure.

2019, Vol 39 (4), pp. 367-395
Author(s): Matthew L. Hall, Wyatte C. Hall, Naomi K. Caselli

Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute claims (1) that sign language has harmful effects and (2) that listening and spoken language are necessary for the optimal development of deaf children. While practical challenges remain (and are discussed) in providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling, given the host of benefits natural sign languages provide for DHH children – especially in the prevention and reduction of language deprivation.


2018, Vol 22 (2), pp. 185-231
Author(s): Trevor Johnston

Abstract Signed languages have been classified typologically as manual dominant or non-manual dominant for negation. In the former, negation is conveyed primarily by manual lexical signs, whereas in the latter it is conveyed primarily by non-manual signs. In support of this typology, the site and spread of headshaking in negated clauses was also described as linguistically constrained. Headshaking was thus said to be a formal, linguistic part of negation in signed languages, not a gestural one. This paper aims to establish the role of headshaking in negation in Auslan with reference to this typology. In this corpus-based study, I show that Auslan users almost always negate clauses using a manual negative sign. Although headshakes are found in just over half of these manually negated clauses, the position and spreading behaviour of headshakes do not appear to be linguistically constrained. I also show that signers only extremely rarely use a headshake as the sole negating element in a clause. I conclude that headshaking in Auslan appears similar to headshaking in the ambient face-to-face spoken language, English. I explore the implications of these findings for the proposed typology of negation in signed languages, in terms of the type of data used to support it and the assumptions about the relationship between gesture and signed languages that underlie it.


2021, pp. 145-152
Author(s): Amy Kissel Frisbie, Aaron Shield, Deborah Mood, Nicole Salamy, Jonathan Henner

This chapter is a joint discussion of key items presented in Chapters 4.1 and 4.2 related to the assessment of deaf and hearing children on the autism spectrum. From these chapters it becomes apparent that a number of aspects of signed language assessment are relevant to spoken language assessment. For example, there are several precautions to bear in mind about language assessments obtained via an interpreter. Some of these precautions apply solely to D/HH children, while others are applicable to assessments with hearing children in multilingual contexts. Equally, some aspects of spoken language assessment can be applied to signed language assessment. These include the importance of assessing pragmatic language skills, assessing multiple areas of language development, differentiating between ASD and other developmental disorders, and completing the language evaluation within a developmental framework. The authors conclude with suggestions for both spoken and signed language assessment.


2019, Vol 109 (2), pp. 332-341
Author(s): Eva Karltorp, Martin Eklöf, Elisabet Östlund, Filip Asp, Bo Tideholm, ...


Author(s): David Quinto-Pozos, Robert Adam

Language contact of various kinds is the norm in Deaf communities throughout the world, and this allows for exploration of the role of modality (spoken, signed or written, or a combination of these) and the channel of communication in language contact. Drawing its evidence largely from instances of American Sign Language (ASL), this chapter addresses and illustrates several of these themes – sign-speech contact, sign-writing contact, and sign-sign contact – examining instances of borrowing and bilingualism between some of these modalities and comparing them to contact between hearing users of spoken languages, specifically American English.


Gesture, 2004, Vol 4 (1), pp. 43-73
Author(s): Sherman Wilcox

In this paper I explore the role of gesture in the development of signed languages. Using data from American Sign Language, Catalan Sign Language, French Sign Language, and Italian Sign Language, as well as historical sources describing gesture in the Mediterranean region, I demonstrate that gesture enters the linguistic system via two distinct routes. In one, gesture serves as a source of lexical and grammatical morphemes in signed languages. In the second, elements become directly incorporated into signed language morphology, bypassing the lexical stage. Finally, I propose a unifying framework for understanding the gesture-language interface in signed and spoken languages.


2020, Vol 6 (1), pp. 13-52
Author(s): Lauren W. Reed

Abstract Most bilingualism and translanguaging studies focus on spoken language; less is known about how people use two or more ways of signing. Here, I take steps towards redressing this imbalance, presenting a case study of signed language in Port Moresby, Papua New Guinea. The study’s methodology is participant observation and analysis of conversational recordings between deaf signers. The Port Moresby deaf community uses two ways of signing: SIGN LANGUAGE and CULTURE. SIGN LANGUAGE is around 30 years old, and its lexicon is drawn largely from Australasian Signed English. In contrast, CULTURE – which is as old as each individual user – is characterised by signs of local origin, abundant depiction, and considerable individual variation. Despite SIGN LANGUAGE’s young age, its users have innovated a metalinguistic sign (SWITCH) to describe switching between ways of communicating. To conclude, I discuss how the Port Moresby situation challenges both the bilingualism and translanguaging approaches.


Target, 1995, Vol 7 (1), pp. 135-149
Author(s): William P. Isham

Abstract Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.

