Sign language, like spoken language, promotes object categorization in young hearing infants

Cognition ◽  
2021 ◽  
Vol 215 ◽  
pp. 104845
Author(s):  
Miriam A. Novack ◽  
Diane Brentari ◽  
Susan Goldin-Meadow ◽  
Sandra Waxman


2017 ◽
Vol 2 (12) ◽  
pp. 81-88
Author(s):  
Sandy K. Bowen ◽  
Silvia M. Correa-Torres

America's population is more diverse than ever before. The prevalence of students who are culturally and/or linguistically diverse (CLD) has been steadily increasing over the past decade. The changes in America's demographics require teachers who provide services to students with deafblindness to have an increased awareness of different cultures and diversity in today's classrooms, particularly regarding communication choices. Children who are deafblind may use spoken language with appropriate amplification, sign language or modified sign language, and/or some form of augmentative and alternative communication (AAC).


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.
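As a schematic illustration of how loci can spell out logical variables (a hypothetical gloss in the style of the sign language semantics literature, not an example quoted from the article): a signer may establish John at locus a and Mary at locus b and then point back at those loci, roughly JOHN IX-a TELL MARY IX-b [ IX-a LIKE IX-b ], ‘John told Mary that he likes her’. Reading the loci as variables yields a Logical Form along the lines of, in LaTeX notation, \( \mathit{tell}(j,\, m,\, \mathit{like}(x, y)) \) with \( x \mapsto j \) and \( y \mapsto m \); the pointing signs IX-a and IX-b make overt the variables that the English pronouns ‘he’ and ‘her’ leave covert.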


Author(s):  
Marc Marschark ◽  
Harry Knoors ◽  
Shirin Antia

This chapter discusses similarities and differences among the co-enrollment programs described in this volume. In doing so, it emphasizes the diversity among deaf learners and the concomitant difficulty of a “one size fits all” approach to co-enrollment programs as well as to deaf education at large. The programs described in this book thus understandably are also diverse in their approach to programming and to communication, in particular. For example, many encourage flexible use of spoken and sign modalities to encourage communication between DHH students, their hearing peers, and their classroom teachers. Others emphasize spoken language or sign language. Several programs include multi-grade classrooms, allowing DHH students to benefit socially and academically from active engagement in the classroom, and some report positive social and academic outcomes. Most programs follow a general education curriculum; all emphasize collaboration among staff as the key to success.


Author(s):  
Johannes Hennies ◽  
Kristin Hennies

In 2016, the first German bimodal bilingual co-enrollment program for deaf and hard-of-hearing (DHH) students, CODAs, and other hearing children was established in Erfurt, Thuringia. There is a tradition of different models of co-enrollment for DHH children in a spoken language setting in Germany, but there has been no permanent program for co-enrollment of DHH children who use sign language so far. This program draws from the experience of an existing model in Austria to enroll a group of DHH children using sign language in a regular school and from two well-documented bimodal bilingual programs in German schools for the deaf. The chapter describes the preconditions for the project, the political circumstances of the establishment of bimodal bilingual co-enrollment, and the factors that seem crucial for successful realization.


2019 ◽  
Vol 39 (4) ◽  
pp. 367-395 ◽  
Author(s):  
Matthew L. Hall ◽  
Wyatte C. Hall ◽  
Naomi K. Caselli

Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute the claims (1) that sign language has harmful effects and (2) that listening and spoken language are necessary for the optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling, given that natural sign languages provide a host of benefits for DHH children – especially in the prevention and reduction of language deprivation.


2020 ◽  
Author(s):  
Ludivine Crible ◽  
Sílvia Gabarró-López

This paper provides the first contrastive analysis of a coherence relation (viz. addition) and its connectives across a sign language (French Belgian Sign Language) and a spoken language (French), both used in the same geographical area. The analysis examines the frequency and types of connectives that can express an additive relation, in order to contrast its “markedness” in the two languages, that is, whether addition is marked by dedicated connectives or by ambiguous, polyfunctional ones. Furthermore, we investigate the functions of the most frequent additive connective in each language (namely et and the sign SAME), starting from the observation that most connectives are highly polyfunctional. This analysis intends to show which functions are compatible with the meaning of addition in spoken and signed discourse. Despite a common core of shared discourse functions, the equivalence between et and SAME is only partial and relates to a difference in their semantics.


2008 ◽  
Vol 20 (2) ◽  
pp. 121-133 ◽  
Author(s):  
Philippe Dreuw ◽  
Daniel Stein ◽  
Thomas Deselaers ◽  
David Rybach ◽  
Morteza Zahedi ◽  
...  

2018 ◽  
Vol 129 ◽  
pp. e42-e43
Author(s):  
Jennifer Shum ◽  
Daniel Friedman ◽  
Patricia C. Dugan ◽  
Orrin Devinsky ◽  
Adeen Flinker
