3. Deaf Persons and Sign Languages

2020 ◽  
pp. 55-92
Author(s):  
John D. Bonvillian ◽  
Nicole Kissane Lee ◽  
Tracy T. Dooley ◽  
Filip T. Loncke

Chapter 3 introduces the reader to various aspects of sign languages, including their historical development and use within educational contexts by Deaf communities in Europe and the United States. Also covered is the founding of the field of sign language linguistics by William C. Stokoe, a linguist who systematically demonstrated that American Sign Language (ASL) was indeed a language with its own distinct structure and properties that differed from those of any spoken language. The phonological parameters of signs receive considerable attention, highlighting how the unique properties of sign languages allow them to represent meaning in ways that are more consistently transparent and iconic than comparable phenomena in the speech modality. Despite these commonalities across sign languages, the differences among the world's sign languages led Deaf persons to create and develop the lingua franca of International Sign (previously Gestuno) for use at international conventions. Finally, the similarities and distinctions between the processes of language development and acquisition across the modalities of speech and sign are discussed, as well as how signing benefits the learning of spoken language vocabulary by hearing children.

Author(s):  
Franc Solina ◽  
Slavko Krapez ◽  
Ales Jaklic ◽  
Vito Komac

Deaf people, as a marginalized community, may have severe problems communicating with hearing people. They often struggle even with tasks that are simple for hearing people, such as understanding written language. However, deaf people are very skilled in using a sign language, which is their native language. A sign language is a set of signs, or hand gestures. A gesture in a sign language corresponds to a word in a written language; similarly, a sentence in a written language corresponds to a sequence of gestures in a sign language. In the distant past deaf people were discriminated against and believed to be incapable of learning and thinking independently. Only after the year 1500 were the first attempts made to educate deaf children. An important breakthrough was the realization that hearing is not a prerequisite for understanding ideas. One of the most important early educators of the deaf, and the first promoter of sign language, was Charles-Michel de l'Épée (1712-1789) in France, who founded the first public school for deaf people. His teachings about sign language quickly spread all over the world. Like spoken languages, different sign languages and dialects evolved around the world. According to the National Association of the Deaf, American Sign Language (ASL) is the third most frequently used language in the United States, after English and Spanish. ASL has more than 4,400 distinct signs. The Slovenian sign language (SSL), which is used in Slovenia and also serves as the case-study sign language in this chapter, contains approximately 4,000 different gestures for common words. Signs require one or both hands for signing. Facial expressions that accompany signing are also important, since they can modify the basic meaning of a hand gesture. To communicate proper nouns and uncommon words, sign languages employ finger spelling. Since the majority of signing uses full words, signed conversation can proceed at the same pace as spoken conversation.


Author(s):  
Marc Marschark ◽  
Harry G. Lang ◽  
John A. Albertini

To understand the complex relations between language and learning, we have to look at both how children learn language and what it is that they learn that allows them to communicate with others. To accomplish this, we need to distinguish between apparent differences in language that are related to the modality of communication and actual differences in language fluencies observed among deaf children. It will also help to examine some relevant differences between deaf children and hearing children. We have already pointed out that the distinction between spoken language and sign language, while a theoretically important one for researchers, is an oversimplification for most practical purposes. It is rare that deaf children are exposed only to spoken language or only to sign language, even if that is the intention of their parents or teachers. According to 1999 data, approximately 55 percent of deaf children in the United States are formally educated in programs that report either using sign language exclusively (just over 5 percent) or signed and spoken language together (just over 49 percent) (Gallaudet University, Center for Applied Demographic Statistics). Because almost half of all deaf children in the United States are missed in such surveys, however, these numbers should be taken only as approximate. Comparisons of the language abilities of deaf children who primarily use sign language with those of deaf children who primarily use spoken language represent one of the most popular and potentially informative areas of research relating to language development and academic success. Unfortunately, this area is also one of the most complex. Educational programs emphasizing spoken or sign language often have different educational philosophies and curricula as well as different communication philosophies. Programs may admit only children with particular histories of early intervention, and parents will be drawn to different programs for a variety of reasons. Differences observed between children from any two programs thus might be the result of a number of variables rather than, or in addition to, language modality per se. Even when deaf children are educated in spoken language environments, they often develop systems of gestural communication with their parents (Greenberg et al., 1984).


2021 ◽  
Vol 8 (3) ◽  
pp. 110-132
Author(s):  
Khunaw Sulaiman Pirot ◽  
Wrya Izaddin Ali

This paper, entitled 'The Common Misconceptions about Sign Language', is concerned with the most common misconceptions about sign language. It also deals with sign language and its relation to spoken language. Sign language, primarily used by deaf people, is a fully developed human language that does not use sounds for communication; rather, it is a visual-gestural system that uses hand, body, and facial gestures. One of the misconceptions is that all sign languages are the same worldwide. Such assumptions cause problems. Accordingly, some questions have been raised: first, is sign language universal? Second, is sign language based on spoken language? And third, is sign language invented by hearing people? The aim of the paper is to reach a deeper understanding of sign language. It also demonstrates the similarities and differences between the two modalities: sign language and spoken language. The paper is based on several hypotheses. One hypothesis is that sign languages are pantomimes and gestures. Another is that the process of language acquisition in sign language for deaf people differs from the process of language acquisition in spoken language for hearing people. To answer the questions raised, a qualitative approach is adopted: data about the subject are collected from books and articles and then analyzed to meet the aim of the study. One of the conclusions is that sign language is not universal. It is recommended that further work be carried out on the differences between American Sign Language (ASL) or British Sign Language (BSL) and Kurdish Sign Language (zmânî âmâžaî kurdî, ZAK) at all linguistic levels.


Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 75-89 ◽  
Author(s):  
David MacGregor

In analyzing the use of space in American Sign Language (ASL), Liddell (2003) argues convincingly that no account of ASL can be complete without a discussion of how linguistic signs and non-linguistic gestures and gradient phenomena work together to create meaning. This represents a departure from the assumptions of much of linguistic theory, which has attempted to describe purely linguistic phenomena as part of an autonomous system. It also raises the question of whether these phenomena are peculiar to ASL and other sign languages, or whether they also apply to spoken language. In this paper, I show how Liddell’s approach can be applied to English data to provide a fuller explanation of how speakers create meaning. Specifically, I analyze Jack Lemmon’s use of space, gesture, and voice in a scene from the movie “Mr. Roberts”.


1986 ◽  
Vol 24 ◽  
pp. 111-117
Author(s):  
Trude Schermer

Until the 1960s, linguists showed no interest in the natural language of prelingually deaf people. Generally speaking, their communication system was not considered a real language comparable to any spoken language; the signs used by deaf people were taken to be natural gestures. In 1880, at the Milan conference on deaf education, it was decided that signs should no longer be used in schools for the deaf and that deaf people should not be allowed to use their own communication system. Instead, the spoken language of the hearing environment was to be learned. At that time deaf educators were convinced that the use of signs damaged spoken language development. However, there is no evidence for this. On the contrary, research has shown that the use of sign language as a first language improves the communicative abilities of deaf people, which can serve as the basis for learning the spoken language. Despite the Milan resolution, deaf communities continued, albeit in isolation and not openly, to use their own communication system. In 1963 a book was published by an American linguist, William Stokoe, that changed the way people thought about sign language. He showed how signs can be analysed into elements comparable to phonemes in spoken language and initiated linguistic research on grammatical aspects of American Sign Language. This research showed that sign language is indeed a 'real' language, equal to any spoken language, and that deaf people should have the right to use it. Following the American research, many linguists in Europe 'discovered' sign languages in their own countries, even in traditionally oral countries such as the Netherlands and Belgium. In this paper some grammatical aspects of sign languages are discussed.


2020 ◽  
Vol 37 (4) ◽  
pp. 571-608
Author(s):  
Diane Brentari ◽  
Laura Horton ◽  
Susan Goldin-Meadow

Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses, one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs), particularly multiple-verb predicates, reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.


2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways, all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language (ISL) demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


2021 ◽  
Author(s):  
Kathryn Woodcock ◽  
Steven L. Fischer

<div>"This Guide is intended for working interpreters, interpreting students and educators, and those who employ or purchase the services of interpreters. Occupational health education is essential for professionals in training, to avoid early attrition from practice. "Sign language interpreting" is considered to include interpretation between American Sign Language (ASL) and English, other spoken languages and corresponding sign languages, and between sign languages (e.g., Deaf Interpreters). Some of the occupational health issues may also apply equally to Communication Access Realtime Translation (CART) reporters, oral interpreters, and intervenors. The reader is encouraged to make as much use as possible of the information provided here". -- Introduction.</div><div><br></div>

