Gebarentaal Van Doven

1986 ◽  
Vol 24 ◽  
pp. 111-117
Author(s):  
Trude Schermer

Until the sixties, linguists did not show any interest in the natural language of prelingually deaf people. Generally speaking, their communication system was not considered a real language comparable to any spoken language. The signs used by deaf people were taken as natural gestures. In 1880, at the Milan conference on deaf education, it was decided that signs should no longer be used in schools for the deaf and that deaf people should not be allowed to use their own communication system. Instead, the spoken language of the hearing environment should be learned. At that time, deaf educators were convinced of the damaging influence of the use of signs on spoken language development. However, there is no evidence for this. On the contrary, research has shown that the use of sign language as a first language improves the communicative abilities of deaf people, which could be the basis for learning the spoken language. Despite this resolution, deaf communities continued, albeit isolated and not openly, to use their own communication system. In 1963 a book was published by an American linguist, William Stokoe, that changed the way in which people thought about sign language. He showed how signs can be analysed into elements comparable to phonemes in spoken language and started the linguistic research on grammatical aspects of American Sign Language. This research showed that sign language is indeed a 'real' language, equal to any spoken language, and that deaf people should have the right to use this language. Following American research, many linguists in Europe discovered sign languages in their countries, even in traditionally oral countries like the Netherlands and Belgium. In this paper some grammatical aspects of sign languages are discussed.

2021 ◽  
Vol 8 (3) ◽  
pp. 110-132
Author(s):  
Khunaw Sulaiman Pirot ◽  
Wrya Izaddin Ali

This paper, entitled ‘The Common Misconceptions about Sign Language’, is concerned with the most common misconceptions about sign language. It also deals with sign language and its relation to spoken language. Sign language, primarily used by deaf people, is a fully developed human language that does not use sounds for communication; rather, it is a visual-gestural system that uses hand, body and facial gestures. One of the misconceptions is that all sign languages are the same worldwide. Such assumptions cause problems. Accordingly, some questions have been raised: first, is sign language universal? Second, is sign language based on spoken language? And third, is sign language invented by hearing people? The aim of the paper is to reach a deeper understanding of sign language. It also demonstrates the similarities and differences between the two modalities: sign language and spoken language. The paper is based on several hypotheses. One of the hypotheses is that sign languages are pantomimes and gestures. It also hypothesizes that the process of language acquisition in sign language for deaf people differs from language acquisition in spoken language for hearing people. To answer the questions raised, a qualitative approach is adopted. The procedure is to collect data about the subject from books and articles and then analyse the data to achieve the aim of the study. One of the conclusions is that sign language is not universal. It is recommended that further work be carried out on the differences between American Sign Language (ASL) or British Sign Language (BSL) and zmânî âmâžaî kurdî (ZAK, Kurdish Sign Language) at all linguistic levels.


Author(s):  
Franc Solina ◽  
Slavko Krapez ◽  
Ales Jaklic ◽  
Vito Komac

Deaf people, as a marginal community, may have severe problems in communicating with hearing people. Usually, they have many problems even with tasks that are simple for hearing people, such as understanding the written language. However, deaf people are very skilled in using a sign language, which is their native language. A sign language is a set of signs or hand gestures. A gesture in a sign language equals a word in a written language. Similarly, a sentence in a written language equals a sequence of gestures in a sign language. In the distant past deaf people were discriminated against and believed to be incapable of learning and thinking independently. Only after the year 1500 were the first attempts made to educate deaf children. An important breakthrough was the realization that hearing is not a prerequisite for understanding ideas. One of the most important early educators of the deaf and the first promoter of sign language was Charles Michel De L’Epée (1712-1789) in France. He founded the first public school for deaf people. His teachings about sign language quickly spread all over the world. Like spoken languages, different sign languages and dialects evolved around the world. According to the National Association of the Deaf, American Sign Language (ASL) is the third most frequently used language in the United States, after English and Spanish. ASL has more than 4,400 distinct signs. The Slovenian sign language (SSL), which is used in Slovenia and also serves as a case study sign language in this chapter, contains approximately 4,000 different gestures for common words. Signs require one or both hands for signing. Facial expressions which accompany signing are also important, since they can modify the basic meaning of a hand gesture. To communicate proper nouns and obscure words, sign languages employ finger spelling. Since the majority of signing is with full words, signed conversation can proceed at the same pace as spoken conversation.


PEDIATRICS ◽  
1994 ◽  
Vol 93 (1) ◽  
pp. A62-A62

Just as no one can pinpoint the origins of spoken language in prehistory, the roots of sign language remain hidden from view. What linguists do know is that sign languages have sprung up independently in many different places. Signing probably began with simple gestures, but then evolved into a true language with structured grammar. "In every place we've ever found deaf people, there's sign," says anthropological linguist Bob Johnson. But it's not the same language. "I went to a Mayan village where, out of 400 people, 13 were deaf, and they had their own Mayan Sign - I'd guess it's been maintained for thousands of years." Today at least 50 native sign languages are "spoken" worldwide, all mutually incomprehensible, from British and Israeli Sign to Chinese Sign.


1991 ◽  
Vol 39 ◽  
pp. 75-82
Author(s):  
Beppie van den Bogaerde

Sign Language of the Netherlands (SLN) is considered to be the native language of many prelingually deaf people in the Netherlands. Although research has provided evidence that sign languages are fully fledged natural languages, many misconceptions still abound about sign languages and deaf people. The low status of sign languages all over the world, the attitude of hearing people towards deaf people and their languages, and the resulting attitude of the deaf towards their own languages restricted the development of these languages until recently. Due to the poor results of deaf education and the dissatisfaction amongst educators of the deaf, parents of deaf children and deaf people themselves, a change of attitude towards the function of sign language in interaction with deaf people can be observed; many hearing people dealing with deaf people in one way or another wish to learn the sign language of the deaf community of their country. Many hearing parents of deaf children, teachers of the deaf, student-interpreters and linguists are interested in sign language and want to follow a course to improve their signing ability. In order to develop sign language courses, sign language teachers and teaching materials are needed. And precisely these are missing. This is caused by several factors. First, deaf people in general do not receive the same education as hearing people, due to their inability to learn the spoken language of their environment to such an extent that they have access to the full educational program. This prevents them, among other things, from becoming teachers in elementary and secondary schools, or sign language teachers. Although they are fluent "signers", they lack the competence in the spoken language of their country to obtain a teacher's degree in their sign language. A second problem is caused by the fact that sign languages are visual languages: no adequate system has yet been found to write down a sign language. So until now hardly any teaching materials have been available. Sign language courses should be developed with the help of native signers, who should be educated to become language teachers; with their help and with the help of video material and computer software, it will be possible in future to teach sign languages like any other language. But in order to reach this goal, it is imperative that deaf children get a better education so that they can contribute to the emancipation of their language.


2020 ◽  
pp. 55-92
Author(s):  
John D. Bonvillian ◽  
Nicole Kissane Lee ◽  
Tracy T. Dooley ◽  
Filip T. Loncke

Chapter 3 introduces the reader to various aspects of sign languages, including their historical development and use within educational contexts by Deaf communities in Europe and the United States. Also covered is the initiation of the field of sign language linguistics by William C. Stokoe, a linguist who systematically proved that American Sign Language (ASL) was indeed a language with its own distinct structure and properties that differed from any spoken language. The phonological parameters of signs receive considerable attention, highlighting ways in which the unique properties of sign languages allow them to represent meaning in ways that are more consistently transparent and iconic than similar phenomena in the speech modality. Despite these similarities across sign languages, the differences among the sign languages of the world led Deaf persons to create and develop the lingua franca of International Sign (previously Gestuno) for use at international conventions. Finally, the similarities and distinctions between the processes of language development and acquisition across the modalities of speech and sign are discussed, as well as how signing benefits the learning of spoken language vocabulary by hearing children.


Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 75-89 ◽  
Author(s):  
David MacGregor

In analyzing the use of space in American Sign Language (ASL), Liddell (2003) argues convincingly that no account of ASL can be complete without a discussion of how linguistic signs and non-linguistic gestures and gradient phenomena work together to create meaning. This represents a departure from the assumptions of much of linguistic theory, which has attempted to describe purely linguistic phenomena as part of an autonomous system. It also raises the question of whether these phenomena are peculiar to ASL and other sign languages, or if they also apply to spoken language. In this paper, I show how Liddell’s approach can be applied to English data to provide a fuller explanation of how speakers create meaning. Specifically, I analyze Jack Lemmon’s use of space, gesture, and voice in a scene from the movie “Mister Roberts”.


Author(s):  
R. Elakkiya ◽  
Mikhail Grif ◽  
Alexey Prikhodko ◽  
Maxim Bakaev ◽  
...  

In our paper, we consider approaches towards the recognition of the sign languages used by deaf people in Russia and India. The structure of the recognition system for individual gestures is proposed based on the identification of its five components: configuration, orientation, localization, movement and non-manual markers. We review the methods applied for the recognition of both individual gestures and continuous Indian and Russian sign languages. In particular, we consider the problem of building corpora of sign languages, as well as sets of training data (datasets). We note the similarity of certain individual gestures in Russian and Indian sign languages and specify the structure of the local dataset for static gestures of the Russian sign language. For the dataset, 927 video files with static one-handed gestures were collected and converted to JSON using the OpenPose library. After analyzing 21 points of the skeletal model of the right hand, the obtained reliability for the choice of points was 0.61, which was found insufficient. It is noted that the recognition of individual gestures and sign speech in general is complicated by the need for accurate tracking of various components of the gestures, which are performed quite quickly and are complicated by overlapping hands and faces. To solve this problem, we further propose an approach related to the development of a biosimilar neural network, which is to process visual information similarly to the human cerebral cortex: identification of lines, construction of edges, detection of movements, identification of geometric shapes, determination of the direction and speed of the objects’ movement. We are currently testing a biologically similar neural network proposed by A.V. Kugaevskikh on video files from the Russian sign language dataset.
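The dataset-construction step described above can be sketched as follows. The JSON layout mirrors OpenPose's per-frame output, in which "hand_right_keypoints_2d" holds 21 (x, y, confidence) triples for the right-hand skeletal model; the confidence-averaging and threshold filter are a hypothetical illustration of the reliability check, not the paper's exact procedure:

```python
import json

def right_hand_keypoints(openpose_json: str):
    """Return the 21 (x, y, confidence) triples for the right hand
    from one OpenPose per-frame JSON record."""
    data = json.loads(openpose_json)
    person = data["people"][0]                   # first detected signer
    flat = person["hand_right_keypoints_2d"]     # 63 numbers = 21 * (x, y, c)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

def mean_confidence(keypoints):
    """Average detection confidence over the 21 points,
    usable as a per-frame reliability score."""
    return sum(c for _, _, c in keypoints) / len(keypoints)

# Synthetic frame standing in for one of the 927 converted video files.
frame = json.dumps({"people": [{
    "hand_right_keypoints_2d": [v for i in range(21)
                                for v in (10.0 * i, 5.0 * i, 0.9)]
}]})
kps = right_hand_keypoints(frame)
print(len(kps))                                  # 21 skeletal points

# A frame would be kept only if its reliability clears a chosen threshold
# (the paper found an overall value of 0.61 insufficient).
print(mean_confidence(kps) >= 0.61)              # True for this synthetic frame
```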


2020 ◽  
Vol 37 (4) ◽  
pp. 571-608
Author(s):  
Diane Brentari ◽  
Laura Horton ◽  
Susan Goldin-Meadow

Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.
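The constraint-ranking account above can be illustrated with a minimal Optimality Theory sketch. The constraint names (Economy, BeRedundant) and the two candidates are invented for illustration and are not taken from the paper's analysis; the point is only that reranking the same constraints flips the winning output, which is how OT models crosslinguistic variation:

```python
def ot_winner(candidates, ranking):
    """Pick the candidate with the fewest violations on the highest-ranked
    constraint where candidates differ (lexicographic comparison of
    violation profiles, as in a standard OT tableau)."""
    def profile(cand):
        return tuple(candidates[cand].get(c, 0) for c in ranking)
    return min(candidates, key=profile)

# Hypothetical candidates for expressing agency and number:
# "one-verb" packs both into a single classifier verb;
# "two-verb" spreads them over a multiple-verb predicate.
cands = {
    "one-verb": {"BeRedundant": 1},  # expresses each feature only once
    "two-verb": {"Economy": 1},      # uses an extra verb
}

print(ot_winner(cands, ["Economy", "BeRedundant"]))   # -> one-verb
print(ot_winner(cands, ["BeRedundant", "Economy"]))   # -> two-verb
```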


2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have comparable prosodic systems to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.

