The Signs of Silence – An Overview of Systems of Sign Languages and Co-Speech Gestures

2019 ◽  
Vol 16 (1) ◽  
pp. 123-144
Author(s):  
Emilija Mustapić ◽  
Frane Malenica

The paper presents an overview of sign languages and co-speech gestures as two means of communication realised through the visuo-spatial modality. We review previous research to examine the correlation between spoken and sign language phonology, and also provide insight into the basic features of co-speech gestures. By analysing these features, we are able to see how these means of communication utilise phases of production (in the case of gestures) or parts of individual signs (in the case of sign languages) to convey or complement meaning. Recent insights into sign languages as bona fide linguistic systems, and into co-speech gestures as a system which has no linguistic features of its own but accompanies spoken language, have shown that communication does not take place within a single modality but is rather multimodal. By comparing gestures and sign languages to spoken languages, we can trace the transition from systems of communication involving simple form-meaning pairings to the fully fledged morphological and syntactic complexity of spoken and sign languages, which gives us a new outlook on the emergence of linguistic phenomena.

1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

Abstract While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


2019 ◽  
Vol 39 (4) ◽  
pp. 367-395 ◽  
Author(s):  
Matthew L. Hall ◽  
Wyatte C. Hall ◽  
Naomi K. Caselli

Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute claims (1) that there are harmful effects of sign language and (2) that listening and spoken language are necessary for optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling, given that natural sign languages provide a host of benefits for DHH children – especially in the prevention and reduction of language deprivation.


2017 ◽  
Vol 20 (1) ◽  
pp. 109-128 ◽  
Author(s):  
Ana Mineiro ◽  
Patrícia Carmo ◽  
Cristina Caroça ◽  
Mara Moita ◽  
Sara Carvalho ◽  
...  

Abstract In Sao Tome and Principe there are approximately five thousand deaf and hard-of-hearing individuals. Until recently, these people had no language to use among them other than basic home signs used only to communicate with their families. With this communication gap in mind, a project was set up to help them come together in a common space in order to create a dedicated environment for a common sign language to emerge. In less than two years, the first cohort began to sign and to develop a newly emerging sign language – the Sao Tome and Principe Sign Language (LGSTP). Signs were elicited by means of drawings and pictures and recorded from the beginning of the project. The emergent structures of signs in this new language were compared with those reported for other emergent sign languages such as the Al-Sayyid Bedouin Sign Language and the Lengua de Señas de Nicaragua, and several similarities were found at the first stage. In this preliminary study on the emergence of LGSTP, it was observed that, in its first stage, signs are mostly iconic and exhibit a greater involvement of the articulators and a larger signing space when compared with subsequent stages of LGSTP emergence and with other sign languages. Although holistic signs are the prevalent structure, compounding seems to be emerging. At this stage of emergence, OSV seems to be the predominant syntactic structure of LGSTP. Yet the data suggest that new signers exhibit difficulties in syntactic constructions with two arguments.


2020 ◽  
Vol 6 (1) ◽  
pp. 89-118
Author(s):  
Nick Palfreyman

Abstract In contrast to sociolinguistic research on spoken languages, little attention has been paid to how signers employ variation as a resource to fashion social meaning. This study focuses on an extremely understudied social practice, that of sign language usage in Indonesia, and asks where one might look to find socially meaningful variables. Using spontaneous data from a corpus of BISINDO (Indonesian Sign Language), it blends methodologies from Labovian variationism and analytic practices from the ‘third wave’ with a discursive approach to investigate how four variable linguistic features are used to express social identities. These features occur at different levels of linguistic organisation, from the phonological to the lexical and the morphosyntactic, and point to identities along regional and ethnic lines, as well as hearing status. In applying third wave practices to sign languages, constructed action and mouthings in particular emerge as potent resources for signers to make social meaning.


Sign language is the primary method of communication for speech- and hearing-impaired people around the world, and most speech- and hearing-impaired people understand only a single sign language. Thus, there is an increasing demand for sign language interpreters. For hearing people, learning a sign language is difficult, and for a speech- and hearing-impaired person, learning a spoken language is often impossible. A great deal of research is being done in the domain of automatic sign language recognition. Different methods, such as computer vision, data gloves, and depth sensors, can be used to train a computer to interpret sign language. Interpretation is performed from sign to text, text to sign, speech to sign, and sign to speech. Because different countries use different sign languages, signers of different sign languages are unable to communicate with each other. Analyzing the characteristic features of gestures provides insights into a sign language, and common features across sign language gestures can help in designing a sign language recognition system. Such a system would help reduce the communication gap between sign language users and spoken language users.
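As an illustration of the kind of pipeline such recognition systems use, the sketch below classifies isolated signs from pre-extracted hand-landmark features in Python. It is a minimal sketch under stated assumptions, not a description of any specific system above: the landmark features are random placeholders, the three-sign vocabulary is invented, and a nearest-neighbor classifier stands in for whatever model a real system would use.

```python
# Minimal sketch of a vision-based, sign-to-text recognition step (illustrative only).
# Assumption: a pose-estimation front end has already produced 21 normalized 2-D hand
# keypoints per video frame; the feature values and three-sign vocabulary are made up.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

N_LANDMARKS = 21               # keypoints for one hand
FEATURE_DIM = N_LANDMARKS * 2  # (x, y) per keypoint

def flatten_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Turn a (21, 2) array of keypoints into a single feature vector."""
    return landmarks.reshape(-1)

# Hypothetical training data: ten labeled frames per sign.
rng = np.random.default_rng(0)
signs = ["HELLO", "THANK-YOU", "YES"]
X_train = rng.random((30, FEATURE_DIM))   # placeholder keypoint features
y_train = np.repeat(signs, 10)            # 10 frames per sign label

# A nearest-neighbor classifier is a simple baseline for isolated-sign recognition.
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

def recognize(frame_landmarks: np.ndarray) -> str:
    """Map one frame's hand landmarks to the most likely sign label (sign to text)."""
    return clf.predict(flatten_landmarks(frame_landmarks)[np.newaxis, :])[0]

if __name__ == "__main__":
    test_frame = rng.random((N_LANDMARKS, 2))   # stand-in for a real detection
    print("Predicted sign:", recognize(test_frame))
```

In practice, the feature-extraction front end (computer vision, data gloves, or depth sensors, as mentioned above) and a sequence model over whole signing streams would replace the single-frame placeholders used here.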


2020 ◽  
pp. 016502542095819
Author(s):  
Julia Krebs ◽  
Dietmar Roehm ◽  
Ronnie B. Wilbur ◽  
Evie A. Malaia

The age at which natural language is acquired has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn sign language later—a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined across three sentence types: (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, which are uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.
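To make the design of such an analysis concrete, the following Python sketch fits simple regression models of reaction time and acceptability rating on age and age-of-acquisition group. Everything here is hypothetical: the data are simulated, the column names are invented, and ordinary least squares is a simplified stand-in for whatever (likely mixed-effects) models the actual study used.

```python
# Illustrative sketch of the kind of analysis described above: regressing reaction times
# and acceptability ratings on participant age and age-of-acquisition group. All data and
# column names are simulated/invented; OLS is a simplified stand-in for the study's models.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120

df = pd.DataFrame({
    "age": rng.integers(28, 59, n),                                  # 28-58 years
    "aoa_group": rng.choice(["early", "later"], n),                  # 0-3 vs 4-7 years
    "structure": rng.choice(["simple", "topicalized", "classifier"], n),
    "order": rng.choice(["SOV", "OSV"], n),
})
# Simulate the reported pattern: older participants respond more slowly.
df["rt"] = 900 + 8 * (df["age"] - 28) + rng.normal(0, 60, n)
df["rating"] = rng.integers(1, 8, n)                                 # 7-point scale

# Reaction-time model: does age predict slower responses?
rt_model = smf.ols("rt ~ age + C(aoa_group)", data=df).fit()
print(rt_model.params)

# Acceptability model: does the effect of word order depend on age of acquisition,
# controlling for sentence type?
rating_model = smf.ols("rating ~ C(order) * C(aoa_group) + C(structure)", data=df).fit()
print(rating_model.params)
```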


PEDIATRICS ◽  
1994 ◽  
Vol 93 (1) ◽  
pp. A62-A62

Just as no one can pinpoint the origins of spoken language in prehistory, the roots of sign language remain hidden from view. What linguists do know is that sign languages have sprung up independently in many different places. Signing probably began with simple gestures, but then evolved into a true language with structured grammar. "In every place we've ever found deaf people, there's sign," says anthropological linguist Bob Johnson. But it's not the same language. "I went to a Mayan village where, out of 400 people, 13 were deaf, and they had their own Mayan Sign - I'd guess it's been maintained for thousands of years." Today at least 50 native sign languages are "spoken" worldwide, all mutually incomprehensible, from British and Israeli Sign to Chinese Sign.


Gesture ◽  
2014 ◽  
Vol 14 (3) ◽  
pp. 263-296 ◽  
Author(s):  
Luke Fleming

With the exception of Plains Indian Sign Language and Pacific Northwest sawmill sign languages, highly developed alternate sign languages (sign languages typically employed by and for the hearing) not only share common structural linguistic features, but their use is also characterized by convergent ideological commitments concerning communicative medium and linguistic modality. Though both modalities encode comparable denotational content, speaker-signers tend to understand manual-visual sign as a pragmatically appropriate substitute for oral-aural speech. This paper suggests that two understudied clusters of alternate sign languages, Armenian and Cape York Peninsula sign languages, offer a general model for the development of alternate sign languages, one in which the gesture-to-sign continuum is dialectically linked to hypertrophied forms of interactional avoidance, up to and including complete silence in the co-presence of affinal relations. These cases illustrate that the pragmatic appropriateness of sign over speech relies upon local semiotic ideologies which tend to conceptualize the manual-visual linguistic modality on analogy to the gestural communication employed in interactional avoidance, and thus as not counting as true language.


2021 ◽  
Vol 8 (3) ◽  
pp. 110-132
Author(s):  
Khunaw Sulaiman Pirot ◽  
Wrya Izaddin Ali

This paper, entitled ‘The Common Misconceptions about Sign Language’, is concerned with the most common misconceptions about sign language and with the relation between sign language and spoken language. Sign language, primarily used by deaf people, is a fully developed human language that does not use sounds for communication; rather, it is a visual-gestural system that uses the hands, body and facial gestures. One of the misconceptions is that all sign languages are the same worldwide. Such assumptions cause problems. Accordingly, some questions have been raised: first, is sign language universal? Second, is sign language based on spoken language? And third, is sign language invented by hearing people? The aim of the paper is to reach a deeper understanding of sign language. It also demonstrates the similarities and differences between the two modalities: sign language and spoken language. The paper is based on several hypotheses. One of the hypotheses is that sign languages are pantomimes and gestures. It also hypothesizes that the process of language acquisition in sign language for deaf people is different from language acquisition in spoken language for hearing people. To answer the questions raised, a qualitative approach is adopted: data about the subject are collected from books and articles and then analyzed to achieve the aim of the study. One of the conclusions is that sign language is not universal. It is recommended that more work be carried out on the differences between either American Sign Language (ASL) or British Sign Language (BSL) and zmânî âmâžaî kurdî (ZAK, Kurdish Sign Language) at all linguistic levels.

