Negating speech

Gesture
2014
Vol 14 (3)
pp. 263-296
Author(s):  
Luke Fleming

With the exception of Plains Indian Sign Language and Pacific Northwest sawmill sign languages, highly developed alternate sign languages (sign languages typically employed by and for the hearing) share not only common structural linguistic features; their use is also characterized by convergent ideological commitments concerning communicative medium and linguistic modality. Though both modalities encode comparable denotational content, speaker-signers tend to understand manual-visual sign as a pragmatically appropriate substitute for oral-aural speech. This paper suggests that two understudied clusters of alternate sign languages, the Armenian and Cape York Peninsula sign languages, offer a general model for the development of alternate sign languages, one in which the gesture-to-sign continuum is dialectically linked to hypertrophied forms of interactional avoidance, up to and including complete silence in the co-presence of affinal relations. These cases illustrate that the pragmatic appropriateness of sign over speech relies upon local semiotic ideologies which tend to conceptualize the manual-visual linguistic modality on analogy with the gestural communication employed in interactional avoidance, and thus as not counting as true language.

2017
Vol 20 (1)
pp. 109-128
Author(s):  
Ana Mineiro ◽  
Patrícia Carmo ◽  
Cristina Caroça ◽  
Mara Moita ◽  
Sara Carvalho ◽  
...  

Abstract In São Tomé and Príncipe there are approximately five thousand deaf and hard-of-hearing individuals. Until recently, these people had no language to use among themselves other than basic home signs used only to communicate with their families. With this communication gap in mind, a project was set up to bring them together in a common space, creating a dedicated environment in which a shared sign language could emerge. In less than two years, the first cohort began to sign and to develop a newly emerging sign language – São Tomé and Príncipe Sign Language (LGSTP). Signs were elicited by means of drawings and pictures and recorded from the beginning of the project. The emergent sign structures of this new language were compared with those reported for other emergent sign languages, such as Al-Sayyid Bedouin Sign Language and Lengua de Señas de Nicaragua, and several similarities were found at the first stage. In this preliminary study of the emergence of LGSTP, we observed that in its first stage signs are mostly iconic and exhibit a greater involvement of the articulators and a larger signing space when compared with subsequent stages of LGSTP emergence and with other sign languages. Although holistic signs are the prevalent structure, compounding seems to be emerging. At this stage of emergence, OSV seems to be the predominant syntactic structure of LGSTP. Yet the data suggest that new signers have difficulties with syntactic constructions involving two arguments.


2020
Vol 6 (1)
pp. 89-118
Author(s):  
Nick Palfreyman

Abstract In contrast to sociolinguistic research on spoken languages, little attention has been paid to how signers employ variation as a resource to fashion social meaning. This study focuses on an extremely understudied social practice, that of sign language usage in Indonesia, and asks where one might look to find socially meaningful variables. Using spontaneous data from a corpus of BISINDO (Indonesian Sign Language), it blends methodologies from Labovian variationism and analytic practices from the ‘third wave’ with a discursive approach to investigate how four variable linguistic features are used to express social identities. These features occur at different levels of linguistic organisation, from the phonological to the lexical and the morphosyntactic, and point to identities along regional and ethnic lines, as well as hearing status. In applying third wave practices to sign languages, constructed action and mouthings in particular emerge as potent resources for signers to make social meaning.


Among the many people with disabilities in our world, those who are deaf and mute cannot convey their messages to hearing people, and conversation becomes very difficult for them. Deaf people cannot hear what a hearing person says; likewise, mute people must convey their message in sign language, which hearing people cannot understand unless they know the sign language themselves. This creates the need for an application that can support conversation between deaf, mute, and hearing people. Here we use hand gestures of Indian Sign Language (ISL), covering all the alphabet letters and the digits 0-9. We created the dataset of alphabet and digit gestures ourselves. After building the dataset, we extracted features using a bag-of-words model together with image preprocessing. From the extracted features, histograms are generated which map alphabets to images. Finally, these features are fed to a supervised machine learning model to predict the gesture/sign. We also trained a CNN model for comparison.
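The pipeline this abstract describes can be made concrete with a short sketch. The following is a minimal, hypothetical illustration of a bag-of-visual-words classifier in Python; the choice of ORB features, the vocabulary size, and the linear SVM are all assumptions for illustration, since the paper does not specify them.

```python
# Minimal sketch of a bag-of-visual-words gesture classifier.
# ORB features, K=100, and the SVM are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

K = 100  # assumed visual-vocabulary size

def descriptors_of(path):
    """Detect local features in one grayscale gesture image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.ORB_create().detectAndCompute(img, None)
    return desc  # shape (n_keypoints, 32), or None if nothing found

def build_vocabulary(paths):
    """Cluster all training descriptors into K 'visual words'."""
    stack = np.vstack([d for p in paths if (d := descriptors_of(p)) is not None])
    return KMeans(n_clusters=K, n_init=10).fit(stack.astype(np.float32))

def histogram(path, vocab):
    """Map one image to a K-bin histogram of visual-word counts."""
    words = vocab.predict(descriptors_of(path).astype(np.float32))
    return np.bincount(words, minlength=K)

def train(train_paths, train_labels):
    """Fit a supervised classifier over the histograms.
    train_paths / train_labels stand in for the authors' own dataset
    of alphabet and digit gestures."""
    vocab = build_vocabulary(train_paths)
    X = np.array([histogram(p, vocab) for p in train_paths])
    return vocab, SVC(kernel="linear").fit(X, train_labels)
```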


2019
Vol 16 (1)
pp. 123-144
Author(s):  
Emilija Mustapić ◽  
Frane Malenica

The paper presents an overview of sign languages and co-speech gestures as two means of communication realised through the visuo-spatial modality. We review previous research to examine the correlation between spoken and sign language phonology, and also provide an insight into the basic features of co-speech gestures. By analysing these features, we can see how these means of communication utilise phases of production (in the case of gestures) or parts of individual signs (in the case of sign languages) to convey or complement meaning. Recent insights into sign languages as bona fide linguistic systems, and into co-speech gestures as a system which has no linguistic features of its own but accompanies spoken language, have shown that communication does not take place within just a single modality but is rather multimodal. By comparing gestures and sign languages to spoken languages, we are able to trace the transition from systems of communication involving simple form-meaning pairings to fully fledged morphological and syntactic complexity in spoken and sign languages, which gives us a new outlook on the emergence of linguistic phenomena.


In our society, it is very difficult for hearing-impaired and speech-impaired people to communicate with ordinary people. They communicate through sign languages, which use visually transmitted sign patterns, generally including hand gestures. Because sign languages are difficult to learn and are not universal, there is a communication barrier between hearing-impaired and ordinary people. To break this barrier, a system is required that can convert sign language to voice and vice versa in real time. Here, we propose a real-time two-way system for communication between hearing-impaired and hearing people, which converts Indian Sign Language (ISL) letters into the equivalent alphabet letters and vice versa. In the proposed system, images of ISL hand gestures are captured using a camera. Image preprocessing is then performed so that these images are ready for feature extraction, for which we take the novel approach of using the Canny edge detection algorithm. Once the necessary details are extracted from an image, it is matched against the dataset, classified using a convolutional neural network, and the corresponding text is generated. This text is then converted into voice. Similarly, using a microphone, the voice input of an ordinary person is captured and converted into text, which is matched with the dataset to generate the corresponding sign. This system reduces the communication gap between hearing-impaired and ordinary people. Our method achieves 98% accuracy on the 35 alphanumeric gestures of ISL.
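As a rough illustration of the preprocessing stage described above, the sketch below captures a camera frame and applies the Canny edge detection algorithm with OpenCV before classification. The threshold values, blur kernel, and input size are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of the capture-and-preprocess step before the CNN.
# Thresholds (50, 150), the 5x5 blur, and the 64x64 input size are assumed.
import cv2

def preprocess_frame(frame):
    """Convert a camera frame into an edge map ready for classification."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)           # assumed thresholds
    return cv2.resize(edges, (64, 64))            # assumed CNN input size

cap = cv2.VideoCapture(0)          # default camera
ok, frame = cap.read()
if ok:
    x = preprocess_frame(frame)    # feed x to the trained CNN classifier
cap.release()
```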


Author(s):  
Mikhail G. Grif
R. Elakkiya
Alexey L. Prikhodko
Maxim A. Bakaev
...  

In this paper, we consider recognition of sign languages (SL), with a particular focus on Russian and Indian SLs. The proposed recognition system includes five components: configuration, orientation, localization, movement, and non-manual markers. The analysis covers methods for recognizing both individual gestures and continuous signing in Indian and Russian sign languages (RSL). To recognize individual gestures, we developed the RSL Dataset, which includes more than 35,000 files covering over 1,000 signs. Each sign was performed with 5 repetitions by at least 5 deaf native signers of Russian Sign Language from Siberia. To isolate epenthesis in continuous RSL, 312 sentences with 5 repetitions each were selected and recorded on video. Five types of movement were distinguished: "No gesture", "There is a gesture", "Initial movement", "Transitional movement", and "Final movement". The markup of sentences for highlighting epenthesis was carried out on the Supervisely platform. A recurrent network architecture (LSTM) was built and implemented using the TensorFlow Keras machine learning library; the accuracy of correct recognition of epenthesis was 95%. Work is continuing on a similar dataset for the recognition of both individual gestures and continuous Indian Sign Language (ISL). To recognize hand gestures, the MediaPipe Holistic library module was used, which contains a group of trained neural network models that extract the coordinates of key points of a person's body, palms, and face from an image; an accuracy of 85% was achieved on the verification data. In the future, the amount of labeled data must be increased significantly. To recognize non-manual components, a number of rules were developed for certain movements of the face, covering positions of the eyes, eyelids, mouth, tongue, and head tilt.
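The following is a minimal sketch of the kind of Keras LSTM classifier the paper describes: it labels a clip of per-frame keypoint vectors with one of the five movement types. The sequence length and feature dimension are assumptions (e.g., MediaPipe Holistic pose-plus-hands landmarks), not the authors' published architecture.

```python
# Minimal sketch: LSTM over per-frame keypoint vectors, predicting one
# of the five movement types. SEQ_LEN and N_FEATURES are assumptions.
import tensorflow as tf

SEQ_LEN = 30       # assumed frames per clip
N_FEATURES = 258   # assumed per-frame vector: 33 pose pts x4 + 2x21 hand pts x3
N_CLASSES = 5      # the five movement types listed in the abstract

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),   # zero-padded short clips
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, ...) with X_train of shape
# (n_clips, SEQ_LEN, N_FEATURES), built from MediaPipe Holistic landmarks.
```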


2015
Vol 18 (1)
pp. 90-131
Author(s):  
Ulrike Zeshan
Sibaji Panda

This article explores patterns of co-use of two sign languages in casual conversational data from four deaf bilinguals, who are fluent in Indian Sign Language (ISL) and Burundi Sign Language (BuSL). We investigate the contributions that both sign languages make to these conversations at lexical, clause, and discourse level, including a distinction between signs from closed grammatical classes and open lexical classes. The results show that despite individual differences between signers, there are also striking commonalities. Specifically, we demonstrate the shared characteristics of the signers’ bilingual outputs in the domains of negation, where signers prefer negators found in both sign languages, and wh-questions, where signers choose BuSL for specific question words and ISL for general wh-questions. The article thus makes the argument that these signers have developed a fairly stable bilingual variety that is characteristic of this particular community of practice, and we explore theoretical implications arising from these patterns.


Sign languages are visual languages that use hand, facial, and body movements as a means of communication. There are over 135 different sign languages around the world, including American Sign Language (ASL), Indian Sign Language (ISL), and British Sign Language (BSL). Sign language is commonly used as the main form of communication for people who are Deaf or hard of hearing, but sign languages also have a lot to offer everyone. In our proposed system, we create a web application with two modules. The first module accepts information in natural language (input text) and shows the corresponding information as sign language images (GIF format). The second module accepts information in sign language (an input hand gesture for any ASL letter), detects the letter, and displays it as text output. The system is built to bridge the communication gap between deaf-mute people and hearing people: those who do not know American Sign Language can use it either to learn the sign language or to communicate with someone who signs. This approach enables quick communication without waiting for a human interpreter to translate the sign language. The application is developed using the Django and Flask frameworks and incorporates natural language processing and a neural network. We focus on improving the living standards of hearing-impaired people, for whom everyday tasks can be very difficult when the people around them do not know sign language. The application can also serve as a teaching tool for relatives and friends of deaf people, as well as anyone interested in learning sign language.
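To make the first module concrete, here is a minimal, hypothetical sketch of the text-to-sign lookup it describes: whole phrases map to single GIFs where available, and everything else falls back to letter-by-letter fingerspelling. The directory layout and phrase list are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the text-to-GIF module: phrase GIF if one
# exists, otherwise one fingerspelling GIF per letter.
import os
import string

GIF_DIR = "static/gifs"                            # assumed asset folder
PHRASES = {"hello", "thank you", "good morning"}   # assumed phrase-level GIFs

def text_to_gifs(text):
    """Return the ordered list of GIF paths that render the input text."""
    text = text.lower().strip()
    if text in PHRASES:                            # whole-phrase GIF available
        return [os.path.join(GIF_DIR, text.replace(" ", "_") + ".gif")]
    # fall back to fingerspelling, one GIF per letter
    return [os.path.join(GIF_DIR, c + ".gif")
            for c in text if c in string.ascii_lowercase]

print(text_to_gifs("hello"))   # ['static/gifs/hello.gif']
print(text_to_gifs("cab"))     # letter-by-letter fingerspelling
```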

