Two languages at hand

2015 ◽  
Vol 18 (1) ◽  
pp. 90-131 ◽  
Author(s):  
Ulrike Zeshan ◽  
Sibaji Panda

This article explores patterns of co-use of two sign languages in casual conversational data from four deaf bilinguals, who are fluent in Indian Sign Language (ISL) and Burundi Sign Language (BuSL). We investigate the contributions that both sign languages make to these conversations at lexical, clause, and discourse level, including a distinction between signs from closed grammatical classes and open lexical classes. The results show that despite individual differences between signers, there are also striking commonalities. Specifically, we demonstrate the shared characteristics of the signers’ bilingual outputs in the domains of negation, where signers prefer negators found in both sign languages, and wh-questions, where signers choose BuSL for specific question words and ISL for general wh-questions. The article thus makes the argument that these signers have developed a fairly stable bilingual variety that is characteristic of this particular community of practice, and we explore theoretical implications arising from these patterns.

2006 ◽  
Vol 9 (1-2) ◽  
pp. 133-150 ◽  
Author(s):  
Katharina Schalber

The aim of this paper is to investigate the structure of polar (yes/no questions) and content questions (wh-questions) in Austrian Sign Language (ÖGS), analyzing the different nonmanual signals, the occurrence of question signs and their syntactic position. As I will show, the marking strategies used in ÖGS are no exception to the crosslinguistic observations that interrogative constructions in sign languages employ a variety of nonmanual signals and manual signs (Zeshan 2004). In ÖGS polar questions are marked with ‘chin down’, whereas content questions are indicated with ‘chin up’ or ‘head forward’ and content question signs. These same nonmanual markers are reported for Croatian sign language, indicating common foundation due to historical relations and intense language contact.


There are many people in our world with disabilities; among them, people who are deaf and dumb cannot convey their messages to normal people, and conversation becomes very difficult for them. Deaf people cannot hear and understand what a normal person is saying, and similarly dumb people need to convey their message using sign language, which a normal person cannot understand unless he or she knows that sign language. This creates the need for an application that enables conversation between deaf, dumb and normal people. Here we use hand gestures of Indian Sign Language (ISL), which cover all the alphabets and the digits 0-9. The dataset of alphabet and digit gestures was created by us. After building the dataset, we extracted features using bag-of-words and image preprocessing. From the extracted features, histograms are generated that map alphabets to images. Finally, these features are fed to a supervised machine learning model to predict the gesture/sign. We also used a CNN model for training.
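
A minimal sketch of the bag-of-words feature pipeline described above, in Python. ORB descriptors, a k-means visual vocabulary and an SVM classifier are illustrative assumptions, since the abstract does not name the exact descriptor or classifier used.

```python
# Sketch of a bag-of-visual-words pipeline for ISL gesture images.
# Assumptions: ORB descriptors, a k-means vocabulary and an SVM classifier;
# the original work may use different descriptors or classifiers.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_descriptors(image_paths):
    """Collect local descriptors from every training image."""
    orb = cv2.ORB_create()
    all_desc = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            all_desc.append(desc)
    return all_desc

def build_vocabulary(descriptor_list, k=200):
    """Cluster all descriptors into k visual words."""
    stacked = np.vstack(descriptor_list).astype(np.float32)
    return KMeans(n_clusters=k, random_state=0).fit(stacked)

def image_histogram(desc, vocab, k=200):
    """Map one image's descriptors to a normalized visual-word histogram."""
    words = vocab.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / max(hist.sum(), 1)

# train_paths / train_labels are placeholders for the hand-built dataset:
# descs = extract_descriptors(train_paths)
# vocab = build_vocabulary(descs)
# X = np.array([image_histogram(d, vocab) for d in descs])
# clf = SVC(kernel="linear").fit(X, train_labels)
```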


Gesture ◽  
2014 ◽  
Vol 14 (3) ◽  
pp. 263-296 ◽  
Author(s):  
Luke Fleming

With the exception of Plains Indian Sign Language and Pacific Northwest sawmill sign languages, highly developed alternate sign languages (sign languages typically employed by and for the hearing) share not only common structural linguistic features, but their use is also characterized by convergent ideological commitments concerning communicative medium and linguistic modality. Though both modalities encode comparable denotational content, speaker-signers tend to understand manual-visual sign as a pragmatically appropriate substitute for oral-aural speech. This paper suggests that two understudied clusters of alternate sign languages, Armenian and Cape York Peninsula sign languages, offer a general model for the development of alternate sign languages, one in which the gesture-to-sign continuum is dialectically linked to hypertrophied forms of interactional avoidance up-to-and-including complete silence in the co-presence of affinal relations. These cases illustrate that the pragmatic appropriateness of sign over speech relies upon local semiotic ideologies which tend to conceptualize the manual-visual linguistic modality on analogy to the gestural communication employed in interactional avoidance, and thus as not counting as true language.


In our society, it is very difficult for hearing-impaired and speech-impaired people to communicate with ordinary people. They communicate using sign languages, which use visually transmitted sign patterns, generally including hand gestures. Because sign languages are difficult to learn and are not universal, there is a communication barrier between hearing-impaired and ordinary people. To break this barrier, a system is required that can convert sign language to voice and vice versa in real time. Here, we propose a real-time two-way system for communication between hearing-impaired and normal people, which converts Indian Sign Language (ISL) letters into the equivalent alphabet letters and vice versa. In the proposed system, images of ISL hand gestures are captured using a camera. Image preprocessing is then performed so that these images are ready for feature extraction, using a novel approach based on the Canny edge detection algorithm. Once the necessary details are extracted from the image, they are matched against the dataset, which is classified using a convolutional neural network, and the corresponding text is generated. This text is then converted into voice. Similarly, using a microphone, the voice input of an ordinary person is captured and converted into text. This text is then matched with the dataset and the corresponding sign is generated. This system reduces the communication gap between hearing-impaired and ordinary people. Our method provides 98% accuracy for the 35 alphanumeric gestures of ISL.
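
The preprocessing and classification steps can be sketched as follows. The Canny thresholds, the 64x64 input size and the small CNN architecture are illustrative assumptions, not the system's exact configuration.

```python
# Sketch of the steps described above: Canny edge detection on a captured
# frame, followed by a small CNN classifier over the edge images.
# Layer sizes, input resolution and the 35-class output are assumptions.
import cv2
import numpy as np
from tensorflow.keras import layers, models

def preprocess(frame, size=(64, 64)):
    """Grayscale, resize and extract Canny edges from a camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    edges = cv2.Canny(gray, 100, 200)          # edge map used as the feature image
    return edges.astype("float32")[..., None] / 255.0

def build_cnn(num_classes=35):
    """Small CNN over edge images; one output per alphanumeric gesture."""
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# model = build_cnn()
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
```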


Author(s):  
Mikhail G. Grif ◽  
◽  
R. Elakkiya ◽  
Alexey L. Prikhodko ◽  
Maxim А. Bakaev ◽  
...  

In this paper, we consider the recognition of sign languages (SL) with a particular focus on Russian and Indian SLs. The proposed recognition system includes five components: configuration, orientation, localization, movement and non-manual markers. The analysis uses methods for recognizing individual gestures and continuous sign speech in Indian and Russian sign languages (RSL). To recognize individual gestures, the RSL Dataset was developed, which includes more than 35,000 files for over 1000 signs. Each sign was performed with 5 repetitions by at least 5 deaf native speakers of Russian Sign Language from Siberia. To isolate epenthesis in continuous RSL, 312 sentences with 5 repetitions were selected and recorded on video. Five types of movement were distinguished: "No gesture", "There is a gesture", "Initial movement", "Transitional movement" and "Final movement". The markup of sentences for highlighting epenthesis was carried out on the Supervisely.ly platform. A recurrent network architecture (LSTM) was built and implemented using the TensorFlow Keras machine learning library. The accuracy of correct recognition of epenthesis was 95%. Work on a similar dataset for the recognition of both individual gestures and continuous Indian Sign Language (ISL) is continuing. To recognize hand gestures, the MediaPipe Holistic library module was used. It contains a group of trained neural network algorithms that produce the coordinates of the key points of a person's body, palms and face in the image. An accuracy of 85% was achieved on the verification data. In the future, it will be necessary to significantly increase the amount of labeled data. To recognize non-manual components, a number of rules have been developed for certain facial movements. These rules cover positions of the eyes, eyelids, mouth, tongue, and head tilt.
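
A rough sketch of the hand-gesture pipeline outlined above: MediaPipe Holistic supplies per-frame keypoints, and an LSTM built with the TensorFlow Keras API classifies the resulting keypoint sequences. Sequence length, feature dimensionality and layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: MediaPipe Holistic keypoints per frame -> LSTM sequence classifier.
import numpy as np
import mediapipe as mp
from tensorflow.keras import layers, models

mp_holistic = mp.solutions.holistic

def frame_keypoints(results):
    """Flatten pose and hand landmarks from one MediaPipe Holistic result."""
    def coords(landmarks, n):
        if landmarks is None:
            return np.zeros(n * 3)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()
    return np.concatenate([
        coords(results.pose_landmarks, 33),        # 33 body keypoints
        coords(results.left_hand_landmarks, 21),   # 21 keypoints per hand
        coords(results.right_hand_landmarks, 21),
    ])

def build_lstm(num_classes, seq_len=30, feat_dim=(33 + 21 + 21) * 3):
    """Recurrent classifier over sequences of keypoint vectors."""
    return models.Sequential([
        layers.Input(shape=(seq_len, feat_dim)),
        layers.LSTM(128, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Example usage (rgb_frame is an RGB video frame):
# holistic = mp_holistic.Holistic(static_image_mode=False)
# results = holistic.process(rgb_frame)
# vector = frame_keypoints(results)
```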


Sign languages are visual languages that use hand, facial and body movements as a means of communication. There are over 135 different sign languages around the world, including American Sign Language (ASL), Indian Sign Language (ISL) and British Sign Language (BSL). Sign language is commonly used as the main form of communication for people who are Deaf or hard of hearing, but sign languages also have a lot to offer everyone. In our proposed system, we are creating a web application that contains two modules: the first module accepts information in natural language (input text) and shows the corresponding information as sign language images (GIF format); the second module accepts information in sign language (an input hand gesture for any ASL letter), detects the letter and displays it as text output. The system is built to bridge the communication gap between deaf-mute people and regular people: those who don't know American Sign Language can use it either to learn the sign language or to communicate with someone who knows it. This approach helps users communicate quickly without having to wait for a human interpreter to translate the sign language. The application is developed using the Django and Flask frameworks and includes NLP and a neural network. We are focusing on improving the living standards of hearing-impaired people, as it can be very difficult for them to perform everyday tasks, especially when the people around them don't know sign language. This application can also be used as a teaching tool for relatives and friends of deaf people, as well as for people interested in learning sign language.
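
The two modules could be exposed, for example, as the following Flask routes. The route names, the GIF folder and the predict_letter() helper are hypothetical placeholders; the real application also involves Django and a trained network.

```python
# Minimal sketch of the two modules described above, using Flask only.
from flask import Flask, request, jsonify

app = Flask(__name__)

GIF_DIR = "static/isl_gifs"  # assumed folder of pre-rendered sign GIFs

@app.route("/text-to-sign", methods=["POST"])
def text_to_sign():
    """Module 1: map each input character to a sign-language GIF path."""
    text = request.json.get("text", "").lower()
    gifs = [f"{GIF_DIR}/{ch}.gif" for ch in text if ch.isalnum()]
    return jsonify({"gifs": gifs})

@app.route("/sign-to-text", methods=["POST"])
def sign_to_text():
    """Module 2: classify an uploaded ASL hand-gesture image as a letter."""
    image_bytes = request.files["image"].read()
    letter = predict_letter(image_bytes)   # hypothetical classifier wrapper
    return jsonify({"letter": letter})

def predict_letter(image_bytes):
    # Placeholder for the trained neural-network classifier.
    raise NotImplementedError
```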


With the advent of new technology every year, human beings continue to make clever innovations that benefit not only themselves but also those with some kind of impairment. Regular people communicate by talking to each other, but people who are deaf interact with each other through sign language. Taking this problem into account, we propose a methodology that eases communication by translating speech into sign language. This paper explains a methodology that translates speech into the corresponding Indian Sign Language (ISL). In India, almost 28 different languages are spoken, so language has always been a problem. Thus, we have come up with a project specifically for India, in which a person can communicate with the app in any Indian language they know, and it will convert the input into Indian Sign Language. This is applicable not just to literate but also to illiterate people across India. The idea is to take speech input and translate it to text, which then undergoes text pre-processing using NLP for better analysis and is connected to the HamNoSys data for the generation of signs. Polarity detection is also included, implemented using an SVM for sentiment analysis. Thus, the main objective of this project is to develop a system that captures the whole vocabulary of Indian Sign Language (ISL) and provides access to information and services to mute people in ISL.
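
A hedged sketch of the speech-to-sign pipeline described above, assuming the SpeechRecognition library for transcription and scikit-learn for the SVM polarity classifier; the hamnosys_lookup() helper is a placeholder for the HamNoSys mapping, which the abstract does not detail.

```python
# Sketch: speech -> text -> NLP pre-processing -> SVM polarity -> HamNoSys lookup.
# SpeechRecognition and scikit-learn usage is an assumption for illustration.
import speech_recognition as sr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def speech_to_text(audio_path, language="hi-IN"):
    """Transcribe recorded speech in any supported Indian language."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio, language=language)

def train_polarity_classifier(texts, labels):
    """TF-IDF features plus a linear SVM for sentence polarity."""
    vectorizer = TfidfVectorizer(lowercase=True)
    X = vectorizer.fit_transform(texts)
    clf = LinearSVC().fit(X, labels)
    return vectorizer, clf

def hamnosys_lookup(tokens):
    # Placeholder: map pre-processed tokens to HamNoSys sign notation.
    raise NotImplementedError
```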

