sign language
Recently Published Documents

TOTAL DOCUMENTS: 7815 (FIVE YEARS: 2373)
H-INDEX: 68 (FIVE YEARS: 10)

Cognition ◽  
2022 ◽  
Vol 220 ◽  
pp. 104979
Author(s):  
Gabriela Meade ◽  
Brittany Lee ◽  
Natasja Massa ◽  
Phillip J. Holcomb ◽  
Katherine J. Midgley ◽  
...  

Author(s):  
Poonam Yerpude

Abstract: Communication is imperative for daily life. Hearing people use spoken language, while many people with hearing or speech impairments communicate through sign language, using hand gestures and other parts of the body instead of speaking and listening. Because not all people are familiar with sign language, a language barrier exists, and much research in this field has aimed to remove it. There are two main ways to convert sign language into speech or text and close this gap: sensor-based techniques and image processing. In this paper we examine the image-processing technique, for which we use a Convolutional Neural Network (CNN). We have built a sign detector that recognises the sign numbers 1 to 10; it can easily be extended to recognise other hand gestures, including the alphabet (A-Z) and expressions. The model is based on Indian Sign Language (ISL). Keywords: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Indian Sign Language (ISL), Region of Interest (ROI), Artificial Neural Network (ANN), VGG-16 (CNN vision architecture), SGD (Stochastic Gradient Descent).
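A minimal sketch of the kind of CNN digit-sign classifier the abstract describes, in PyTorch. The layer sizes, input resolution, and class count are illustrative assumptions; the paper's exact architecture is not given here.

```python
# Hypothetical CNN for classifying ISL number signs 1-10 from a grayscale
# region-of-interest (ROI) crop. Layer sizes are illustrative, not the
# paper's actual architecture.
import torch
import torch.nn as nn

class SignDigitCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SignDigitCNN()
logits = model(torch.randn(1, 1, 64, 64))  # one 64x64 grayscale ROI crop
print(logits.shape)  # torch.Size([1, 10])
```

In practice such a model would be trained with SGD on labelled ROI crops, with the predicted class index mapped back to the sign number.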


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 574
Author(s):  
Kanchon Kanti Podder ◽  
Muhammad E. H. Chowdhury ◽  
Anas M. Tahir ◽  
Zaid Bin Mahbub ◽  
Amith Khandakar ◽  
...  

A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people in Bangladesh into the mainstream workforce. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy can vary with skin tone, hand orientation, and background. This research used deep learning models for accurate and reliable recognition of BdSL alphabets and numerals using two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, built to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-working model for BdSL alphabet and numeral interpretation. The CNN model trained on images with backgrounds was found to be more effective than the one trained without. In the segmentation approach, hand detection must become more accurate to boost overall sign-recognition accuracy. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity and 100% specificity, outperforming prior work in the literature on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation, so that hearing- and speech-impaired individuals can benefit from this research.


2022 ◽  
pp. 1-8
Author(s):  
Rosemary Ogada Luchivya ◽  
Tom Mboya Omolo ◽  
Sharon Anyango Onditi

2022 ◽  
Vol 6 ◽  
Author(s):  
Phoebe Tay ◽  
Bee Chin Ng

Singapore, a young nation with a colonial past dating from 1819, has seen drastic changes in its sociolinguistic landscape, which have left indelible marks on Singapore society and the Singapore deaf community. The country has experienced many political and social transitions, from British colonialism to attaining independence in 1965 and thereafter. Since independence, English-based bilingualism has been vigorously promoted as part of nation-building. While the roles of the multiple languages in use in Singapore feature prominently in the discourse on language planning, historical records make no mention of how these policies have impacted the deaf community. The first deaf person documented in archival records is a Chinese deaf immigrant from Shanghai who established the first deaf school in Singapore in 1954, teaching Shanghainese Sign Language (SSL) and Mandarin. Since then, the Singapore deaf community has seen many shifts and transitions in education programming for deaf children, largely influenced by exogenous factors such as trends in deaf education in the United States. A pivotal change with far-reaching impact on the deaf community today is the introduction of Signing Exact English (SEE) in 1976, in keeping with the state's English-based bilingual narrative. The subsequent decision to replace SSL with SEE has had dramatic consequences for current members of the deaf community, resulting in internal divisions and fractiousness with lasting implications for the community's cohesion. This publication traces the origins of Singapore Sign Language (SgSL), giving readers (and future scholars) a road map of key issues and moments in this history. Bi- and multilingualism in Singapore, as well as external forces, are also discussed from a social and historical perspective, along with the interplay of different forms of language ideology.
All the different sign languages and sign systems, as well as the written/spoken languages used in Singapore, interact with, compete with, and influence each other. The paper also explores how both internal factors (the local language ecology) and external factors (international trends and developments in deaf education) shape how members of the deaf community negotiate their deaf identities.


2022 ◽  
pp. 1-24
Author(s):  
Hao Lin ◽  
Yan Gu

Abstract: This paper investigates the relationship between fingers and time representations in naturalistic Chinese Sign Language (CSL). Based on a CSL Corpus (Shanghai Variant, 2016–), we offer a thorough description of finger configurations for time expressions from 63 deaf signers, covering three main types: digital, numeral incorporation, and points-to-fingers. The former two were further divided into vertical and horizontal fingers according to the orientation of the fingertips. The results showed interconnections between finger representations, numbers, ordering, and time in CSL. Vertical fingers were mainly used to quantify time units, whereas horizontal fingers were mostly used for sequencing or ordering events, and their forms could be influenced by Chinese number characters and the vertical writing direction. Furthermore, points-to-fingers (e.g., pointing to the thumb, index finger, or little finger) formed temporal connectives in CSL and could be used to sequence parts of a conversation. Additionally, CSL adopted similar linguistic forms for sequential time and adverbs of reason (e.g., cause and effect: events that happened earlier and events that happen later); such a cause-and-effect relationship is a special type of temporal sequence. In conclusion, fingers are essential for time representation in CSL, and their forms are biologically and culturally shaped.


Author(s):  
Dr. Pooja M R ◽  
◽  
Meghana M ◽  
Harshith Bhaskar ◽  
Anusha Hulatti ◽  
...  

Many people live with disabilities such as deafness, muteness, or blindness, and they face many challenges and difficulties when trying to interact and communicate with others. This paper presents a new technique that provides a virtual solution without using any sensors. Histogram of Oriented Gradients (HOG) features combined with an Artificial Neural Network (ANN) have been implemented. The user's web camera captures input and the system processes images of the different gestures; the algorithm recognises each image and identifies the corresponding voice output. The paper describes two-way communication between impaired and non-impaired people, meaning the proposed approach can convert sign language to both text and voice.
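The HOG feature-extraction step described above can be sketched with a simplified, NumPy-only descriptor: per-cell gradient-orientation histograms, without the block normalisation of full HOG. Cell size and bin count are illustrative assumptions; the resulting feature vector would then be fed to an ANN classifier.

```python
# Simplified HOG descriptor (illustrative only; full HOG adds block
# normalisation). Each 8x8 cell contributes a 9-bin orientation histogram
# weighted by gradient magnitude.
import numpy as np

def simple_hog(img, cell=8, bins=9):
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # central differences, y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

feat = simple_hog(np.random.rand(64, 64))
print(feat.shape)  # (576,) -> 8x8 cells * 9 bins
```

A classifier such as scikit-learn's `MLPClassifier` (one common ANN choice, not necessarily the one used in the paper) could then be trained on these vectors, one per gesture image.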



