Head Tracking and Hand Segmentation during Hand over Face Occlusion in Sign Language

Author(s):  
Matilde Gonzalez ◽  
Christophe Collet ◽  
Rémi Dubot
2020 ◽  
Vol 17 (4) ◽  
pp. 1764-1769
Author(s):  
S. Gobhinath ◽  
T. Vignesh ◽  
R. Pavankumar ◽  
R. Kishore ◽  
K. S. Koushik

This paper presents an overview of several segmentation techniques for hand gesture recognition. Hand gesture recognition has evolved tremendously in recent years because of its ability to support interaction with machines. Efforts have been made to incorporate human gestures into modern technologies such as touchscreen interaction, virtual reality gaming, and sign language prediction. This research focuses on hand gesture recognition for sign language interpretation as a human-computer interaction application. Sign language transmits sign patterns that convey meaning through hand shapes, orientation, and movements, allowing signers to express their thoughts fluently to others; it is normally used by people who cannot speak or hear. Automatic sign language recognition requires robust and accurate techniques for identifying hand signs, or a sequence of produced gestures, in order to interpret their correct meaning. A hand segmentation algorithm performs segmentation using different hand detection schemes together with the required morphological processing. Many methods can be used to obtain the desired results, each with its own advantages.
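The abstract does not name a specific scheme, but a common instance of the pipeline it describes (pixel-wise hand detection followed by morphological processing) is skin-color thresholding plus a binary opening. A minimal sketch, assuming RGB input and using one widely cited per-pixel skin rule; the thresholds and the 3 × 3 structuring element are illustrative, not taken from the paper:

```python
import numpy as np

def skin_mask(rgb):
    """Per-pixel RGB skin rule (one of many hand-detection schemes)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r - np.minimum(g, b) > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

def erode(mask):
    """3x3 binary erosion via shifted copies (no external image library)."""
    h, w = mask.shape
    m = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation, the dual of erode()."""
    h, w = mask.shape
    m = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def segment_hand(rgb):
    """Threshold skin pixels, then morphological opening to drop speckle noise."""
    return dilate(erode(skin_mask(rgb)))
```

The opening (erosion followed by dilation) removes isolated false-positive skin pixels while roughly preserving the shape of the large hand region.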


2017 ◽  
Vol 26 (3) ◽  
pp. 471-481 ◽  
Author(s):  
Ananya Choudhury ◽  
Anjan Kumar Talukdar ◽  
Manas Kamal Bhuyan ◽  
Kandarpa Kumar Sarma

Abstract Automatic sign language recognition (SLR) is an active area of research, as it is meant to serve as a substitute for sign language interpreters. In this paper, we present the design of a continuous SLR system that can extract the meaningful signs and subsequently recognize them. Here, we use the height of the hand trajectory as a salient feature for separating the meaningful signs from the movement epenthesis patterns. Further, we incorporate a unique set of spatial and temporal features for efficient recognition of the signs encapsulated within the continuous sequence. The implementation of an efficient hand segmentation and hand tracking technique makes our system robust to complex backgrounds as well as backgrounds with multiple signers. Experiments have established that our proposed system can identify signs from a continuous sign stream with a 92.8% spotting rate.
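The trajectory-height idea above can be sketched as simple run detection: frames where the tracked hand stays above a height threshold are kept as candidate signs, and short runs are discarded as movement epenthesis. This is a minimal illustration, assuming per-frame hand heights are already available from the tracker; the threshold and minimum run length are illustrative, not the paper's values:

```python
def spot_signs(heights, threshold, min_len=3):
    """Return (start, end) frame ranges where the tracked hand height
    stays at or above `threshold`; runs shorter than `min_len` frames
    are discarded as movement epenthesis (transitions between signs)."""
    runs, start = [], None
    for i, h in enumerate(heights):
        if h >= threshold and start is None:
            start = i                      # a candidate sign begins
        elif h < threshold and start is not None:
            if i - start >= min_len:       # long enough to be meaningful
                runs.append((start, i))
            start = None
    if start is not None and len(heights) - start >= min_len:
        runs.append((start, len(heights))) # run extends to the last frame
    return runs
```

The spotted ranges would then be passed to the recognizer, which uses the spatial and temporal features mentioned in the abstract.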


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 262
Author(s):  
Thongpan Pariwat ◽  
Pusadee Seresangtakul

Sign language is a type of language for the hearing impaired that people in the general public commonly do not understand. A sign language recognition system therefore represents an intermediary between the two sides. As a communication tool, a multi-stroke Thai finger-spelling sign language (TFSL) recognition system featuring deep learning was developed in this study. This research uses a vision-based technique on a complex background, with semantic segmentation performed with dilated convolutions for hand segmentation, hand strokes separated using optical flow, and feature learning and classification performed with a convolutional neural network (CNN). We then compared five CNN structures that define the formats. The first format set the number of filters to 64 and the filter size to 3 × 3 with 7 layers; the second format used 128 filters, each 3 × 3 in size, with 7 layers; the third format used filter counts in ascending order across 7 layers, all with an equal 3 × 3 filter size; the fourth format used filter counts in ascending order and small filter sizes with 7 layers; the final format was a structure based on AlexNet. The resulting average accuracies were 88.83%, 87.97%, 89.91%, 90.43%, and 92.03%, respectively. We implemented the CNN structure based on AlexNet to create models for the multi-stroke TFSL recognition system. The experiment was performed using isolated videos of 42 Thai alphabet letters, which are divided into three categories consisting of one stroke, two strokes, and three strokes. The results showed an 88.00% average accuracy for one stroke, 85.42% for two strokes, and 75.00% for three strokes.
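The five formats above differ mainly in how filter counts grow across the 7 convolutional layers, which directly determines model size. A small worked computation of convolutional parameter counts makes the comparison concrete; the 3-channel RGB input and the ascending filter counts used for the third format are assumptions for illustration, since the abstract does not list exact values:

```python
def conv_params(in_ch, out_ch, k=3):
    """Weights plus biases for one k x k convolution layer."""
    return k * k * in_ch * out_ch + out_ch

def total_conv_params(channels, k=3):
    """channels: [input_channels, layer1_filters, layer2_filters, ...]"""
    return sum(conv_params(c_in, c_out, k)
               for c_in, c_out in zip(channels, channels[1:]))

# First format: 7 layers of 64 filters, 3x3, on an RGB input (assumed).
fmt1 = total_conv_params([3] + [64] * 7)

# Third format: ascending filter counts (illustrative values only).
fmt3 = total_conv_params([3, 16, 32, 64, 128, 256, 512, 1024])
```

Comparing `fmt1` and `fmt3` shows how ascending-filter designs concentrate parameters in the deeper layers even when all filters stay 3 × 3.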


Author(s):  
Love Jhoye M. Raboy ◽  
Jan Rey D. Canlas ◽  
Christy Ann R. Renejane ◽  
Carren J. Mojica ◽  
Annajane S. Bandiala

American Sign Language is used by the deaf-mute community to express their thoughts. Many applications have been developed to assist deaf-mute persons, but most of them run on a desktop computer. This led to the creation of a mobile application that recognizes hand poses as text. American Sign Language Alphabet Translator Android Application: Hand Shapes into Text is a mobile application that translates the American Sign Language alphabet into text. The application translates the ASL alphabet for a person who cannot understand the language of a deaf-mute person. It uses hand segmentation and feature extraction to obtain the information needed to extract the hand, and a data set used to match the hand pose and display the equivalent letter. The study was successful in recognizing most of the alphabet hand shapes, as long as they were well detected.
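The matching step described above (comparing extracted hand features against a data set to display the equivalent letter) can be sketched as a nearest-neighbor lookup. The feature dimensions and reference values below are purely illustrative, since the paper's actual feature set is not specified in the abstract:

```python
import math

def nearest_letter(features, dataset):
    """Match an extracted hand-pose feature vector against a reference
    data set and return the letter of the closest entry (Euclidean)."""
    best, best_d = None, math.inf
    for letter, ref in dataset.items():
        d = math.dist(features, ref)
        if d < best_d:
            best, best_d = letter, d
    return best

# Illustrative 3-dimensional features (e.g. aspect ratio, fill ratio,
# extended-finger count) -- assumptions, not the paper's features.
reference = {"A": (0.9, 0.8, 0.0), "B": (0.6, 0.7, 4.0), "V": (0.7, 0.5, 2.0)}
```

In the deployed application the reference set would hold one or more feature vectors per ASL letter, built from the training data set mentioned in the abstract.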


2012 ◽  
Vol 21 (1) ◽  
pp. 11-16 ◽  
Author(s):  
Susan Fager ◽  
Tom Jakobs ◽  
David Beukelman ◽  
Tricia Ternus ◽  
Haylee Schley

Abstract This article summarizes the design and evaluation of a new augmentative and alternative communication (AAC) interface strategy for people with complex communication needs and severe physical limitations. This strategy combines typing, gesture recognition, and word prediction to input text into AAC software using touchscreen or head movement tracking access methods. Eight individuals with movement limitations due to spinal cord injury, amyotrophic lateral sclerosis, polio, and Guillain-Barré syndrome participated in the evaluation of the prototype technology using a head-tracking device. Fourteen typical individuals participated in the evaluation of the prototype using a touchscreen.
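One of the three input components the interface combines, word prediction, can be sketched as frequency-ranked prefix completion over a vocabulary. This is a minimal illustration of the general technique, not the article's implementation; the vocabulary and frequencies are invented:

```python
def predict(prefix, vocabulary, n=3):
    """Return up to n word-prediction candidates for a typed prefix,
    most frequent first (ties broken alphabetically)."""
    matches = [(w, f) for w, f in vocabulary.items()
               if w.startswith(prefix.lower())]
    matches.sort(key=lambda wf: (-wf[1], wf[0]))
    return [w for w, _ in matches[:n]]

# Illustrative word-frequency table (counts are made up).
vocab = {"hello": 50, "help": 80, "helmet": 5, "water": 40}
```

In an AAC setting each predicted candidate saves keystrokes, which matters most for users accessing the software through slow methods such as head-movement tracking.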


2017 ◽  
Vol 2 (12) ◽  
pp. 81-88
Author(s):  
Sandy K. Bowen ◽  
Silvia M. Correa-Torres

America's population is more diverse than ever before. The prevalence of students who are culturally and/or linguistically diverse (CLD) has been steadily increasing over the past decade. The changes in America's demographics require teachers who provide services to students with deafblindness to have an increased awareness of different cultures and diversity in today's classrooms, particularly regarding communication choices. Children who are deafblind may use spoken language with appropriate amplification, sign language or modified sign language, and/or some form of augmentative and alternative communication (AAC).

