SOM-Based Dynamic Image Segmentation for Sign Language Training Simulator

Author(s):  
Oles Hodych ◽  
Kostiantyn Hushchyn ◽  
Yuri Shcherbyna ◽  
Iouri Nikolski ◽  
Volodymyr Pasichnyk
10.5772/14281 ◽  
2011

Author(s):  
Kenichi Fujimoto ◽  
Mio Kobayashi ◽  
Tetsuya Yoshinaga

2008 ◽  
Vol 44 (12) ◽  
pp. 727
Author(s):  
K. Fujimoto ◽  
M. Musashi ◽  
T. Yoshinaga

Author(s):  
Victoria Adewale ◽  
Adejoke Olamiti

Introduction: Communication with hearing-impaired (deaf/mute) people is a great challenge in our society today. This can be attributed to the fact that their means of communication (sign language, or hand gestures at a local level) requires an interpreter at every instance. Converting sign images to text and to speech can greatly benefit both hearing and hearing-impaired people in everyday interaction. To achieve this, this research aimed at converting American Sign Language (ASL) images to text and speech.

Methodology: Image segmentation and feature detection techniques played a crucial role in implementing this system. We formulate the interaction between image segmentation and object recognition in the framework of the FAST and SURF algorithms. The system goes through several phases: data capture using a Kinect sensor, image segmentation, feature detection and extraction from the region of interest (ROI), supervised and unsupervised classification of images with the K-Nearest Neighbour (KNN) algorithm, and text-to-speech (TTS) conversion. Combining FAST and SURF with KNN (k = 10) also showed that unsupervised classification could determine the best-matched sign from the existing database; the best match was then converted to text and to speech.

Result: The introduced system achieved 78% accuracy in unsupervised feature learning.

Conclusion: The success of this work can be attributed to the effective classification that improved the unsupervised feature learning of different images. Pre-determining the ROI of each image using SURF and FAST demonstrated the ability of the proposed algorithm to limit image modelling to the relevant region within the image.
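The matching stage described above lends itself to a short illustration. The sketch below, in Python with OpenCV, shows FAST keypoint detection on a segmented grayscale ROI, descriptor extraction, and k-nearest-neighbour matching against a database of known signs with k = 10 as the abstract mentions. The function names, FAST threshold, and database layout are assumptions for illustration, not the authors' implementation; in particular, since SURF is patented and only available through opencv-contrib, ORB stands in as the descriptor here.

```python
import cv2
import numpy as np

def extract_features(roi_gray):
    """Detect FAST keypoints in a grayscale ROI and compute descriptors."""
    fast = cv2.FastFeatureDetector_create(threshold=25)  # threshold assumed
    keypoints = fast.detect(roi_gray, None)
    # ORB stands in for SURF; with opencv-contrib installed,
    # cv2.xfeatures2d.SURF_create() could be substituted.
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.compute(roi_gray, keypoints)
    return keypoints, descriptors

def best_match_label(query_desc, database):
    """Pick the sign label whose stored descriptors best match the query.

    `database` is assumed to map sign labels to precomputed descriptor
    arrays (one array per training sign)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    scores = {}
    for label, train_desc in database.items():
        # k-nearest-neighbour matching with k = 10, as in the abstract.
        matches = matcher.knnMatch(query_desc, train_desc, k=10)
        # Score each candidate by the mean distance of the closest match.
        dists = [m[0].distance for m in matches if m]
        scores[label] = np.mean(dists) if dists else float("inf")
    return min(scores, key=scores.get)
```

The returned label could then be handed to a text-to-speech engine (pyttsx3 is one common choice) to produce the spoken output, completing the image-to-speech pipeline the abstract describes.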


2016 ◽  
Vol 15 (7) ◽  
pp. 6950-6956
Author(s):  
Ishita Vishnoi ◽  
Nikunj Khetan ◽  
Sreedevi Indu

Hand gestures are a natural means of communication for human beings, and even more so for hearing- and speech-impaired people who communicate through sign language. Unfortunately, most people are not familiar with sign language, and an interpreter is required to translate dialogues. Hence, there is a need for a low-cost, easily implementable and efficient means of recognizing sign language gestures that eliminates the interpreter and facilitates easier communication. The proposed work achieves satisfactory recognition accuracy with a built-in laptop webcam by combining three skin color models (HSV, RGB and YCbCr) with background subtraction to eliminate noise from low-quality webcam images. The system recognizes sign language in real time, helping the hearing- and speech-impaired without requiring much computational power or any additional device, since it can be implemented on any laptop with a webcam.
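As a concrete illustration of the segmentation step, the sketch below combines skin color rules in the three color spaces named above (HSV, RGB, YCbCr) with OpenCV's MOG2 background subtractor. The numeric thresholds are common literature values assumed here for illustration; the authors' calibrated ranges are not given in the abstract.

```python
import cv2
import numpy as np

# MOG2 background subtractor; early frames are learned as background.
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def skin_mask(frame_bgr):
    """Return a binary mask of likely skin pixels in a BGR webcam frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    b, g, r = cv2.split(frame_bgr.astype(np.int16))

    # HSV rule: skin tends to have a low hue and moderate saturation.
    m_hsv = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    # YCbCr rule: a chrominance box that typically encloses skin tones.
    m_ycrcb = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    # RGB rule: red dominates green and blue with sufficient spread.
    m_rgb = (((r > 95) & (g > 40) & (b > 20) &
              (r > g) & (r > b) & ((r - g) > 15)) * 255).astype(np.uint8)

    # Keep only pixels that all three color models agree on and that the
    # background subtractor marks as foreground (the moving hand).
    m_fg = backsub.apply(frame_bgr)
    mask = cv2.bitwise_and(cv2.bitwise_and(m_hsv, m_ycrcb), m_rgb)
    mask = cv2.bitwise_and(mask, m_fg)
    # A small morphological opening removes speckle noise.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```

ANDing the three masks makes the skin decision conservative (a pixel must satisfy all three models), which is one plausible reading of the combination the abstract describes; the foreground mask then suppresses skin-like colors in the static background.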

