Investigation of Sign Language Recognition Performance by Integration of Multiple Feature Elements and Classifiers

Authors: Tatsunori Ozawa, Yuna Okayasu, Maitai Dahlan, Hiromitsu Nishimura, Hiroshi Tanaka

Authors: Wen Gao, Jiyong Ma, Jiangqin Wu, Chunli Wang

In this paper, a system designed to help deaf people communicate with others is presented, along with several new ideas in its design and implementation. An algorithm based on geometrical analysis is presented for extracting features that are invariant to signer position. An ANN-DP combined approach is employed to segment subwords automatically from the stream of sign signals. To tackle the movement-epenthesis problem, a DP-based method is used to obtain context-dependent models. Techniques for system implementation are also given, including fast matching, frame prediction, and search algorithms. The implemented system is able to recognize continuous, large-vocabulary Chinese Sign Language. Experiments show that the proposed techniques are efficient in terms of both recognition speed and recognition performance.
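
The abstract does not spell out the geometrical analysis; the sketch below shows one common way to make hand-trajectory features invariant to signer position, by centering coordinates on a body reference point and scaling by shoulder width. The function name and inputs are hypothetical, not taken from the paper.

import numpy as np

def position_invariant_features(hand_xyz, body_origin, shoulder_width):
    # hand_xyz: (T, 3) hand positions over T frames
    # body_origin: (3,) reference point on the signer (e.g., the sternum)
    # shoulder_width: scalar used to cancel distance-to-camera scaling
    centered = hand_xyz - body_origin      # remove dependence on where the signer stands
    scaled = centered / shoulder_width     # remove dependence on distance and body size
    return scaled.reshape(-1)              # flatten into one feature vector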


2021, Vol. 14 (4), pp. 1-33
Authors: Saad Hassan, Oliver Alonzo, Abraham Glasser, Matt Huenerfauth

Advances in sign-language recognition technology have enabled researchers to investigate methods that assist users in searching for an unfamiliar ASL sign. Users generate a query by submitting a video of themselves performing the sign they believe they encountered and obtain a list of possible matches. However, developers of such technology disagree on how to report system performance, and prior research has not examined the relationship between the performance of search technology and users' subjective judgements of it. We conducted three studies using a Wizard-of-Oz prototype of a webcam-based ASL dictionary search system to investigate this relationship. We found that, in addition to the position of the desired word in the results list, its placement above or below the fold and the similarity of the other words in the list affected users' judgements of the system. We also found that metrics incorporating the precision of the overall list correlated better with users' judgements than the metrics currently reported in prior ASL dictionary research.
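
The abstract contrasts rank-based metrics with metrics that score the whole returned list; a minimal sketch of both kinds follows, with hypothetical helper names (the paper's exact metric definitions are not reproduced in the abstract).

def reciprocal_rank(results, target):
    # 1 / (1-based position of the desired sign in the list); 0.0 if absent
    return 1.0 / (results.index(target) + 1) if target in results else 0.0

def list_precision(results, relevant):
    # fraction of the returned list judged relevant to the query; metrics of
    # this kind, which score the overall list, tracked user judgements more
    # closely in the studies than rank-only metrics
    return sum(r in relevant for r in results) / len(results) if results else 0.0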


2017, Vol. 26 (2), pp. 371-385
Authors: H.S. Nagendraswamy, B.M. Chethana Kumara

Abstract: Recognizing the signs made by deaf people and producing an equivalent textual description, so that hearing people can communicate with them, is an essential and challenging task for the pattern recognition and image processing research community. Many researchers have attempted to standardize and propose sign language recognition systems. To the best of our knowledge, most reported work has concentrated at the fingerspelling or word level; less work has been reported at the sentence level. Because sign languages are highly abstract, fingerspelling- or word-level interpretation of signs is tedious and cumbersome. Although existing research in sign language recognition is active and extensive, accurate recognition and interpretation of signs at the sentence level remains a challenge. In this paper, we address this problem by proposing an approach that exploits a texture description technique and concepts from symbolic data analysis to characterize and effectively represent a sign, taking into account the intra-class variations due to different signers, or the same signer at different instances of time. To study the efficacy of the proposed approach, extensive experiments were carried out on a considerably large database of Indian sign language created by us. The experimental results demonstrate that the proposed method achieves good recognition performance in terms of F-measure.
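
The abstract names symbolic data analysis as the tool for absorbing intra-class variation; one standard symbolic formulation (assumed here, not quoted from the paper) represents each sign by per-feature intervals spanning the variation across signers and repetitions.

import numpy as np

def interval_symbolic_representation(feature_vectors):
    # feature_vectors: (N, D) texture descriptors for one sign, collected
    # from different signers or the same signer at different times
    fv = np.asarray(feature_vectors, dtype=float)
    lower, upper = fv.min(axis=0), fv.max(axis=0)
    return np.stack([lower, upper], axis=1)   # (D, 2): one [lower, upper] interval per feature

A test sample can then be matched to the sign whose intervals contain, or lie closest to, its feature values.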


2019, Vol. 7 (2), pp. 43
Authors: Malhotra Pooja, K. Maniar Chirag, V. Sankpal Nikhil, R. Thakkar Hardik, ...

2016, Vol. 3 (3), pp. 13
Authors: Verma Versha, Patil Sandeep B.

2020, Vol. 14
Authors: Vasu Mehra, Dhiraj Pandey, Aayush Rastogi, Aditya Singh, Harsh Preet Singh

Background: People with hearing and speech disabilities have few ways of communicating with others; one of them is sign language. Objective: A sign language recognition system is therefore essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. Speech-to-text and text-to-speech components are also included to further assist users. To get the best out of the human-computer relationship, the proposed solution combines several technologies with Machine Learning based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than gesture recognition techniques such as background elimination and conversion to HSV because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that help deaf and mute people adjust to everyday life. Instead of focusing on a standalone technology, this work combines several. The proposed Sign Recognition System is based on feature extraction and classification; the trained model helps identify different gestures.
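
The abstract names TensorFlow and Keras but not the architecture; the following is a minimal sketch of a frame-level sign classifier of that kind, with illustrative layer sizes and input shape (not the authors' actual model).

import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(num_classes, input_shape=(64, 64, 3)):
    # small convolutional stack followed by a dense classifier head
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one unit per sign
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model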


Authors: Sukhendra Singh, G. N. Rathna, Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect produces different depth images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian sign language; our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compared the performance of various machine learning models and found that a CNN on depth images gave the most accurate performance. All these results were obtained on a PYNQ-Z2 board.
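
A minimal sketch of the stated protocol: an 80/20 split of the 46,339 depth images over 36 classes, and a CNN over single-channel depth input (array names and model layout are assumptions, not the authors' code).

import numpy as np
from tensorflow.keras import layers, models

def train_depth_cnn(depth_images, labels, num_classes=36, seed=0):
    # depth_images: (46339, H, W, 1) float32; labels: (46339,) ints in [0, 35]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(depth_images))
    cut = int(0.8 * len(depth_images))              # 80% train / 20% test
    tr, te = idx[:cut], idx[cut:]

    model = models.Sequential([
        layers.Input(shape=depth_images.shape[1:]),  # single depth channel
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(depth_images[tr], labels[tr], epochs=5, batch_size=64)
    return model.evaluate(depth_images[te], labels[te])  # [loss, accuracy] on the held-out 20%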

