Irish Sign Language Recognition Using Principal Component Analysis and Convolutional Neural Networks

Author(s): Marlon Oliveira, Houssem Chatbri, Suzanne Little, Ylva Ferstl, Noel E. O'Connor, ...
Author(s): Astri Novianty, Fairuz Azmi

The World Health Organization (WHO) estimates that over five percent of the world's population is hearing-impaired. One of the communication barriers between deaf or speech-impaired people and hearing people is that hearing people rarely know or understand the sign language used in everyday communication. To address this problem, we built a sign language recognition system for Indonesian sign language. The sign language used for Bahasa Indonesia, called Bisindo, is distinct from other sign languages. Our work applies two image processing steps for pre-processing, namely grayscale conversion and histogram equalization. Principal component analysis (PCA) is then employed for dimensionality reduction and feature extraction, and a support vector machine (SVM) is applied as the classifier. Results indicate that histogram equalization significantly improves recognition accuracy. Experiments with different random seeds for the test data confirm that our method achieves 76.8% accuracy, leaving room for more robust methods to further improve sign language recognition.
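The following is a minimal sketch of the kind of pipeline the abstract describes (grayscale conversion and histogram equalization for pre-processing, PCA features, SVM classifier), not the authors' code. The image size, PCA component count, train/test split, and SVM kernel are illustrative assumptions; dataset loading is out of scope.

```python
# Sketch of a Bisindo-style recognition pipeline, assuming OpenCV and scikit-learn.
# Pre-processing: grayscale + histogram equalization; features: PCA; classifier: SVM.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def preprocess(img_bgr, size=(64, 64)):
    """Grayscale conversion followed by histogram equalization, flattened to a vector."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)
    return cv2.resize(equalized, size).flatten().astype(np.float32) / 255.0

def train_and_evaluate(images, labels, n_components=100):
    """images: list of BGR arrays; labels: sign classes (loading them is assumed done elsewhere)."""
    X = np.stack([preprocess(img) for img in images])
    y = np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)
    pca = PCA(n_components=n_components)   # dimensionality reduction / feature extraction
    clf = SVC(kernel="rbf")                # classifier
    clf.fit(pca.fit_transform(X_tr), y_tr)
    return accuracy_score(y_te, clf.predict(pca.transform(X_te)))
```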


2021, Vol. 5, No. 2 (113), pp. 44-54
Author(s): Chingiz Kenshimov, Samat Mukhanov, Timur Merembayev, Didar Yedilkhan

For many people with hearing disabilities, sign language is the most important means of communication, and researchers around the world are therefore proposing intelligent hand gesture recognition systems. Such systems are aimed not only at those who wish to understand a sign language, but also at those who wish to communicate through gesture recognition software. In this paper, a new benchmark dataset for Kazakh fingerspelling, suitable for training deep neural networks, is introduced. The dataset contains more than 10,122 gesture samples covering 42 letters of the alphabet. The alphabet has its own peculiarities, as some characters are shown in motion, which may affect recognition. The paper describes the analysis, comparison and testing of several convolutional neural networks: LeNet, AlexNet, ResNet and EffectiveNet (EfficientNetB7). The EffectiveNet architecture is state-of-the-art (SOTA) and is the newest of the architectures under consideration. On this dataset, the LeNet and EffectiveNet networks outperform the other competing algorithms, and EffectiveNet can also achieve state-of-the-art performance on other hand gesture datasets. The CNN models are evaluated using accuracy and a penalty matrix. During training, LeNet and EffectiveNet showed the best results, with similar and closely tracking accuracy and loss curves. The predictions of EffectiveNet were explained with the SHapley Additive exPlanations (SHAP) framework, which explores the model to detect complex relationships between features in the images. Building on the SHAP analysis may help to further improve the accuracy of the model.
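As a rough illustration of the kind of CNN baseline compared in the paper, the sketch below builds a LeNet-style classifier for 42 fingerspelling classes with Keras. The input resolution, layer sizes, optimizer, and training call are assumptions for illustration, not the authors' configuration, and the benchmark dataset itself is not loaded here.

```python
# Minimal LeNet-style CNN sketch for a 42-class fingerspelling task (assumed 64x64 grayscale input).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 42  # letters of the Kazakh alphabet, per the dataset description

def build_lenet(input_shape=(64, 64, 1), num_classes=NUM_CLASSES):
    """Classic LeNet-like stack: two conv/pool blocks followed by dense layers."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, kernel_size=5, activation="relu"),
        layers.AveragePooling2D(pool_size=2),
        layers.Conv2D(16, kernel_size=5, activation="relu"),
        layers.AveragePooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lenet()
# Training would look like the following, given image/label arrays for the dataset:
# model.fit(train_images, train_labels, epochs=20,
#           validation_data=(val_images, val_labels))
```

A deeper backbone such as EfficientNetB7 could replace this stack via transfer learning; SHAP-based explanation would then be applied to the trained model's predictions.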

