Classification of the Korean Sign Language Alphabet Using an Accelerometer with a Support Vector Machine

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Youngmin Na ◽  
Hyejin Yang ◽  
Jihwan Woo

Recognition and understanding of sign language can aid communication between deaf and hearing people. Recently, research groups have developed sign language recognition algorithms using multiple sensors. However, wearing multiple sensors is impractical in everyday life; without a simpler system, communication would still require a sign language interpreter. In this study, a sign language classification method was developed using a single accelerometer to recognize the Korean sign language alphabet. The accelerometer is worn on the proximal phalanx of the index finger of the dominant hand. Triaxial accelerometer signals were used to segment the sign gesture (i.e., the time period during which a user is performing a sign) and to recognize the 31 Korean sign language letters (a chance level of 3.2%). The vector sum of the accelerometer signals was used to segment the sign gesture with 98.9% segmentation accuracy, which is comparable to that of previous multisensor systems (99.49%). The system classified the Korean sign language alphabet with 92.2% accuracy. This recognition accuracy is higher than that reported in previous work on the same sign language alphabet classification task. The findings demonstrate that a single-sensor accelerometer with simple features can be reliably used for Korean sign language alphabet recognition in everyday life.
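The segmentation step lends itself to a short illustration. Below is a minimal sketch of thresholding the vector sum (magnitude) of a triaxial accelerometer signal to find gesture boundaries; the sampling rate, threshold, and minimum-duration values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

FS = 100          # assumed sampling rate in Hz
THRESHOLD = 1.2   # assumed magnitude threshold in g
MIN_SAMPLES = 20  # assumed minimum gesture length (0.2 s at 100 Hz)

def segment_gestures(ax, ay, az):
    """Return (start, end) sample indices of candidate sign gestures."""
    magnitude = np.sqrt(ax**2 + ay**2 + az**2)  # vector sum of the three axes
    active = magnitude > THRESHOLD              # samples exceeding the threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= MIN_SAMPLES:        # discard spuriously short bursts
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= MIN_SAMPLES:
        segments.append((start, len(active)))
    return segments
```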

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4025
Author(s):  
Zhanjun Hao ◽  
Yu Duan ◽  
Xiaochao Dang ◽  
Yang Liu ◽  
Daiyang Zhang

In recent years, with the development of wireless sensing technology and the widespread adoption of WiFi devices, WiFi-based human perception has become possible, and gesture recognition has become an active topic in human-computer interaction. Sign language, as a form of gesture, is widely used in daily life. An effective sign language recognition system can help people with aphasia or hearing impairment interact with computers and ease their daily lives. To this end, this paper proposes a contactless fine-grained gesture recognition method using Channel State Information (CSI), named Wi-SL. The method uses a commercial WiFi device to map subcarrier-level amplitude and phase-difference information in the wireless signal to sign language actions, without requiring the user to wear any device. An efficient denoising method filters environmental interference, and an effective selection of optimal subcarriers reduces the computational cost of the system. K-means combined with a Bagging algorithm is used to optimize a Support Vector Machine (SVM) classification (KSB) model, enhancing the classification of sign language action data. The algorithms were implemented and evaluated in three different scenarios. The experimental results show that Wi-SL achieves an average gesture recognition accuracy of 95.8%, realizing device-free, non-invasive, high-precision sign language gesture recognition.
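The abstract does not spell out how K-means, Bagging, and the SVM are wired together, so the following sketch makes one plausible assumption: K-means condenses each class's CSI feature vectors into representative prototypes, and a bagged ensemble of SVMs is trained on those prototypes. All names and parameters here are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

def train_ksb(X, y, clusters_per_class=10, n_estimators=10):
    """X: (n_samples, n_features) CSI amplitude/phase features; y: labels."""
    reps, rep_labels = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10).fit(Xc)
        reps.append(km.cluster_centers_)       # representative prototypes per class
        rep_labels.extend([label] * k)
    reps = np.vstack(reps)
    # Bagged ensemble of RBF-kernel SVMs trained on the prototypes
    model = BaggingClassifier(SVC(kernel="rbf", C=10.0),
                              n_estimators=n_estimators)
    return model.fit(reps, np.array(rep_labels))
```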


2020 ◽  
pp. 1-14
Author(s):  
Qiuhong Tian ◽  
Jiaxin Bao ◽  
Huimin Yang ◽  
Yingrou Chen ◽  
Qiaoli Zhuang

BACKGROUND: For a traditional vision-based static sign language recognition (SLR) system, arm segmentation is a major factor restricting the accuracy of SLR. OBJECTIVE: To achieve accurate arm segmentation for different bent arm shapes, we designed a segmentation method for a static SLR system based on image processing combined with morphological reconstruction. METHODS: First, skin segmentation was performed in the YCbCr color space to extract skin-like regions from a complex background. Then, an area operator and the location of the mass center were used to remove invalid skin-like regions and obtain the valid hand-arm region. Subsequently, the transverse distance was calculated to distinguish different bent arm shapes. The proposed segmentation method then extracted the hand region from the different types of hand-arm images. Finally, geometric features of the spatial domain were extracted and the sign language image was identified using a support vector machine (SVM) model. Experiments were conducted to determine the feasibility of the method and to compare its performance with that of neural network and Euclidean distance matching methods. RESULTS: The results demonstrate that the proposed method can effectively segment skin-like regions from complex backgrounds and handle different bent arm shapes, thereby improving the recognition rate of the SLR system.
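As a concrete reference point, here is a minimal sketch of the first step, skin segmentation in the YCbCr color space, using OpenCV. The Cb/Cr threshold bounds are common values from the skin-detection literature, assumed here rather than taken from the paper.

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Return a binary mask of skin-like pixels in a BGR image."""
    # OpenCV calls the YCbCr color space "YCrCb" (note the channel order)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening/closing to suppress small skin-like noise regions
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```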


Author(s):  
Astri Novianty ◽  
Fairuz Azmi

The World Health Organization (WHO) estimates that over five percent of the world's population is hearing-impaired. One of the communication problems that often arises between deaf or speech-impaired people and hearing people is the latter's limited knowledge and understanding of sign language in daily communication. To overcome this problem, we built a sign language recognition system, specifically for the Indonesian language. The sign language system for Bahasa Indonesia, called Bisindo, is distinct from other sign languages. Our work utilizes two image processing algorithms for pre-processing, namely grayscale conversion and histogram equalization. Subsequently, principal component analysis (PCA) is employed for dimensionality reduction and feature extraction. Finally, a support vector machine (SVM) is applied as the classifier. Results indicate that the use of histogram equalization significantly enhances recognition accuracy. Comprehensive experiments applying different random seeds to the test data confirm that our method achieves 76.8% accuracy. Accordingly, there remains room for more robust methods to further improve accuracy in sign language recognition.
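The pipeline described above maps naturally onto a few library calls. Below is a minimal sketch assuming flattened, equalized grayscale images as features; the image size and PCA component count are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def preprocess(bgr_image, size=(64, 64)):
    """Grayscale conversion + histogram equalization, flattened to a vector."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)  # the step credited with the accuracy gain
    return cv2.resize(equalized, size).flatten()

def train(images, labels):
    X = np.array([preprocess(img) for img in images], dtype=np.float32)
    # PCA for dimensionality reduction, then an SVM classifier
    model = make_pipeline(PCA(n_components=50),  # assumed component count
                          SVC(kernel="rbf"))
    return model.fit(X, labels)
```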


2022 ◽  
Author(s):  
Muhammad Shaheer Mirza ◽  
Sheikh Muhammad Munaf ◽  
Shahid Ali ◽  
Fahad Azim ◽  
Saad Jawaid Khan

Abstract In order to perform their daily activities, a person is required to communicate with others. This can be a major obstacle for the deaf population of the world, who communicate using sign languages (SL). Pakistani Sign Language (PSL) is used by more than 250,000 deaf Pakistanis, so developing an SL recognition system would greatly facilitate communication for these people. This study aimed to collect data on static and dynamic PSL alphabets and to develop a vision-based system for their recognition using Bag-of-Words (BoW) and Support Vector Machine (SVM) techniques. A total of 5,120 images for 36 static PSL alphabet signs and 353 videos with 45,224 frames for 3 dynamic PSL alphabet signs were collected from 10 native signers of PSL. The developed system took the collected data as input, resized the data to various scales, and converted the RGB images into grayscale. The resized grayscale images were segmented using a thresholding technique, and features were extracted using Speeded Up Robust Features (SURF). The obtained SURF descriptors were clustered using K-means clustering, and a BoW was obtained by computing the Euclidean distance between the SURF descriptors and the clustered data. The codebooks were divided into training and testing sets using 5-fold cross-validation. The highest overall classification accuracy for static PSL signs was 97.80% at 750×750 image dimensions and 500 Bags. For dynamic PSL signs, a 96.53% accuracy was obtained at 480×270 video resolution and 200 Bags.
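A condensed sketch of the Bag-of-Words step follows. SURF is patented and only available in non-free OpenCV builds, so ORB is used here as a freely available stand-in descriptor; the vocabulary size is illustrative rather than the paper's 500-bag setting.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_descriptors(gray_images):
    """Per-image local descriptors (ORB stand-in for the paper's SURF)."""
    orb = cv2.ORB_create()
    per_image = []
    for img in gray_images:
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32)))
    return per_image

def bow_histograms(per_image, n_words=100):
    """Cluster all descriptors into visual words, then histogram each image."""
    all_desc = np.vstack([d for d in per_image if len(d)]).astype(np.float32)
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    hists = []
    for desc in per_image:
        hist = np.zeros(n_words)
        if len(desc):
            # assign each descriptor to its nearest visual word (Euclidean)
            for w in codebook.predict(desc.astype(np.float32)):
                hist[w] += 1
        hists.append(hist / max(hist.sum(), 1))  # normalized word histogram
    return np.array(hists), codebook
```

The resulting histograms would then be fed to an SVM, e.g. `SVC().fit(hists, labels)`, with 5-fold cross-validation over the codebooks as the paper describes.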


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3554 ◽  
Author(s):  
Teak-Wei Chong ◽  
Boon-Giin Lee

Sign language is intentionally designed to allow deaf and dumb communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common in society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods for partial sign language recognition, whereas this study aimed for full American Sign Language (ASL) recognition, covering 26 letters and 10 digits. Most ASL letters are static (involving no movement), but certain letters are dynamic (requiring specific movements). Thus, this study also aimed to extract features from finger and hand motions to differentiate the static and dynamic gestures. The experimental results revealed that the recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. Meanwhile, the recognition rates for the combination of 26 letters and 10 digits are slightly lower: approximately 72.79% for the SVM and 88.79% for the DNN. The sign language recognition system therefore has great potential for reducing the gap between deaf and dumb communities and others. The proposed prototype could also serve as an interpreter for the deaf and dumb in everyday service settings, such as banks or post offices.
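One way to picture the static-versus-dynamic distinction is sketched below, assuming per-frame 3D fingertip coordinates have already been read from the Leap Motion SDK (not shown). A gesture whose total fingertip displacement stays below a threshold is treated as static; the threshold, feature choices, and classifier settings are all assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.svm import SVC

MOTION_THRESHOLD = 15.0  # assumed total fingertip displacement, in mm

def is_dynamic(frames):
    """frames: (n_frames, n_fingertips, 3) array of 3D fingertip positions."""
    displacement = np.linalg.norm(np.diff(frames, axis=0), axis=2).sum()
    return displacement > MOTION_THRESHOLD

def gesture_features(frames):
    """Concatenate the mean hand pose with the per-axis motion range."""
    mean_pose = frames.mean(axis=0).flatten()
    motion_range = (frames.max(axis=0) - frames.min(axis=0)).flatten()
    return np.concatenate([mean_pose, motion_range])

# e.g. clf = SVC(kernel="rbf").fit([gesture_features(g) for g in gestures], labels)
```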


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5856
Author(s):  
Jungpil Shin ◽  
Akitaka Matsuoka ◽  
Md. Al Mehedi Hasan ◽  
Azmain Yakin Srizon

Sign language is designed to assist the deaf and hard of hearing community to convey messages and connect with society. Sign language recognition has long been an important domain of research. Previously, sensor-based approaches have obtained higher accuracy than vision-based approaches, but owing to the cost-effectiveness of vision-based approaches, research has continued on them as well despite the drop in accuracy. The purpose of this research is to recognize American Sign Language characters using hand images obtained from a web camera. In this work, the MediaPipe Hands algorithm was used to estimate hand joints from RGB webcam images of hands, and two types of features were generated from the estimated joint coordinates for classification: the distances between the joint points, and the angles between vectors and the 3D axes. The classifiers used were a support vector machine (SVM) and a light gradient boosting machine (LightGBM). Three character datasets were used for recognition: the ASL Alphabet dataset, the Massey dataset, and the Finger Spelling A dataset. The accuracies obtained were 99.39% for the Massey dataset, 87.60% for the ASL Alphabet dataset, and 98.45% for the Finger Spelling A dataset. The proposed design for automatic American Sign Language recognition is cost-effective, computationally inexpensive, requires no special sensors or devices, and outperformed previous studies.
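A minimal sketch of the two feature types follows, assuming the 21-landmark hand layout produced by MediaPipe Hands: pairwise distances between joints, and the angles each vector between consecutive landmarks makes with the x, y, and z axes. The landmark pairing and classifier settings are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def joint_features(landmarks):
    """landmarks: (21, 3) array of hand-joint coordinates from one image."""
    # Feature 1: Euclidean distance between every pair of joints
    dists = [np.linalg.norm(landmarks[i] - landmarks[j])
             for i, j in combinations(range(len(landmarks)), 2)]
    # Feature 2: angles between consecutive-landmark vectors and the 3D axes
    # (the unit vector's components are the cosines of those three angles)
    angles = []
    for i in range(len(landmarks) - 1):
        v = landmarks[i + 1] - landmarks[i]
        norm = np.linalg.norm(v) or 1.0
        angles.extend(np.arccos(np.clip(v / norm, -1.0, 1.0)))
    return np.array(dists + angles)

# e.g. clf = SVC(kernel="rbf").fit([joint_features(lm) for lm in samples], labels)
```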

