sign recognition
Recently Published Documents

TOTAL DOCUMENTS: 819 (FIVE YEARS: 268)
H-INDEX: 34 (FIVE YEARS: 5)

Sensors, 2022, Vol 22 (2), pp. 574
Author(s): Kanchon Kanti Podder, Muhammad E. H. Chowdhury, Anas M. Tahir, Zaid Bin Mahbub, Amith Khandakar, ...

A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people in Bangladesh into the mainstream workforce. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy can vary with skin tone, hand orientation, and background. This research used deep learning models for accurate and reliable recognition of BdSL alphabets and numerals using two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, built to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-performing model for BdSL alphabet and numeral interpretation. The CNN model trained on images with background was found to be more effective than the one trained without. In the segmentation approach, hand detection must be more accurate to boost overall sign-recognition accuracy. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity, outperforming prior work on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation so that hearing- and speech-impaired individuals can benefit from it.
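The abstract reports accuracy, precision, F1 score, sensitivity, and specificity. As a minimal illustration (not the authors' code), these per-class metrics all derive from the same binary confusion-matrix counts:

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute common classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}
```

For multi-class sign recognition these would typically be computed one-vs-rest per class and then averaged.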


Author(s): Wei Li, Haiyu Song, Pengjie Wang

Traffic sign recognition (TSR) is a basic technology of Advanced Driving Assistance Systems (ADAS) and intelligent automobiles, and a high-quality feature vector plays a key role in TSR. Feature extraction for TSR has therefore become an active research area in computer vision and intelligent automobiles. Although deep learning features have made a breakthrough in image classification, they are difficult to apply to TSR because of the large training datasets required and the high space-time complexity of model training. Considering the visual characteristics of traffic signs and external factors such as weather, light, and blur in real scenes, an efficient method to extract high-quality image features is proposed. As a result, the lower-dimensional feature can accurately depict the visual characteristics of traffic signs thanks to its strong descriptive and discriminative ability. In addition, benefiting from a simple feature-extraction method and lower time cost, our method is suitable for recognizing traffic signs online in real-world application scenarios. Extensive quantitative experimental results demonstrate the effectiveness and efficiency of our method.
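The abstract does not specify the feature extractor, so as an illustrative sketch only (an assumption, not the authors' method): one classic way to obtain a compact, low-dimensional image descriptor without deep learning is a normalized intensity histogram:

```python
def intensity_histogram(pixels, bins=16):
    """Map a flat list of grayscale pixel values (0-255) to a normalized
    histogram feature vector of length `bins`."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]
```

Real TSR descriptors would add color and edge/shape information, but the shape of the pipeline (image in, fixed-length low-dimensional vector out, then a lightweight classifier) is the same.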


2022, Vol 10 (1), pp. 0-0

Developing a system for sign language recognition is essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able-bodied person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. The proposed system, embedded with gesture-recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech components are introduced to further assist the affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign-recognition models trained using the TensorFlow and Keras libraries. The proposed architecture works better than several gesture-recognition techniques such as background elimination and conversion to HSV.
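The HSV-conversion baseline mentioned above typically means thresholding hue and saturation to isolate skin-colored (hand) pixels before classification. A minimal stdlib sketch of that idea, with illustrative thresholds that are assumptions rather than values from the paper:

```python
import colorsys

def hsv_skin_mask(rgb_pixels, h_max=0.14, s_min=0.2, s_max=0.7):
    """Flag pixels whose hue/saturation fall in a rough skin-tone range.
    `rgb_pixels` is a list of (r, g, b) tuples with components in 0-255;
    the threshold defaults are illustrative, not tuned values."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h <= h_max and s_min <= s <= s_max)
    return mask
```

Such fixed-threshold masks are exactly what makes these baselines fragile under varied lighting and skin tones, which is why the learned TensorFlow/Keras models compare favorably.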


2021, Vol 14 (4), pp. 1-33
Author(s): Saad Hassan, Oliver Alonzo, Abraham Glasser, Matt Huenerfauth

Advances in sign-language recognition technology have enabled researchers to investigate various methods that can assist users in searching for an unfamiliar sign in ASL using sign-recognition technology. Users can generate a query by submitting a video of themselves performing the sign they believe they encountered somewhere and obtain a list of possible matches. However, there is disagreement among developers of such technology on how to report the performance of their systems, and prior research has not examined the relationship between the performance of search technology and users’ subjective judgements for this task. We conducted three studies using a Wizard-of-Oz prototype of a webcam-based ASL dictionary search system to investigate the relationship between the performance of such a system and user judgements. We found that, in addition to the position of the desired word in a list of results, the placement of the desired word above or below the fold and the similarity of the other words in the results list affected users’ judgements of the system. We also found that metrics that incorporate the precision of the overall list correlated better with users’ judgements than did metrics currently reported in prior ASL dictionary research.
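The abstract contrasts metrics based only on the rank of the desired word with metrics that account for the precision of the whole results list. As a hedged illustration (the study's exact metrics are not named here), the two families can be sketched as reciprocal rank versus precision-at-k:

```python
def reciprocal_rank(results, target):
    """1 / (1-based rank of `target` in `results`); 0.0 if absent."""
    return 1 / (results.index(target) + 1) if target in results else 0.0

def precision_at_k(results, relevant, k):
    """Fraction of the top-k results that are in the `relevant` set."""
    top_k = results[:k]
    return sum(1 for r in top_k if r in relevant) / k
```

A list-aware metric like precision-at-k can distinguish two result lists that place the target sign at the same rank but differ in how plausible the surrounding candidates are, which is the effect the user studies observed.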


2021
Author(s): Zhonghua Wei, Heng Gu, Ran Zhang, Jingxuan Peng, Shi Qui

2021, Vol 2021, pp. 1-12
Author(s): Rehman Ullah Khan, Hizbullah Khattak, Woei Sheng Wong, Hussain AlSalman, Mogeeb A. A. Mosleh, ...

The deaf-mute population often feels helpless when they are not understood by others and vice versa. This is a significant humanitarian problem that needs a localised solution. To address it, this study implements a convolutional neural network (CNN) with a Convolutional Block Attention Module (CBAM) to recognise Malaysian Sign Language (MSL) from images. Two experiments were conducted on MSL signs using CBAM-2DResNet (2-Dimensional Residual Network), implementing the “Within Blocks” and “Before Classifier” methods. Metrics such as accuracy, loss, precision, recall, F1-score, confusion matrix, and training time were recorded to evaluate the models’ efficiency. The experimental results showed that the CBAM-ResNet models achieved good performance on MSL sign-recognition tasks, with accuracy rates above 90% with little variation. The “Before Classifier” CBAM-ResNet models are more efficient than the “Within Blocks” CBAM-ResNet models. The best-trained CBAM-2DResNet model was therefore chosen to develop a real-time sign-recognition system that translates between sign language and text, enabling easier communication between deaf-mutes and other people. All experimental results indicated that the “Before Classifier” CBAM-ResNet models are more efficient at recognising MSL and are worth pursuing in future research.
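CBAM's channel-attention step passes average- and max-pooled channel descriptors through a shared two-layer MLP and gates the feature map with a sigmoid. A minimal NumPy sketch of that step (shapes and the reduction ratio are illustrative; the paper's models are full ResNets, not this toy):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """CBAM-style channel attention on a feature map x of shape (C, H, W).
    w1: (C//r, C) and w2: (C, C//r) form the shared two-layer MLP,
    where r is the channel-reduction ratio. Returns the gated feature map."""
    avg_pool = x.mean(axis=(1, 2))            # (C,) average-pooled descriptor
    max_pool = x.max(axis=(1, 2))             # (C,) max-pooled descriptor

    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)   # shared MLP with ReLU hidden layer

    gate = 1 / (1 + np.exp(-(mlp(avg_pool) + mlp(max_pool))))  # sigmoid, (C,)
    return x * gate[:, None, None]            # reweight each channel
```

Because the gate lies in (0, 1), attention can only attenuate channels, steering the network toward the most informative ones; the "Within Blocks" vs. "Before Classifier" variants differ only in where this module is inserted in the ResNet.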

