Sign Language Recognition
Recently Published Documents


TOTAL DOCUMENTS: 1114 (FIVE YEARS: 446)

H-INDEX: 42 (FIVE YEARS: 8)

Sensors, 2022, Vol 22 (2), pp. 574
Author(s): Kanchon Kanti Podder, Muhammad E. H. Chowdhury, Anas M. Tahir, Zaid Bin Mahbub, Amith Khandakar, ...

A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people into the mainstream workforce in Bangladesh. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy may vary with skin tone, hand orientation, and background. This research used deep learning models for accurate and reliable recognition of BdSL alphabets and numerals on two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, built to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-working model for BdSL alphabet and numeral interpretation; the CNN model trained on images with backgrounds proved more effective than the one trained without. In the segmentation approach, hand detection must become more accurate to boost overall sign-recognition accuracy. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity, outperforming prior work on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation, so that hearing- and speech-impaired individuals can benefit from it.
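The abstract reports accuracy, precision, sensitivity, specificity, and F1 score together. As a reminder of how these per-class figures derive from a confusion matrix, here is a minimal pure-Python sketch; the 3-class matrix below is a toy example, not data from the paper:

```python
# Per-class (one-vs-rest) metrics from a multi-class confusion matrix.
# The counts below are illustrative only, not from the BdSL dataset.

def class_metrics(cm, k):
    """Metrics for class k of a square confusion matrix cm,
    where cm[i][j] counts samples with true class i predicted as j."""
    n = len(cm)
    tp = cm[k][k]
    fn = sum(cm[k][j] for j in range(n)) - tp          # missed class-k samples
    fp = sum(cm[i][k] for i in range(n)) - tp          # wrongly labeled as k
    tn = sum(cm[i][j] for i in range(n) for j in range(n)) - tp - fn - fp
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                        # a.k.a. recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, specificity, f1

# Toy 3-class confusion matrix (rows = true class, cols = predicted class)
cm = [[50, 0, 0],
      [1, 48, 1],
      [0, 0, 50]]
p, sens, spec, f1 = class_metrics(cm, 0)
```

With these toy counts, class 0 is always found (sensitivity 1.0) but one sample from class 1 leaks into it, so precision and specificity dip slightly below 1.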


Author(s): Dr. Pooja M R, Meghana M, Harshith Bhaskar, Anusha Hulatti, ...

Many people live with disabilities such as deafness, speech impairment, or blindness, and face considerable challenges when trying to interact and communicate with others. This paper presents a new technique that provides a virtual solution without using any sensors. Histogram of Oriented Gradients (HOG) features combined with an Artificial Neural Network (ANN) have been implemented. The user works with a web camera, which captures images of the different gestures; the algorithm recognizes each image and identifies the corresponding voice output. The paper describes two-way communication between impaired and unimpaired people, meaning the proposed approach can convert sign language to text and voice.
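HOG, as used here, describes a gesture image by histograms of local gradient orientations. A minimal stdlib sketch of a single HOG cell follows; the patch, bin count, and function name are illustrative assumptions, not the paper's implementation:

```python
import math

def hog_cell_histogram(patch, bins=9):
    """Unsigned gradient-orientation histogram (one HOG cell) for a
    2-D grayscale patch given as a list of lists of intensities."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]    # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned HOG
            hist[int(angle // (180.0 / bins)) % bins] += mag
    return hist

# A vertical edge: intensity jumps along x, so all gradients fall in bin 0
patch = [[0, 0, 10, 10]] * 4
hist = hog_cell_histogram(patch)
```

A full HOG descriptor tiles the image into such cells, normalizes them over overlapping blocks, and concatenates the results before feeding the ANN.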



2022
Author(s): Muhammad Shaheer Mirza, Sheikh Muhammad Munaf, Shahid Ali, Fahad Azim, Saad Jawaid Khan

Abstract: To perform their daily activities, people must communicate with others. This can be a major obstacle for the deaf population of the world, who communicate using sign languages (SL). Pakistani Sign Language (PSL) is used by more than 250,000 deaf Pakistanis, so developing an SL recognition system would greatly facilitate these people. This study aimed to collect data for static and dynamic PSL alphabets and to develop a vision-based system for their recognition using Bag-of-Words (BoW) and Support Vector Machine (SVM) techniques. A total of 5,120 images for 36 static PSL alphabet signs and 353 videos with 45,224 frames for 3 dynamic PSL alphabet signs were collected from 10 native signers of PSL. The developed system took the collected data as input, resized it to various scales, and converted the RGB images into grayscale. The resized grayscale images were segmented using a thresholding technique, and features were extracted using Speeded-Up Robust Features (SURF). The obtained SURF descriptors were clustered using K-means, and a BoW representation was obtained by computing the Euclidean distance between each SURF descriptor and the cluster centers. The codebooks were divided into training and testing sets using 5-fold cross-validation. The highest overall classification accuracy for static PSL signs was 97.80%, at 750×750 image dimensions with 500 bags; for dynamic PSL signs, 96.53% accuracy was obtained at 480×270 video resolution with 200 bags.
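The BoW step described above quantizes each SURF descriptor to its nearest K-means center and counts the assignments. A minimal sketch of that quantization, using stdlib Euclidean distance; the 2-D "descriptors" and three centers are toy stand-ins for 64-D SURF vectors and a 200-500 word codebook:

```python
from math import dist

def bow_histogram(descriptors, centers):
    """Quantize each descriptor to its nearest cluster center (Euclidean
    distance) and count occurrences: the Bag-of-Words vector."""
    hist = [0] * len(centers)
    for d in descriptors:
        nearest = min(range(len(centers)), key=lambda k: dist(d, centers[k]))
        hist[nearest] += 1
    return hist

# Toy 2-D "descriptors" and three hypothetical K-means centers
centers = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
descriptors = [(0.1, 0.2), (4.8, 5.1), (9.7, 0.3), (0.3, -0.1)]
hist = bow_histogram(descriptors, centers)
```

In the full pipeline, one such histogram is computed per image (or per video's frames), and those fixed-length vectors become the SVM's training data.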


2022, Vol 10 (1), pp. 0-0

Developing a sign language recognition system is essential for deaf and mute people: the recognition system acts as a translator between disabled and able-bodied people, removing hindrances to the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. A speech-to-text as well as a text-to-speech system is also introduced to further assist affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with Machine Learning based sign recognition models trained using the TensorFlow and Keras libraries. The proposed architecture works better than several gesture recognition baselines, such as background elimination and conversion to HSV.
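The HSV-based background elimination mentioned as a baseline typically converts each pixel to HSV and keeps only those inside a rough skin-tone range. A minimal stdlib sketch follows; the threshold values and function name are illustrative guesses, not the paper's settings:

```python
import colorsys

def hsv_mask(pixels, h_max=0.14, s_min=0.15, v_min=0.2):
    """Background elimination sketch: convert RGB pixels (floats in 0-1)
    to HSV and keep only pixels whose hue/saturation/value fall inside a
    rough skin-tone range. Thresholds here are illustrative guesses."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        mask.append(h <= h_max and s >= s_min and v >= v_min)
    return mask

pixels = [(0.9, 0.7, 0.6),   # skin-like: low hue, moderate saturation
          (0.1, 0.2, 0.8),   # blue background
          (0.0, 0.0, 0.0)]   # dark background
mask = hsv_mask(pixels)
```

A real pipeline would apply this per frame (e.g. via OpenCV's `cv2.inRange`) before cropping the hand region and passing it to the trained Keras model.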

