3D Hand Gestures Segmentation and Optimized Classification Using Deep Learning

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Fawad Salam Khan ◽  
Mohd. Norzali Haji Mohd ◽  
Dur Muhammad Soomro ◽  
Susama Bagchi ◽  
M. Danial Khan
2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Peng Liu ◽  
Xiangxiang Li ◽  
Haiting Cui ◽  
Shanshan Li ◽  
Yafei Yuan

Hand gesture recognition is an intuitive and effective way for humans to interact with a computer due to its high processing speed and recognition accuracy. This paper proposes a novel approach to identify hand gestures in complex scenes using the Single-Shot MultiBox Detector (SSD) deep learning algorithm with a 19-layer neural network. A benchmark database of gestures is used, and general hand gestures in complex scenes are chosen as the processing objects. A real-time hand gesture recognition system based on the SSD algorithm is constructed and tested. The experimental results show that the algorithm quickly identifies human hands and accurately distinguishes different types of gestures. Furthermore, the maximum accuracy is 99.2%, which is significant for human-computer interaction applications.
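A core step in SSD-style multibox training is matching prior (anchor) boxes to ground-truth boxes by intersection-over-union. The sketch below illustrates that matching idea in NumPy; the function names, box values, and 0.5 threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def iou(boxes_a, boxes_b):
    """Pairwise intersection-over-union for boxes in (x1, y1, x2, y2) form."""
    # Broadcast to an (A, B) grid of box pairs.
    x1 = np.maximum(boxes_a[:, None, 0], boxes_b[None, :, 0])
    y1 = np.maximum(boxes_a[:, None, 1], boxes_b[None, :, 1])
    x2 = np.minimum(boxes_a[:, None, 2], boxes_b[None, :, 2])
    y2 = np.minimum(boxes_a[:, None, 3], boxes_b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def match_anchors(anchors, gt_boxes, threshold=0.5):
    """Assign each anchor its best-overlapping ground-truth box,
    or -1 (background) when the best IoU falls below the threshold."""
    overlaps = iou(anchors, gt_boxes)      # shape (num_anchors, num_gt)
    best_gt = overlaps.argmax(axis=1)
    best_gt[overlaps.max(axis=1) < threshold] = -1
    return best_gt

anchors = np.array([[0, 0, 10, 10], [20, 20, 30, 30], [5, 5, 15, 15]], float)
gt = np.array([[0, 0, 10, 10]], float)
print(match_anchors(anchors, gt))  # [ 0 -1 -1]: only the first anchor matches
```

During training, matched anchors receive the class and box-regression targets of their ground-truth box, while background anchors contribute only to the negative classification loss.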


Author(s):  
Dhawal Mali ◽  
Atul Kamble ◽  
Shubham Gogate ◽  
Jignesh Sisodia

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1952 ◽  
Author(s):  
Wentao Sun ◽  
Huaxin Liu ◽  
Rongyu Tang ◽  
Yiran Lang ◽  
Jiping He ◽  
...  

Conventional pattern-recognition algorithms for surface electromyography (sEMG)-based hand-gesture classification have difficulties in capturing the complexity and variability of sEMG. The deep structures of deep learning enable the method to learn high-level features of data to improve both accuracy and robustness of a classification. However, the features learned through deep learning are incomprehensible, and this issue has precluded the use of deep learning in clinical applications where model comprehension is required. In this paper, a generative flow model (GFM), which is a recent flourishing branch of deep learning, is used with a softmax classifier for hand-gesture classification. The proposed approach achieves 63.86 ± 5.12% accuracy in classifying 53 different hand gestures from the NinaPro database 5. The distribution of all 53 hand gestures is modelled by the GFM, and each dimension of the feature learned by the GFM is comprehensible using the reverse flow of the GFM. Moreover, the feature appears to be related to muscle synergy to some extent.
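The "reverse flow" comprehensibility the abstract mentions rests on flow models being built from exactly invertible layers. The sketch below shows one RealNVP-style affine coupling layer in NumPy and verifies that its inverse reconstructs the input exactly; the weight shapes and tanh-bounded log-scale are illustrative assumptions, not the paper's GFM.

```python
import numpy as np

def coupling_forward(x, w_s, w_t):
    """One affine coupling layer: the first half of x parameterises an
    invertible scale-and-shift of the second half."""
    x1, x2 = np.split(x, 2)
    s = np.tanh(w_s @ x1)          # log-scale, kept bounded for stability
    t = w_t @ x1                   # shift
    y2 = x2 * np.exp(s) + t
    return np.concatenate([x1, y2])

def coupling_inverse(y, w_s, w_t):
    """Exact inverse: rerun the same sub-network on the untouched half."""
    y1, y2 = np.split(y, 2)
    s = np.tanh(w_s @ y1)
    t = w_t @ y1
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

rng = np.random.default_rng(0)
w_s, w_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
x = rng.normal(size=4)
y = coupling_forward(x, w_s, w_t)
x_rec = coupling_inverse(y, w_s, w_t)
print(np.allclose(x, x_rec))  # True: the flow is exactly invertible
```

Because every layer inverts exactly, a learned feature vector can be pushed back through the reverse flow to a point in sEMG space, which is what makes each feature dimension inspectable.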


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 182
Author(s):  
Aveen Dayal ◽  
Naveen Paluru ◽  
Linga Reddy Cenkeramaddi ◽  
Soumya J. ◽  
Phaneendra K. Yalavarthy

Hand-gesture-based sign language digits have several contactless applications, including communication for impaired people, such as elderly and disabled people, health-care applications, automotive user interfaces, and security and surveillance. This work presents the design and implementation of a complete end-to-end deep learning based edge computing system that can verify a user contactlessly using an ‘authentication code’. The ‘authentication code’ is an ‘n’ digit numeric code whose digits are hand gestures of sign language digits. We propose a memory-efficient deep learning model to classify the hand gestures of the sign language digits. The proposed deep learning model is based on the bottleneck module, which is inspired by the deep residual networks. The model achieves a classification accuracy of 99.1% on the publicly available sign language digits dataset. The model is deployed on a Raspberry Pi 4 Model B edge computing system to serve as an edge device for user verification. The edge computing system operates in two steps: it first takes input from the attached camera in real time and stores it in a buffer; in the second step, the model classifies the digit, with an inference time of 280 ms, taking the first image in the buffer as input.
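The memory efficiency of a ResNet-style bottleneck module comes from squeezing the channel count before the expensive 3×3 convolution. The sketch below counts weights to show the saving; the channel widths and squeeze factor are illustrative assumptions, not the paper's exact architecture.

```python
def conv_params(in_ch, out_ch, k):
    """Weights in a k x k convolution (biases ignored for simplicity)."""
    return in_ch * out_ch * k * k

def plain_block(ch):
    """Two stacked 3x3 convolutions at full width."""
    return 2 * conv_params(ch, ch, 3)

def bottleneck_block(ch, squeeze=4):
    """1x1 reduce -> 3x3 at reduced width -> 1x1 expand (ResNet-style)."""
    mid = ch // squeeze
    return (conv_params(ch, mid, 1)      # squeeze channels
            + conv_params(mid, mid, 3)   # cheap spatial convolution
            + conv_params(mid, ch, 1))   # expand back for the residual add

ch = 256
print(plain_block(ch), bottleneck_block(ch))  # 1179648 vs 69632 weights
```

At 256 channels the bottleneck uses roughly 6% of the weights of a plain two-layer block, which is what makes this kind of model practical on a memory-constrained device like a Raspberry Pi.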


2021 ◽  
Author(s):  
Mahyudin Ritonga ◽  
Rasha M.Abd El-Aziz ◽  
Varsha Dr. ◽  
Maulik Bader Alazzam ◽  
Fawaz Alassery ◽  
...  

Abstract Recognizing the gestures and hand signs of Arabic Sign Language with deep learning models has attracted substantial research activity. Sign languages are the gestures used by hearing-impaired people for communication, and these gestures are difficult for other people to understand. Because Arabic Sign Language (ArSL) varies from one territory to another, or between countries, its recognition has become an arduous research problem. ArSL recognition has been studied and implemented using multiple traditional and intelligent approaches, but only a few attempts have been made to enhance the process with deep learning networks. The proposed system encapsulates a Convolutional Neural Network (CNN) based machine learning technique that uses wearable sensors to recognize ArSL. The model suits all local Arabic gestures used by the hearing-impaired people of the local Arabic community, and achieves reasonable, moderate accuracy. First, a deep convolutional network is built for feature extraction from the data collected by the wearable sensors, which are used to accurately recognize the 30 hand-sign letters of the Arabic sign language. DG5-V hand gloves embedded with wearable sensors capture the hand movements in the dataset, and the CNN performs the classification. The hand gestures of Arabic Sign Language are the input, and vocalized speech is the output of the proposed system. The results achieved a recognition rate of 90%, and the system was found highly efficient for translating ArSL hand gestures into speech and writing.
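Sensor-glove data arrives as multi-channel time series, so the first CNN stage typically slides filters along the time axis of a sensor window. The sketch below implements a 1D valid convolution with ReLU in NumPy to show that feature-extraction step; the channel count, window length, and filter sizes are hypothetical, not the DG5-V configuration.

```python
import numpy as np

def conv1d_valid(x, kernels):
    """x: (channels, time); kernels: (filters, channels, width).
    Returns (filters, time - width + 1) ReLU feature maps."""
    f, c, w = kernels.shape
    t = x.shape[1] - w + 1
    out = np.zeros((f, t))
    for i in range(t):
        # Each output step is a dot product of the window with every filter.
        out[:, i] = np.tensordot(kernels, x[:, i:i + w], axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU non-linearity

rng = np.random.default_rng(1)
window = rng.normal(size=(5, 20))     # 5 hypothetical glove channels, 20 samples
kernels = rng.normal(size=(8, 5, 3))  # 8 learned filters of width 3
features = conv1d_valid(window, kernels)
print(features.shape)  # (8, 18)
```

Stacking such layers with pooling, and ending in a softmax over the 30 letter classes, gives the overall shape of a CNN classifier for glove-sensor streams.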


2021 ◽  
Vol 14 (1) ◽  
pp. 316-325
Author(s):  
Eman Elsayed ◽  
Doaa Fathy

Dynamic Sign Language Recognition aims to recognize the hand gestures of any person. Dynamic Sign Language Recognition systems face challenges in recognizing the semantics of hand gestures, which arise from personal differences in hand signs from one person to another. Real-life video gesture frames cannot be treated frame by frame as static signs. This research proposes a semantic translation system for dynamic hand gestures using deep learning and ontology. We used the proposed MSLO (Multi Sign Language Ontology) in the semantic translation step, and any user can retrain the system to personalize it. We used three-dimensional Convolutional Neural Networks followed by convolutional long short-term memory to improve recognition accuracy in dynamic sign language recognition. We applied the proposed system to three dynamic gesture datasets of color videos, achieving an average recognition accuracy of 97.4%. We performed all training and testing on a Graphics Processing Unit with the support of Google Colab, which decreased the average run time of training by about 87.9%. In addition to adding semantics to dynamic sign language translation, the proposed system achieves good results compared to some dynamic sign language recognition systems.
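A convolutional LSTM carries the usual LSTM gate arithmetic over spatial feature maps, which is what lets it model gesture motion across frames after the 3D CNN stage. The sketch below runs one ConvLSTM time step in NumPy, simplified to 1×1 convolutions (a per-pixel matrix multiply) to keep the gate logic visible; all shapes and weights are illustrative assumptions, not the paper's network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, Wx, Wh):
    """One ConvLSTM time step with 1x1 convolutions, i.e. a matmul applied
    at every pixel. x, h, c: (channels, H, W); Wx, Wh: (4*channels, channels)."""
    # Stacked gate pre-activations: input, forget, output, candidate.
    z = np.einsum('oc,chw->ohw', Wx, x) + np.einsum('oc,chw->ohw', Wh, h)
    i, f, o, g = np.split(z, 4, axis=0)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated memory update
    h_new = sigmoid(o) * np.tanh(c_new)               # gated hidden state
    return h_new, c_new

rng = np.random.default_rng(2)
ch, H, W = 4, 6, 6
x = rng.normal(size=(ch, H, W))                    # one frame's feature map
h = np.zeros((ch, H, W)); c = np.zeros((ch, H, W)) # initial recurrent state
Wx = rng.normal(size=(4 * ch, ch)) * 0.1
Wh = rng.normal(size=(4 * ch, ch)) * 0.1
h, c = convlstm_step(x, h, c, Wx, Wh)
print(h.shape)  # (4, 6, 6): state keeps the spatial layout of the features
```

A real ConvLSTM replaces the 1×1 contraction with k×k convolutions so each gate also pools spatial context, but the recurrence over frames is the same.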

