Hand Gesture Recognition System using Deep Learning

Hand gesture recognition is a natural means of human-computer interaction and an area of active research in computer vision and machine learning. It has a wide range of possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces without the need for additional devices. The primary goal of gesture recognition research applied to Human-Computer Interaction (HCI) is therefore to create systems that can identify specific human gestures and use them to convey information or control devices. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and real-time gesture recognition. This paper presents a solution, generic enough thanks to deep learning, to allow its application to a wide range of human-computer interfaces for real-time gesture recognition. Experiments showed that the system was able to achieve an accuracy of 99.4% in hand posture recognition and an average accuracy of 93.72% in dynamic gesture recognition.
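As an illustration of the classification stage such a system typically ends with, the sketch below maps a feature vector extracted from a hand image to gesture-class probabilities with a softmax layer. The layer sizes, weights, and class names are hypothetical assumptions for illustration, not the paper's actual network.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_gesture(features, weights, bias, labels):
    """Map a feature vector to the most probable gesture label."""
    probs = softmax(features @ weights + bias)
    return labels[int(np.argmax(probs))], probs

# Hypothetical example: 4-dim features, 3 gesture classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
label, probs = classify_gesture(np.array([1.0, 0.5, -0.2, 0.3]), W, b,
                                ["fist", "palm", "point"])
```

In a real network the feature vector would come from convolutional layers and the weights from training rather than random initialization.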

2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Hong-Min Zhu ◽  
Chi-Man Pun

We propose an adaptive and robust superpixel-based hand gesture tracking system, in which hand gestures drawn in free air are recognized from their motion trajectories. First, we employ motion detection of superpixels and unsupervised image segmentation to detect the moving target hand in the first few frames of the input video sequence. The hand appearance model is then constructed from its surrounding superpixels. By incorporating failure recovery and template matching into the tracking process, the target hand is tracked by an adaptive superpixel-based tracking algorithm, in which hand deformation, view-dependent appearance variation, fast motion, and background confusion are well handled to extract the correct hand motion trajectory. Finally, the hand gesture is recognized from the extracted motion trajectory with a trained SVM classifier. Experimental results show that our proposed system achieves better performance than existing state-of-the-art methods, with recognition accuracies of 99.17% on the easy set and 98.57% on the hard set.
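Before a motion trajectory can be fed to an SVM, it is usually resampled to a fixed number of points and normalized for translation and scale. The paper's exact feature encoding is not given here; the following is a minimal sketch of one common approach, with an illustrative four-point trajectory.

```python
import numpy as np

def trajectory_features(points, n_samples=16):
    """Resample a 2D trajectory to n_samples points and normalize it.

    points: (N, 2) array of (x, y) hand positions over time.
    Returns a flat feature vector of length 2 * n_samples.
    """
    points = np.asarray(points, dtype=float)
    # Arc-length parameterization so points are evenly spaced along the path.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t = t / t[-1] if t[-1] > 0 else np.linspace(0, 1, len(points))
    u = np.linspace(0.0, 1.0, n_samples)
    resampled = np.stack([np.interp(u, t, points[:, i]) for i in (0, 1)], axis=1)
    # Normalize for translation and scale.
    resampled -= resampled.mean(axis=0)
    scale = np.abs(resampled).max()
    if scale > 0:
        resampled /= scale
    return resampled.ravel()

# Hypothetical trajectory of four tracked hand positions.
feat = trajectory_features([[0, 0], [1, 0], [2, 1], [3, 3]])
```

The fixed-length vector `feat` could then be passed to any standard SVM implementation for training and classification.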


2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Peng Liu ◽  
Xiangxiang Li ◽  
Haiting Cui ◽  
Shanshan Li ◽  
Yafei Yuan

Hand gesture recognition is an intuitive and effective way for humans to interact with a computer, given its high processing speed and recognition accuracy. This paper proposes a novel approach to identifying hand gestures in complex scenes with the Single-Shot Multibox Detector (SSD) deep learning algorithm, using a 19-layer neural network. A benchmark gesture database is used, and common hand gestures in complex scenes are chosen as the processing objects. A real-time hand gesture recognition system based on the SSD algorithm is constructed and tested. The experimental results show that the algorithm quickly identifies human hands and accurately distinguishes different types of gestures. Furthermore, the maximum accuracy is 99.2%, which is significant for human-computer interaction applications.
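Detectors in the SSD family emit many overlapping candidate boxes per hand, which are pruned after scoring by non-maximum suppression (NMS). The sketch below shows a minimal NMS in plain numpy; it is a standard post-processing step, not the paper's specific implementation.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one (x1, y1, x2, y2) box against many."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep highest-scoring boxes, dropping any that overlap a kept box."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep

# Two heavily overlapping hand detections plus one distinct detection.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # drops the duplicate of the first box
```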


Nowadays, robots are controlled by remote control, mobile phone, or a direct wired connection. Considering the cost and the hardware required, these approaches increase complexity, especially for low-level applications. The robot we have designed is quite different: it does not require any remote control or communication module. It is a self-activated robot that drives itself according to the position of a user standing in front of it, doing what the user wants by mimicking the user's movements. The hardware required is small, and hence low-cost and compact. Lately, there has been a surge of interest in hand-gesture-controlled robots. Hand gesture recognition has several uses, for example in computer games and gaming machines, as a mouse replacement, and in machine-controlled robots (e.g., cranes, surgical machines, robotics, and artificial intelligence).


Gesture recognition is a major area in Human-Computer Interaction (HCI). HCI allows computers to capture and interpret human gestures as commands. A real-time hand gesture recognition system is implemented and used for operating electronic appliances. The system is built from deep learning models, namely a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN); the combined model effectively recognizes both static and dynamic hand gestures. The model's accuracy when using the pre-trained VGG16 CNN is also investigated.
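To make the static/dynamic combination concrete, the sketch below folds a sequence of per-frame feature vectors (such as a CNN would produce) through a simple Elman-style recurrent cell, so the final hidden state summarizes the motion. All dimensions and weights here are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def rnn_over_frames(frame_features, W_in, W_h, b):
    """Fold per-frame CNN features into one hidden state.

    frame_features: (T, d_in) array, one row per video frame.
    Returns the final hidden state of shape (d_hidden,).
    """
    h = np.zeros(W_h.shape[0])
    for x in frame_features:
        h = np.tanh(W_in @ x + W_h @ h + b)
    return h

# Illustrative sizes: 8-dim frame features, 5-dim hidden state, 10 frames.
rng = np.random.default_rng(1)
W_in = rng.normal(scale=0.1, size=(5, 8))
W_h = rng.normal(scale=0.1, size=(5, 5))
b = np.zeros(5)
h = rnn_over_frames(rng.normal(size=(10, 8)), W_in, W_h, b)
```

A static gesture can be classified from a single frame's features, while the final state `h` carries the temporal information needed for dynamic gestures.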


2021 ◽  
Author(s):  
Zhengjie Wang ◽  
Xue Song ◽  
Jingwen Fan ◽  
Fang Chen ◽  
Naisheng Zhou ◽  
...  

2020 ◽  
Vol 10 (15) ◽  
pp. 5293 ◽  
Author(s):  
Rebeen Ali Hamad ◽  
Longzhi Yang ◽  
Wai Lok Woo ◽  
Bo Wei

Human activity recognition has become essential to a wide range of applications, such as smart home monitoring, health care, and surveillance. However, it is challenging to deliver a sufficiently robust human activity recognition system from raw, noisy sensor data in a smart environment setting. Moreover, imbalanced human activity datasets, in which some activities occur far less frequently than others, create extra challenges for accurate activity recognition. Deep learning algorithms have achieved promising results on balanced datasets, but their performance on imbalanced datasets cannot be guaranteed without explicit algorithm design. We therefore aim to realise an activity recognition system using multi-modal sensors that addresses class imbalance in deep learning and improves recognition accuracy. This paper proposes a joint diverse temporal learning framework using Long Short-Term Memory (LSTM) and one-dimensional Convolutional Neural Network models to improve human activity recognition, especially for less represented activities. We extensively evaluate the proposed method for Activities of Daily Living recognition using binary sensor datasets. A comparative study on five smart home datasets demonstrates that our proposed approach outperforms the existing individual temporal models and their hybridization, particularly for minority classes, in addition to a reasonable improvement on the majority classes of human activities.
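One common, complementary way to counter class imbalance during training is to weight each class's loss inversely to its frequency, so rare activities (e.g. a seldom-performed chore) contribute as much as dominant ones (e.g. sleeping). A minimal sketch of inverse-frequency weighting, with hypothetical activity labels:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class loss weights w_c = N / (n_classes * count_c), so that
    rare classes are up-weighted and frequent classes down-weighted."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical imbalanced log: "sleep" dominates, "cook" is rare.
w = inverse_frequency_weights(["sleep"] * 8 + ["cook"] * 2)
```

These weights can be passed to most deep learning frameworks' loss functions so that misclassifying a minority-class sample is penalized more heavily.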


Author(s):  
Dina Satybaldina ◽  
Gulzia Kalymova

Hand gesture recognition has become a popular topic in deep learning; it offers many application fields for bridging the human-computer barrier and has a positive impact on daily life. The primary idea of our project is to acquire static gestures from a depth camera and to process the input images to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm, and object classification. For pre-processing and image segmentation, computer vision methods from the OpenCV and Intel RealSense libraries are used. The subsystem for feature extraction and gesture classification is based on a modified VGG-16, implemented with the TensorFlow/Keras deep learning framework. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
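With a depth camera such as the RealSense D435, the hand is often segmented simply by keeping pixels whose depth falls in a band in front of the sensor. The band limits below are illustrative assumptions; the paper relies on OpenCV/RealSense routines whose exact parameters are not given here.

```python
import numpy as np

def segment_hand_by_depth(depth_mm, near=300, far=700):
    """Binary mask of pixels whose depth (in millimetres) lies in [near, far].

    Zero depth (no return from the sensor) is always excluded.
    """
    depth_mm = np.asarray(depth_mm)
    return (depth_mm >= near) & (depth_mm <= far)

# Toy 2x3 depth frame: only the 500 mm pixels fall in the hand band.
mask = segment_hand_by_depth(np.array([[0, 500, 500],
                                       [900, 500, 200]]))
```

The resulting mask would then be cleaned up (e.g. with morphological operations) and cropped before being fed to the classification network.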


Author(s):  
Lery Sakti Ramba

The purpose of this research is to design a home automation system that can be controlled using voice commands. The research was conducted by studying other work related to its topics, discussing with competent parties, designing the system, testing it, and analysing the test results. The voice recognition system was designed using a Deep Learning Convolutional Neural Network (DL-CNN); the CNN model was then trained to recognize several kinds of voice commands. The result is a speech recognition system that can control several electronic devices connected to it. The system achieves a 100% success rate in a room with a background noise intensity of 24 dB (silent), 67.67% with a background noise intensity of 42 dB, and only 51.67% with a background noise intensity of 52 dB (noisy). The success rate is thus strongly influenced by the intensity of background noise in the room; to obtain optimal results, the speech recognition system is more suitable for use in rooms with low-intensity background noise.

