An IoT based Game Controlling Device (JoyGlove)

Author(s):  
Harsh Ganpatbhai Rana ◽  
Aayushi Sanjay Soni

Nowadays, there is a huge inclination of youngsters and adults towards video games, and most of us experienced them in childhood. Typically, games are controlled with input devices such as a mouse, keyboard, or joystick, but what if we could control a game with hand gestures? Many modern game controllers enhance the gaming experience, but they are also quite expensive. In this project we have designed our own game-controlling glove using an Arduino; in addition, we have developed a car game in Unity 3D. The project explores the areas of IoT and hand gesture recognition.
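The abstract does not include the glove's firmware or host code, but a minimal host-side sketch gives the flavour of the pipeline: assuming the Arduino glove streams comma-separated tilt readings over USB serial, a small Python script (using the pyserial package) can map them to game commands. The port name, baud rate, and thresholds below are all illustrative assumptions.

```python
# Minimal host-side reader: assumes the Arduino glove streams
# comma-separated tilt readings (one sample per line) over USB serial.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical port; adjust for your system
BAUD = 9600

def reading_to_command(tilt_x, tilt_y):
    """Map raw tilt readings to a steering command (thresholds are illustrative)."""
    if tilt_x > 0.5:
        return "RIGHT"
    if tilt_x < -0.5:
        return "LEFT"
    if tilt_y > 0.5:
        return "ACCELERATE"
    return "IDLE"

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            tilt_x, tilt_y = (float(v) for v in line.split(",")[:2])
        except ValueError:
            continue  # skip malformed frames
        print(reading_to_command(tilt_x, tilt_y))
```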

2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, recognition accuracy may degrade in practical applications for various reasons, such as electrode positioning bias and differences between subjects. In addition, the change in electromyogram signals caused by different arm postures, even for identical hand gestures, is an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses the accelerometer and electromyogram signals simultaneously to recognize hand gestures correctly across various arm postures. For gesture recognition, the electromyogram signals are statistically modeled with the arm postures taken into account. In our experiments, we compared cases that took arm postures into account with cases that disregarded them. When varied arm postures were disregarded, the recognition accuracy for hand gestures was 54.1%, whereas the method proposed in this study achieved an average recognition accuracy of 85.7%, an improvement of 31.6 percentage points. Using the accelerometer and electromyogram signals together compensated for the effect of different arm postures on the electromyogram signals and therefore improved the recognition accuracy of hand gestures.
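The paper's exact statistical model is not reproduced here, but the core idea, conditioning the EMG gesture model on the accelerometer-derived arm posture, can be sketched as follows. All features and labels below are synthetic placeholders, and Gaussian naive Bayes stands in for the authors' statistical model.

```python
# Posture-conditioned gesture classification: the accelerometer selects which
# posture-specific EMG model to apply. Data here is synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_postures, n_gestures, n_feat = 3, 4, 8

# One statistical model of EMG features per arm posture.
models = []
for p in range(n_postures):
    X = rng.normal(loc=p, size=(200, n_feat))   # fake EMG features per posture
    y = rng.integers(0, n_gestures, size=200)   # fake gesture labels
    models.append(GaussianNB().fit(X, y))

def classify(acc_posture_id, emg_features):
    """Accelerometer output picks the posture; that posture's model labels the gesture."""
    return models[acc_posture_id].predict(emg_features.reshape(1, -1))[0]

print(classify(1, rng.normal(loc=1, size=n_feat)))
```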


2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Peng Liu ◽  
Xiangxiang Li ◽  
Haiting Cui ◽  
Shanshan Li ◽  
Yafei Yuan

Hand gesture recognition is an intuitive and effective way for humans to interact with a computer owing to its high processing speed and recognition accuracy. This paper proposes a novel approach to identifying hand gestures in complex scenes using the Single-Shot MultiBox Detector (SSD) deep learning algorithm with a 19-layer neural network. A benchmark gesture database is used, and general hand gestures in complex scenes are chosen as the processing objects. A real-time hand gesture recognition system based on the SSD algorithm is constructed and tested. The experimental results show that the algorithm quickly identifies human hands and accurately distinguishes different types of gestures. Furthermore, the maximum accuracy is 99.2%, which is significant for human-computer interaction applications.
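The authors' 19-layer network and gesture classes are not specified in this abstract, so the sketch below only illustrates the detection step with torchvision's off-the-shelf SSD300-VGG16 detector; a gesture system would fine-tune such a detector on hand gesture classes. The image path is hypothetical.

```python
# Illustrative SSD detection step only; not the paper's own network.
import torch
from torchvision.io import read_image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("hand_scene.jpg")              # hypothetical test image
with torch.no_grad():
    pred = model([preprocess(img)])[0]          # dict of boxes, labels, scores

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:                             # keep confident detections only
        print(weights.meta["categories"][int(label)], box.tolist(), float(score))
```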


2016 ◽  
Vol 11 (1) ◽  
pp. 30-35
Author(s):  
Manoj Acharya ◽  
Dibakar Raj Pant

This paper proposes a method to recognize static hand gestures in an image or video where a person is performing Nepali Sign Language (NSL) and to translate them into words and sentences. The classification is carried out using a neural network with the contour of the hand as the feature. The work is verified successfully for NSL recognition using signer-dependency analysis.
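A plausible reading of this pipeline, segmenting the hand, extracting its contour, and feeding a fixed-length contour descriptor to a neural network, can be sketched as below. The thresholding, normalisation, and classifier choices are illustrative assumptions, not the authors' exact design.

```python
# Contour-as-feature sketch: largest contour, resampled to a fixed-length,
# translation/scale-normalised descriptor.
import cv2
import numpy as np

def contour_descriptor(gray, n_points=64):
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    hand = max(contours, key=cv2.contourArea).squeeze(1).astype(np.float32)
    idx = np.linspace(0, len(hand) - 1, n_points).astype(int)
    pts = hand[idx]
    pts -= pts.mean(axis=0)                 # translation invariance
    pts /= np.abs(pts).max() + 1e-9         # scale invariance
    return pts.ravel()                      # would feed a small classifier,
                                            # e.g. sklearn's MLPClassifier

# smoke test on a synthetic blob standing in for a hand silhouette
demo = np.zeros((120, 120), np.uint8)
cv2.circle(demo, (60, 60), 40, 255, -1)
print(contour_descriptor(demo).shape)       # (128,)
```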


2020 ◽  
Vol 7 (2) ◽  
pp. 164
Author(s):  
Aditiya Anwar ◽  
Achmad Basuki ◽  
Riyanto Sigit

Hand gestures are a means of communication between deaf people and others, and each hand gesture has a different meaning. To communicate better, we need an automatic translator that can recognize hand movements as words or sentences when communicating with deaf people. This paper proposes a system to recognize hand gestures based on the Indonesian Sign Language standard. The system uses the Myo Armband as the hand gesture sensor; the Myo Armband has 21 sensors to capture hand gesture data. The recognition process uses a Support Vector Machine (SVM) to classify hand gestures against a dataset of the Indonesian Sign Language standard. The SVM achieves an accuracy of 86.59% in recognizing hand gestures as sign language.

Keywords: Hand Gesture Recognition, Feature Extraction, Indonesian Sign Language, Myo Armband, Moment Invariant
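As a hedged illustration of the classification stage only, the sketch below trains an SVM on 21-dimensional feature vectors, matching the number of Myo Armband sensor channels reported above. The data and the ten sign classes are synthetic placeholders, not the authors' Indonesian Sign Language dataset.

```python
# SVM classification sketch on synthetic 21-channel feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_samples, n_channels = 300, 21          # 21 sensor channels, as in the paper
X = rng.normal(size=(n_samples, n_channels))
y = rng.integers(0, 10, size=n_samples)  # 10 hypothetical sign classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy (synthetic data):", clf.score(X_te, y_te))
```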


2019 ◽  
Vol 8 (4) ◽  
pp. 1027-1029

Gesture recognition technologies are relatively new, and hand gesture recognition is commonly done using glove-based techniques. In this project, a vehicle is controlled by hand gestures: the gesture-controlled robot car is driven by hand signs rather than by the buttons used in older designs. The controller wears a small transmitter with an accelerometer on the hand, and the car receives its signals through an RF receiver. We build on ideas from previous projects and add extra features, using different sensors for a better implementation. Earlier designs suffered from transmission problems, so we modified the transmission stage and added an extra antenna to extend the signal range. A microcontroller moves the robot in the same direction as the hand.
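The transmitter-side logic can be sketched as a simple quantisation of two-axis tilt into movement commands before RF transmission. The thresholds and the send() stub below are illustrative; real hardware would write the command to the RF module instead of printing it.

```python
# Tilt-to-command quantisation, as would run on the hand-worn transmitter.

def tilt_to_command(ax: float, ay: float, dead_zone: float = 0.3) -> str:
    """Quantise 2-axis tilt (in g) into one of five robot commands."""
    if abs(ax) < dead_zone and abs(ay) < dead_zone:
        return "STOP"
    if abs(ay) >= abs(ax):
        return "FORWARD" if ay > 0 else "BACKWARD"
    return "RIGHT" if ax > 0 else "LEFT"

def send(command: str) -> None:
    print("TX:", command)   # stand-in for the RF transmitter write

for sample in [(0.0, 0.8), (0.9, 0.1), (-0.7, -0.2), (0.1, 0.0)]:
    send(tilt_to_command(*sample))
```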


2020 ◽  
Author(s):  
David Huerta ◽  
Eric Crawford ◽  
Scott Brown

Human-Computer Interaction (HCI) has been redefined in this era. People want to interact with their devices in a way that has physical significance in the real world; in other words, they want ergonomic input devices. In this paper, we propose a new method of interacting with computing devices equipped with a consumer-grade camera, using two colored markers (red and green) worn on the fingertips to generate the desired hand gestures; for marker detection and tracking we used template matching with a Kalman filter. We have implemented all the usual system commands, i.e., cursor movement, right click, left click, double click, going forward and backward, and zooming in and out, through different hand gestures. Our system can easily recognize these gestures and issue the corresponding system commands. It is suitable both for desktop devices and for devices where a touch screen is not feasible, such as large or projected screens.
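As a rough sketch of the tracking stage, the snippet below smooths a red fingertip marker's position with OpenCV's Kalman filter. For brevity, the centroid of an HSV colour mask stands in for the paper's template matching, and the HSV bounds and noise covariances are illustrative.

```python
# Constant-velocity Kalman filter (state x, y, vx, vy) smoothing the marker
# position found by colour thresholding.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_red_marker(frame_bgr):
    """Return the smoothed (x, y) of the red marker, coasting on the
    prediction when the marker is not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))   # rough red band
    pts = cv2.findNonZero(mask)
    prediction = kf.predict()
    if pts is None:
        return float(prediction[0]), float(prediction[1])
    x, y = pts.reshape(-1, 2).mean(axis=0)
    est = kf.correct(np.array([[x], [y]], np.float32))
    return float(est[0]), float(est[1])

# synthetic frame with a red dot; the estimate converges toward (160, 120)
frame = np.zeros((240, 320, 3), np.uint8)
cv2.circle(frame, (160, 120), 5, (0, 0, 255), -1)
for _ in range(30):
    x, y = track_red_marker(frame)
print(round(x), round(y))
```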


Author(s):  
Priyanka R. ◽  
Prahanya Sriram ◽  
Jayasree L. N. ◽  
Angelin Gladston

Gesture recognition is the most intuitive form of human-computer interface. Hand gestures provide a natural way for humans to interact with computers across a variety of applications. However, factors such as the complexity of hand gesture structures, differences in hand size, hand posture, and environmental illumination can influence the performance of hand gesture recognition algorithms. Considering these factors, this paper presents a real-time system for hand gesture recognition based on the detection of meaningful shape-based features such as orientation, center of mass, the status of the fingers and thumb (raised or folded), and their respective locations in the image. The internet is growing at a very fast pace, and the use of web browsers is growing with it; everyone has at least two or three most frequently visited websites. Thus, in this paper, the effectiveness of the gesture recognition and its ability to control the browser via the recognized hand gestures are experimented with, and the results are analyzed.
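Two of the shape-based features named above, the centre of mass and the raised/folded status of fingers, can be sketched with OpenCV image moments and convexity defects. The defect depth threshold is an illustrative heuristic, and the binary hand mask is assumed to come from an earlier segmentation step.

```python
# Centre of mass via image moments; finger-count estimate via convexity defects.
import cv2
import numpy as np

def shape_features(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)

    m = cv2.moments(hand)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centre of mass

    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    fingers = 0
    if defects is not None:
        # deep valleys between fingertips approximate raised-finger gaps
        fingers = int(np.sum(defects[:, 0, 3] / 256.0 > 20)) + 1
    return (cx, cy), fingers

# smoke test on a synthetic blob standing in for a hand mask
demo = np.zeros((200, 200), np.uint8)
cv2.circle(demo, (100, 100), 60, 255, -1)
print(shape_features(demo))
```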


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3548 ◽  
Author(s):  
Piotr Kaczmarek ◽  
Tomasz Mańkowski ◽  
Jakub Tomczyński

In this paper, we present the putEMG dataset, intended for the evaluation of hand gesture recognition methods based on the sEMG signal. The dataset was acquired from 44 able-bodied subjects and includes eight gestures (three full-hand gestures, four pinches, and idle). It consists of uninterrupted recordings of 24 sEMG channels from the subject's forearm, an RGB video stream, and depth camera images used for hand motion tracking. Moreover, exemplary processing scripts are also published. The putEMG dataset is available under a Creative Commons Attribution-NonCommercial 4.0 International licence (CC BY-NC 4.0). The dataset was validated with regard to sEMG amplitudes and gesture recognition performance. The classification was performed using state-of-the-art classifiers and feature sets. An accuracy of 90% was achieved for the SVM classifier utilising the RMS feature and for the LDA classifier using Hudgins' and Du's feature sets. Analysis of performance for particular gestures showed that the LDA/Du combination has significantly higher accuracy for full-hand gestures, while SVM/RMS performs better for pinch gestures. The presented dataset can be used as a benchmark for various classification methods, the evaluation of electrode localisation concepts, or the development of classification methods invariant to user-specific features or electrode displacement.
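As a sketch of the SVM/RMS baseline reported above, the snippet below computes per-channel RMS over sliding windows and trains an SVM. The 24-channel sEMG array and the eight gesture labels are synthetic stand-ins (real use would load the putEMG recordings), and the window sizes are illustrative.

```python
# Windowed RMS features from multi-channel sEMG, classified with an SVM.
import numpy as np
from sklearn.svm import SVC

def rms_features(emg, win=256, step=128):
    """emg: (n_samples, n_channels) -> (n_windows, n_channels) RMS features."""
    starts = range(0, emg.shape[0] - win + 1, step)
    return np.stack([np.sqrt(np.mean(emg[s:s + win] ** 2, axis=0)) for s in starts])

rng = np.random.default_rng(1)
emg = rng.normal(size=(10_000, 24))        # fake 24-channel recording
X = rms_features(emg)
y = rng.integers(0, 8, size=len(X))        # 8 gesture labels (fake)
clf = SVC(kernel="rbf").fit(X, y)
print("train accuracy (synthetic):", clf.score(X, y))
```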


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4566
Author(s):  
Chanhwi Lee ◽  
Jaehan Kim ◽  
Seoungbae Cho ◽  
Jinwoong Kim ◽  
Jisang Yoo ◽  
...  

The use of human gestures to interact with devices such as computers or smartphones has presented several problems. This form of interaction relies on gesture interaction technology such as Leap Motion from Leap Motion, Inc., which enables humans to use hand gestures to interact with a computer. The technology has excellent hand detection performance and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph by simply folding and opening the palm. Research on interaction with other devices via hand gestures is in progress, and studies on creating a hologram display from objects that actually exist are also underway. We propose a hand gesture recognition system that can control a tabletop holographic display based on an actual object. The depth image obtained using the Azure Kinect, a recent Time-of-Flight depth camera, is used to obtain information about the hand and hand joints by means of the deep-learning model CrossInfoNet. Using this information, we developed a real-time system that defines and recognizes gestures for basic left, right, up, and down rotation, zoom in, zoom out, and continuous rotation to the left and right.
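CrossInfoNet itself is not reproduced here; assuming it (or any hand-pose estimator) yields a per-frame palm position from the depth image, the gesture-decision step for the four basic directions can be sketched as a swipe classifier over a short trajectory. The travel threshold is illustrative.

```python
# Swipe classification from a short palm-centre trajectory.
import numpy as np

def swipe_gesture(palm_track, min_travel=80.0):
    """palm_track: (n_frames, 2) pixel positions of the palm centre."""
    delta = palm_track[-1] - palm_track[0]
    if np.linalg.norm(delta) < min_travel:
        return "NONE"
    dx, dy = delta
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"

track = np.array([[100.0, 200.0], [160.0, 205.0], [240.0, 210.0]])
print(swipe_gesture(track))   # -> RIGHT
```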


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2106 ◽  
Author(s):  
Linchu Yang ◽  
Ji’an Chen ◽  
Weihang Zhu

Dynamic hand gesture recognition is one of the most significant tools for human–computer interaction. To improve the accuracy of dynamic hand gesture recognition, this paper proposes a two-layer Bidirectional Recurrent Neural Network for recognizing dynamic hand gestures from a Leap Motion Controller (LMC). In addition, an efficient way to capture dynamic hand gestures with the LMC is identified. Dynamic hand gestures are represented by sets of feature vectors from the LMC. The proposed system has been tested on American Sign Language (ASL) datasets with 360 and 480 samples, and on the Handicraft-Gesture dataset. On the ASL dataset with 360 samples, the system achieves accuracies of 100% and 96.3% on the training and testing sets; on the ASL dataset with 480 samples, 100% and 95.2%; and on the Handicraft-Gesture dataset, 100% and 96.7%. In addition, 5-fold, 10-fold, and leave-one-out cross-validation were performed on these datasets, giving accuracies of 93.33%, 94.1%, and 98.33% (ASL, 360 samples), 93.75%, 93.5%, and 98.13% (ASL, 480 samples), and 88.66%, 90%, and 92% (Handicraft-Gesture), respectively. The developed system demonstrates similar or better performance compared to other approaches in the literature.
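A minimal PyTorch sketch of the architecture described above, a two-layer bidirectional recurrent network classifying a sequence of LMC feature vectors, looks as follows. An LSTM variant is assumed here, and the feature size and class count are placeholders.

```python
# Two-layer bidirectional LSTM classifier over per-frame feature vectors.
import torch
import torch.nn as nn

class BiRNNClassifier(nn.Module):
    def __init__(self, n_features=30, hidden=64, n_classes=10):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # classify from the last time step

model = BiRNNClassifier()
logits = model(torch.randn(4, 50, 30))    # 4 sequences, 50 frames each
print(logits.shape)                       # torch.Size([4, 10])
```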

