Controlling Media Player with Hand Gestures using Convolutional Neural Network

Author(s):  
Gayathri Devi Nagalapuram ◽  
Roopashree S ◽  
Varshashree D ◽  
Dheeraj D ◽  
Donal Jovian Nazareth

2017 ◽
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network focused on the recognition and localization of hand gestures, in this case two types of gesture (open and closed hand), so that such gestures can be recognized against dynamic backgrounds. The network is trained and validated, achieving 99.4% validation accuracy in gesture recognition and 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified in terms of recognition times, behavior on trained and untrained gestures, and performance against complex backgrounds.
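As a concrete illustration of the region-based approach, the following minimal sketch fine-tunes torchvision's off-the-shelf Faster R-CNN, used here as a stand-in for the paper's own R-CNN, for two gesture classes (open and closed hand) plus background. The backbone choice, the image size, and the example box and label are illustrative assumptions.

```python
# Sketch: a region-based detector for two hand-gesture classes, using
# torchvision's Faster R-CNN as a stand-in for the paper's R-CNN.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + open hand + closed hand (assumed labeling)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One training step: images is a list of CHW float tensors, targets a
# list of dicts holding "boxes" (N x 4, xyxy) and "labels" (N) per image.
model.train()
images = [torch.rand(3, 480, 640)]  # placeholder frame
targets = [{"boxes": torch.tensor([[100.0, 120.0, 300.0, 400.0]]),
            "labels": torch.tensor([1])}]  # 1 = open hand (assumed)
loss_dict = model(images, targets)
sum(loss_dict.values()).backward()
```

At inference time, `model.eval()` followed by `model(images)` returns predicted boxes, labels, and scores per frame, which is the localization-plus-recognition behavior evaluated in the abstract.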


2019 ◽  
Vol 10 (3) ◽  
pp. 60-73 ◽  
Author(s):  
Ravinder Ahuja ◽  
Daksh Jain ◽  
Deepanshu Sachdeva ◽  
Archit Garg ◽  
Chirag Rajput

Communicating with each other through hand gestures is simply called the language of signs. It is an accepted language for communication among deaf and mute people in this society, a community that faces many obstacles in day-to-day communication with its acquaintances. A recent study by the World Health Organization reports that a very large section of the world's population, around 360 million people or 5.3% of the total, has hearing loss. This creates a need for an automated system that converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is used on 24 hand signals of American Sign Language in order to enhance the ease of communication. OpenCV was used for supporting steps such as image preprocessing. The results demonstrate that the CNN achieves an accuracy of 99.7% on a dataset found on kaggle.com.
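A minimal sketch of the kind of CNN classifier described above, written with Keras; the layer sizes and the 28x28 grayscale input shape (as in the Sign Language MNIST dataset on Kaggle) are assumptions rather than details taken from the paper.

```python
# Sketch: a small CNN for the 24 static ASL letter signs.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(24, activation="softmax"),  # one unit per letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```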


2021 ◽  
pp. 341-349
Author(s):  
Ragapriya Saravanan ◽  
Sindhu Retnaswamy ◽  
Shirley Selvan

Author(s):  
Poonam Yerpude

Abstract: Communication is imperative for daily life. Hearing people use verbal language to communicate, while people with hearing or speech disabilities use sign language, a way of communicating through hand gestures and parts of the body instead of speaking and listening. As not all people are familiar with sign language, a language barrier exists, and much research in this field has aimed at removing it. There are mainly two ways to convert sign language into speech or text and close the gap: sensor-based techniques and image processing. In this paper we look at the image-processing technique, for which we use a Convolutional Neural Network (CNN). We have built a sign detector that recognises the sign numbers from 1 to 10; it can easily be extended to recognise other hand gestures, including alphabets (A-Z) and expressions. The model is built for Indian Sign Language (ISL).

Keywords: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Indian Sign Language (ISL), Region of Interest (ROI), Artificial Neural Network (ANN), VGG16 (CNN vision architecture model), SGD (Stochastic Gradient Descent).
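The keywords name VGG16 and SGD, which suggests a transfer-learning setup along the following lines; the input size, frozen base, head layers, and learning rate are illustrative assumptions, not values from the paper.

```python
# Sketch: VGG16 transfer learning with SGD for the 10 ISL number signs.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # reuse ImageNet features, train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # signs 1..10
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])
```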


2018 ◽  
Vol 11 (12) ◽  
pp. 547-557 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Ruben Dario Hernandez Beleno

Author(s):  
U. Mamatha

Sign language is used by deaf and mute people, but non-signers cannot understand it. To overcome this problem, we propose this system, implemented in Python. First, hand gestures are captured using a web camera. Each image is pre-processed and features are extracted from it; the extracted features are then compared with those of a reference image. If a match is found, the decision is displayed as text. Using a Convolutional Neural Network (CNN) with TensorFlow, this helps non-signers recognize the gestures easily.
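A minimal sketch of the capture-and-preprocess stage just described, using OpenCV; the fixed ROI box and the 64x64 CNN input size are assumptions rather than values from the paper.

```python
# Sketch: grab webcam frames and pre-process them for a gesture CNN.
import cv2

cap = cv2.VideoCapture(0)                        # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:350, 100:350]                # fixed hand region (assumed)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)     # pre-processing
    sample = cv2.resize(blur, (64, 64))          # CNN input for matching
    cv2.imshow("hand", sample)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```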


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Jinhyuck Kim ◽  
Sunwoong Choi

Recognizing and distinguishing a user's behavior and gestures has become important owing to the increasing use of wearable devices such as smartwatches. This study proposes a method for classifying hand gestures by emitting sound in the non-audible frequency range from a smartphone and analyzing the reflected signal. The proposed method converts the recorded reflection into an image using a short-time Fourier transform, and the resulting data are applied to a convolutional neural network (CNN) model to classify the hand gestures. The method achieved an average classification accuracy of 87.75% over 8 hand gestures. Additionally, it is confirmed that the suggested method has higher classification accuracy than other machine learning classification algorithms.
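The signal-to-image step can be sketched with SciPy's short-time Fourier transform: take the log-magnitude STFT of the recorded reflection and crop the band around the emitted tone, where gesture-induced Doppler shifts appear. The sample rate, tone frequency, window length, and the synthetic stand-in signal below are all assumptions.

```python
# Sketch: convert a recorded near-inaudible reflection into a
# spectrogram "image" suitable as CNN input.
import numpy as np
from scipy.signal import stft

FS = 48_000        # smartphone sampling rate (assumed)
F_TONE = 20_000    # emitted near-inaudible tone (assumed)

t = np.arange(FS) / FS                       # 1 s of stand-in audio
recorded = np.sin(2 * np.pi * F_TONE * t)    # replace with the real recording

f, frames, Z = stft(recorded, fs=FS, nperseg=1024, noverlap=768)
spectrogram = np.log1p(np.abs(Z))            # log-magnitude image

band = (f > F_TONE - 1_000) & (f < F_TONE + 1_000)
cnn_input = spectrogram[band, :]             # crop around the carrier
print(cnn_input.shape)                       # (frequency bins, time frames)
```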


2020 ◽  
Vol 4 (4) ◽  
pp. 20-27
Author(s):  
Md. Abdur Rahim ◽  
Jungpil Shin ◽  
Keun Soo Yun

Sign language (SL) recognition is intended to connect deaf people with the general population via a variety of perspectives, experiences, and skills that serve as a basis for the development of human-computer interaction. Hand gesture-based SL recognition encompasses a wide range of human capabilities and perspectives. Efficient hand gesture recognition is still challenging due to varying levels of illumination, gesture diversity, multiple aspects, self-identifying parts, different shapes and sizes, and complex backgrounds. In this context, we present an American Sign Language alphabet recognition system that translates sign gestures into text and creates a meaningful sentence from continuously performed gestures. We propose a segmentation technique for hand gestures and present a convolutional neural network (CNN) based on the fusion of features. The input image is captured directly from video via a low-cost device such as a webcam and is pre-processed by a filtering and segmentation technique such as the Otsu method. A CNN is then used to extract the features, which are fused in a fully connected layer. To classify and recognize the sign gestures, a well-known Softmax classifier is used. A dataset of static hand gesture images, collected in a laboratory environment, is proposed for this work. An analysis of the results shows that our proposed system achieves better recognition accuracy than other state-of-the-art systems.
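A brief sketch of the filtering-plus-Otsu pre-processing stage mentioned above, using OpenCV; the blur kernel size and the input file name are assumptions.

```python
# Sketch: filter a captured frame, then segment the hand with Otsu's
# method before handing the region to the CNN.
import cv2

frame = cv2.imread("gesture.png")              # one frame from the webcam
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)       # filtering step
# Otsu's method picks the binarization threshold automatically.
_, mask = cv2.threshold(blur, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
hand = cv2.bitwise_and(gray, gray, mask=mask)  # segmented hand region
```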

