hand gestures
Recently Published Documents


TOTAL DOCUMENTS

939
(FIVE YEARS 356)

H-INDEX

34
(FIVE YEARS 6)

Author(s):  
Poonam Yerpude

Abstract: Communication is imperative for daily life. Hearing people use spoken language to communicate, while deaf and hard-of-hearing people use sign language. Sign language is a way of communicating with hand gestures and other parts of the body instead of speaking and listening. Because not all people are familiar with sign language, a language barrier exists, and much research in this field has aimed to remove it. There are two main ways to convert sign language into speech or text and close the gap: sensor-based techniques and image processing. In this paper we examine the image-processing technique, using a Convolutional Neural Network (CNN). We have built a sign detector that recognises the sign numbers 1 to 10; it can easily be extended to recognise other hand gestures, including the alphabet (A-Z) and expressions. The model is based on Indian Sign Language (ISL). Keywords: Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Indian Sign Language (ISL), Region of Interest (ROI), Artificial Neural Network (ANN), VGG-16 (CNN vision architecture), SGD (Stochastic Gradient Descent).
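As a rough illustration of the operation at the heart of such a CNN classifier (not the authors' code; the patch, kernel, and values are invented for the example), a single 2D convolution followed by a ReLU activation can be sketched in plain NumPy:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit, the usual nonlinearity after a convolution."""
    return np.maximum(x, 0.0)

# Toy 4x4 "hand region" patch with a vertical edge, and an edge-detecting kernel.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
feature_map = relu(conv2d_valid(patch, edge_kernel))
# The response peaks along the dark-to-bright edge (column index 1).
```

A real recogniser stacks many such learned kernels, pools, and feeds a classifier head, but each layer reduces to this windowed multiply-accumulate.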


2022 ◽  
Author(s):  
Zhiwen Zheng ◽  
Nan Yu ◽  
Jingyang Zhang ◽  
Haipeng Dai ◽  
Qingshan Wang ◽  
...  

Abstract: This paper proposes Wi-ID, a WiFi-based identification system that identifies users from their unique hand gestures. The hand gestures of the popular game rock-paper-scissors serve as the system's user-authentication commands. Features are extracted from the whole sequence of three hand gestures, rather than from a single gesture as in existing methods. Dynamic time warping (DTW) is used to analyse the amplitude information in the time domain based on linear discriminant analysis (LDA), while amplitude kurtosis (AP-KU) and shape skewness (SP-SK) are extracted to analyse the energy distribution of the Wi-Fi signals in the frequency domain. Based on the contributions of the extracted features, a random-forest algorithm weights the inputs to an LSTM model. Experiments were conducted on a computer fitted with an Intel 5300 wireless networking card to evaluate the effectiveness and robustness of the Wi-ID system. The results show a per-person differentiation rate of over 92% and an average accuracy of 96%. Authorized users performing incomplete hand gestures are identified with 92% accuracy, and hostile intruders are detected with a probability of 90%. This performance demonstrates that Wi-ID achieves its aim of user authentication.
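The DTW step the abstract mentions can be sketched with the classic dynamic-programming recurrence (a generic illustration, not the Wi-ID implementation; the toy amplitude traces are invented):

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences.

    cost[i][j] holds the minimal cumulative cost of aligning
    a[:i] with b[:j] under the standard step pattern.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-shifted copy of an amplitude trace still aligns at zero cost,
# which is why DTW suits gesture signals whose timing varies per user.
template = [0.0, 1.0, 2.0, 1.0, 0.0]
shifted  = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
```

Euclidean distance would penalise the shift sample-by-sample; DTW warps the time axis so the same gesture performed at a slightly different pace still matches.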


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Khalid Twarish Alhamazani ◽  
Jalawi Alshudukhi ◽  
Talal Saad Alharbi ◽  
Saud Aljaloud ◽  
Zelalem Meraf

In recent years, alongside technological advances, new paradigms of user interaction have emerged, motivating industry to create increasingly powerful and accessible natural-user-interface devices. Depth cameras in particular have achieved high levels of user adoption; examples include the Microsoft Kinect, the Intel RealSense, and the Leap Motion Controller. These devices facilitate data acquisition for human activity recognition. Hand gestures can be static or dynamic, depending on whether they involve movement across the image sequence. Hand gesture recognition enables human-computer interaction (HCI) developers to create more immersive, natural, and intuitive experiences and interactions. However, the task is not easy, which is why it has been addressed in the literature with machine-learning techniques. The experiments carried out show very encouraging results, indicating that the chosen architecture achieves excellent parameter efficiency and prediction times; the tests are performed on a relevant dataset from the area. On this basis, the proposal's performance is analysed across different scenarios, such as lighting variation, camera movement, different types of gestures, and person-specific sensitivity or bias, among others. In this article, we look at how infrared camera images can be used to segment, classify, and recognise one-handed gestures under a variety of lighting conditions. The infrared camera was created by modifying a standard webcam and adding an infrared filter to the lens; additional infrared LED structures illuminated the scene, allowing the system to be used in various lighting conditions.
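The segmentation step such an infrared pipeline relies on can be sketched as simple intensity thresholding: under IR LED illumination the hand reflects strongly, so bright pixels are treated as foreground. This is a minimal sketch (the threshold value, frame, and `segment_hand` helper are assumptions for illustration; a real system would add morphological cleanup and contour analysis):

```python
import numpy as np

def segment_hand(ir_frame, threshold=128):
    """Segment a bright hand region from an infrared frame.

    Returns a binary foreground mask and the bounding box
    (row_min, col_min, row_max, col_max) of the region of interest,
    or None for the box if nothing exceeds the threshold.
    """
    mask = ir_frame >= threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return mask, None
    bbox = (ys.min(), xs.min(), ys.max(), xs.max())
    return mask, bbox

# Toy 6x6 frame with a bright 2x3 "hand" blob at rows 2-3, cols 1-3.
frame = np.zeros((6, 6), dtype=np.uint8)
frame[2:4, 1:4] = 200
mask, bbox = segment_hand(frame)
```

The bounding box then serves as the region of interest handed to the classifier, which is what makes controlled IR illumination attractive: the foreground/background contrast stays stable across ambient lighting conditions.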


Author(s):  
R. Jisha Raj ◽  
Smitha Dharan ◽  
T. T. Sunil

Cultural dances are practiced all over the world. Studying the performer's gestures with computer-vision techniques can aid in better understanding these dance forms and in annotating them. Bharatanatyam is a classical dance that originated in South India; its performers use hand gestures (mudras), facial expressions, and body movements to communicate the intended meaning to the audience. According to the Natyashastra, a classical text on Indian dance, Bharatanatyam has 28 Asamyukta Hastas (single-hand gestures) and 23 Samyukta Hastas (double-hand gestures). As no open datasets of Bharatanatyam dance gestures were previously available, an exhaustive open dataset of the various mudras was created, consisting of 15,396 distinct single-hand mudra images and 13,035 distinct double-hand mudra images. In this paper, we explore the dataset using various multidimensional visualization techniques: PCA, kernel PCA, locally linear embedding, multidimensional scaling, Isomap, t-SNE, and a PCA–t-SNE combination. The best visualization for exploring the dataset is obtained with the PCA–t-SNE combination.
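In a PCA–t-SNE pipeline, PCA first reduces the raw image descriptors to a modest number of dimensions, and t-SNE (e.g. scikit-learn's `sklearn.manifold.TSNE`) then embeds that reduced data in 2D for plotting. The PCA stage can be sketched via the SVD (a generic sketch, not the authors' code; the random "mudra descriptor" matrix is a stand-in):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top principal components.

    The right singular vectors of the centered data matrix are the
    principal axes; singular values come out in descending order, so
    the first output column carries the most variance.
    """
    Xc = X - X.mean(axis=0)            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores in the reduced space

# Toy stand-in for mudra image descriptors: 5 samples, 4 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Z = pca_reduce(X, 2)
```

Running t-SNE on PCA-reduced data rather than raw pixels is the usual motivation for the combination: it suppresses noise directions and makes the pairwise-distance computation inside t-SNE far cheaper.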


2021 ◽  
Author(s):  
Puru Lokendra Singh ◽  
Samidha Mridul Verma ◽  
Ankit Vijayvargiya ◽  
Rajesh Kumar
Keyword(s):  

Sign language recognition is important for natural and convenient communication between the deaf community and the hearing majority. Hand gestures are a form of nonverbal communication that makes up the bulk of communication between mute individuals, as sign language consists largely of hand gestures. Research on hand gestures has adopted many different techniques, including those based on instrumented sensor technology and on computer vision. Hand signs can be classified under many headings, such as posture versus gesture, dynamic versus static, or a hybrid of the two. This paper reviews the literature on computer-based sign language recognition approaches, their motivations, techniques, observed limitations, and suggestions for improvement.


Author(s):  
Yujie Li ◽  
Osamu Hanaoka ◽  
Shuo Yang ◽  
Seiichi Serikawa

Author(s):  
Gavin Elliott ◽  
Kevin Meehan ◽  
Jennifer Hyndman
Keyword(s):  
