Architecture Design and VLSI Implementation of 3D Hand Gesture Recognition System

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6724
Author(s):  
Tsung-Han Tsai ◽  
Yih-Ru Tsai

With advancements in technology, increasing research effort is focused on enhancing the quality and convenience of daily life. Alongside the growth of gesture control systems, many controllers, such as the keyboard and mouse, have been replaced with remote control products that are gradually becoming more intuitive for users. However, vision-based hand gesture recognition systems still face many problems. Most hand detection methods adopt a skin filter or motion filter for pre-processing, but in a noisy environment it is not easy to correctly extract the objects of interest. In this paper, a VLSI design with dual cameras is proposed to construct a depth map with a stereo matching algorithm and recognize hand gestures. The proposed system adopts an adaptive depth filter to separate foreground objects of interest from the background. We also propose dynamic gesture recognition using depth and coordinate information. The system can perform both static and dynamic gesture recognition. The ASIC design is implemented in TSMC 90 nm technology with about 47.3 K gate counts and 27.8 mW of power consumption. The average accuracy of gesture recognition is 83.98%.
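The abstract does not give the details of its adaptive depth filter, but the idea of keeping only pixels near the closest valid depth can be sketched as follows; the `margin` parameter and zero-means-invalid convention are assumptions for illustration, not the paper's design.

```python
import numpy as np

def adaptive_depth_filter(depth_map, margin=0.15):
    """Separate a near foreground object (e.g., a hand) from the background.

    The threshold adapts to each frame: pixels within `margin` (as a
    fraction of the frame's depth range) of the nearest valid depth are
    kept as foreground.  Zero depths are treated as invalid (no stereo match).
    """
    valid = depth_map > 0
    if not valid.any():
        return np.zeros_like(depth_map, dtype=bool)
    d_min = depth_map[valid].min()
    d_max = depth_map[valid].max()
    threshold = d_min + margin * (d_max - d_min)
    return valid & (depth_map <= threshold)

# Toy 4x4 depth map: a "hand" at depth ~1.0 in front of a wall at ~3.0,
# with one invalid (unmatched) pixel at depth 0.
frame = np.array([[3.0, 3.0, 3.0, 3.0],
                  [3.0, 1.0, 1.1, 3.0],
                  [3.0, 1.0, 1.0, 3.0],
                  [3.0, 3.0, 3.0, 0.0]])
mask = adaptive_depth_filter(frame)   # True only on the four near pixels
```

Because the threshold is recomputed per frame, the filter tracks the hand as it moves toward or away from the cameras, unlike a fixed depth cut-off.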

2020 ◽  
Vol 17 (4) ◽  
pp. 1764-1769
Author(s):  
S. Gobhinath ◽  
T. Vignesh ◽  
R. Pavankumar ◽  
R. Kishore ◽  
K. S. Koushik

This paper presents an overview of several segmentation techniques for hand gesture recognition. Hand gesture recognition has evolved tremendously in recent years because of its ability to support interaction with machines. Mankind tries to incorporate human gestures into modern technologies such as touch-screen interaction, virtual reality gaming, and sign language prediction. This research focuses on hand gesture recognition for sign language interpretation as a human-computer interaction application. Sign language transmits sign patterns that convey meaning through hand shapes, orientation, and movements, allowing people who cannot speak or hear to fluently express their thoughts to others. Automatic sign language recognition requires robust and accurate techniques for identifying hand signs, or a sequence of produced gestures, to help interpret their correct meaning. The hand segmentation algorithm performs segmentation using different hand detection schemes together with the required morphological processing. Many methods can be used to obtain the respective results, depending on their advantages.


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5282 ◽  
Author(s):  
Adam Ahmed Qaid MOHAMMED ◽  
Jiancheng Lv ◽  
MD. Sajjatul Islam

Recent research on hand detection and gesture recognition has attracted increasing interest due to its broad range of potential applications, such as human-computer interaction, sign language recognition, hand action analysis, driver hand behavior monitoring, and virtual reality. In recent years, several approaches have been proposed with the aim of developing a robust algorithm which functions in complex and cluttered environments. Although several researchers have addressed this challenging problem, a robust system is still elusive. Therefore, we propose a deep learning-based architecture to jointly detect and classify hand gestures. In the proposed architecture, the whole image is passed through a one-stage dense object detector to extract hand regions, which, in turn, pass through a lightweight convolutional neural network (CNN) for hand gesture recognition. To evaluate our approach, we conducted extensive experiments on four publicly available datasets for hand detection, including the Oxford, 5-signers, EgoHands, and Indian classical dance (ICD) datasets, along with two hand gesture datasets with different gesture vocabularies for hand gesture recognition, namely, the LaRED and TinyHands datasets. Here, experimental results demonstrate that the proposed architecture is efficient and robust. In addition, it outperforms other approaches in both the hand detection and gesture classification tasks.
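The data flow of the two-stage architecture described above can be sketched as follows. The detector and classifier here are dummy stand-ins (the paper uses a one-stage dense object detector and a lightweight CNN); box coordinates, class names, and thresholds are all assumptions made so the pipeline is self-contained.

```python
import numpy as np

def dummy_detector(frame):
    # Stand-in for the one-stage dense detector:
    # returns hand boxes as (x0, y0, x1, y1, score).
    return [(8, 8, 24, 24, 0.97)]

def dummy_classifier(crop):
    # Stand-in for the lightweight CNN: maps a hand crop to
    # gesture-class probabilities (e.g. fist / open palm / point).
    probs = np.array([0.1, 0.8, 0.1])
    return int(np.argmax(probs)), float(probs.max())

def detect_and_classify(frame, detector, classifier, score_thresh=0.5):
    """Stage 1: detect hand regions; stage 2: classify each cropped region."""
    results = []
    for (x0, y0, x1, y1, score) in detector(frame):
        if score < score_thresh:
            continue                       # drop low-confidence boxes
        crop = frame[y0:y1, x0:x1]         # hand region passed to the CNN
        label, conf = classifier(crop)
        results.append({"box": (x0, y0, x1, y1), "label": label, "conf": conf})
    return results

frame = np.zeros((32, 32, 3), dtype=np.uint8)
detections = detect_and_classify(frame, dummy_detector, dummy_classifier)
```

Keeping detection and classification as separate callables mirrors the paper's design choice: the heavy full-image model runs once, while the small classifier runs only on the proposed crops.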


2013 ◽  
Vol 13 (02) ◽  
pp. 1340001
Author(s):  
SIDDHARTH SWARUP RAUTARAY ◽  
ANUPAM AGRAWAL

Traditional human–computer interaction devices such as the keyboard and mouse are ineffective for interacting with virtual environment applications, because 3D applications need a new kind of interaction device. Efficient human interaction with modern virtual environments requires more natural devices. Among them, the "hand gesture" human–computer interaction modality has recently become of major interest. The main objective of gesture recognition research is to build a system that can recognize human gestures and use them to control an application. One drawback of present gesture recognition systems is application dependence, which makes it difficult to transfer one gesture control interface to multiple applications. This paper focuses on designing a hand gesture recognition system that is both vocabulary independent and adaptable to multiple applications. The designed system comprises different processing steps, such as detection, segmentation, tracking, and recognition. Vocabulary independence is achieved with a robust gesture mapping module that allows the user to cognitively map different gestures to the same command and vice versa. For performance analysis of the proposed system, accuracy, recognition rate, and command response time have been compared; these parameters were chosen because they have a vital impact on the performance of the proposed vocabulary- and application-independent hand gesture recognition system.
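The gesture mapping module described above decouples the recognizer's vocabulary from any one application's commands. A minimal sketch of that idea, with illustrative gesture and command names not taken from the paper:

```python
class GestureMapper:
    """Minimal sketch of a vocabulary-independent gesture-to-command mapper.

    Several gestures can be bound to one application command (many-to-one),
    and a gesture can be rebound at any time, so the recognizer's vocabulary
    stays independent of any single application.
    """

    def __init__(self):
        self._bindings = {}

    def bind(self, gesture, command):
        self._bindings[gesture] = command

    def command_for(self, gesture):
        # Unmapped gestures yield None rather than an error.
        return self._bindings.get(gesture)

mapper = GestureMapper()
# Two different gestures mapped to the same command (many-to-one)...
mapper.bind("swipe_left", "previous_slide")
mapper.bind("wave", "previous_slide")
# ...and the same mapper reused for a different application's command.
mapper.bind("fist", "pause_video")
```

Because only the bindings change between applications, the detection, segmentation, tracking, and recognition stages can stay untouched when the system is moved to a new application.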


2013 ◽  
Vol 09 (01) ◽  
pp. 1350007 ◽  
Author(s):  
SIDDHARTH S. RAUTARAY ◽  
ANUPAM AGRAWAL

With the increasing role of computing devices, facilitating natural human–computer interaction (HCI) will have a positive impact on their usage and acceptance as a whole. For a long time, research on HCI was restricted to techniques based on the keyboard, mouse, etc. Recently, this paradigm has changed: techniques such as vision, sound, and speech recognition allow a much richer form of interaction between user and machine. The emphasis is on providing a natural form of interface for interaction. Gestures are one of the natural forms of interaction between humans. As gesture commands are natural for humans, the development of gesture control systems for controlling devices has become a popular research topic in recent years. Researchers have proposed different gesture recognition systems that act as an interface for controlling applications. One drawback of present gesture recognition systems is application dependence, which makes it difficult to transfer one gesture control interface to different applications. This paper focuses on designing a vision-based hand gesture recognition system that is adaptive to different applications. The designed system comprises different processing steps, such as detection, segmentation, tracking, and recognition. To make the system application-adaptive, different quantitative and qualitative parameters have been taken into consideration. The quantitative parameters include the gesture recognition rate, the features extracted, and the root mean square error of the system, while the qualitative parameters include intuitiveness, accuracy, stress/comfort, computational efficiency, user tolerance, and real-time performance. These parameters have a vital impact on the performance of the proposed application-adaptive hand gesture recognition system.


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Clementine Nyirarugira ◽  
Hyo-rim Choi ◽  
TaeYong Kim

We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process for segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on pre-isolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on a digit gesture vocabulary. The proposed recognizer requires fewer computational resources, making it a good candidate for real-time applications.
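The abstract's central difficulty, deciding where one gesture ends and the next begins in a continuous trajectory, can be illustrated with a much cruder stand-in for the swarm-based segmentation: treat runs of low hand speed as boundaries. The `pause_speed` and `min_len` parameters and the toy trajectory are assumptions for illustration only.

```python
import math

def segment_trajectory(points, pause_speed=0.5, min_len=3):
    """Split a 2D hand trajectory into candidate gesture segments.

    Consecutive low-speed samples ("pauses") are treated as gesture
    boundaries, and very short runs are discarded as noise.
    """
    segments, current = [], []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        speed = math.hypot(x1 - x0, y1 - y0)
        if speed > pause_speed:
            current.append((x0, y0))       # still inside a moving stroke
        elif current:                      # pause: close the current segment
            if len(current) >= min_len:
                segments.append(current)
            current = []
    if len(current) >= min_len:            # flush the trailing segment
        segments.append(current)
    return segments

# Two strokes separated by a pause at x = 4.
pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0),
       (4, 0), (4, 0),
       (5, 0), (6, 0), (7, 0), (8, 0)]
segs = segment_trajectory(pts)   # two segments recovered
```

Blind spotting (the 94.2% setting above) is exactly the case where no assistive boundary indicator exists and the recognizer must infer these cut points itself.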


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, the low resolution of the input, and the fact that gestures are performed by the fingers. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal red, green, blue, depth (RGB-D) and optical flow data, and passes these features to a long short-term memory (LSTM) recurrent network for frame-to-frame probability generation, with a connectionist temporal classification (CTC) network for loss calculation. We calculate optical flow from the RGB data to capture the motion information present in the video. The CTC model is used to efficiently evaluate all possible alignments of a hand gesture via dynamic programming and to check frame-to-frame consistency of the visual similarity of the hand gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We used the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
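The max decoding mentioned above is the standard CTC best-path rule: take the most probable class per frame, collapse repeats, then drop blanks. A minimal sketch, with the (T, C) probability matrix invented for illustration:

```python
import numpy as np

def ctc_best_path_decode(frame_probs, blank=0):
    """Best-path (max) CTC decoding.

    `frame_probs` is a (T, C) array of per-frame class probabilities, with
    class `blank` reserved for "no gesture".  Collapse consecutive repeats,
    then drop blanks -- the standard CTC decoding rule.
    """
    best = np.argmax(frame_probs, axis=1)   # most probable class per frame
    decoded, prev = [], blank
    for label in best:
        if label != prev and label != blank:
            decoded.append(int(label))
        prev = label
    return decoded

# Per-frame probabilities over classes (blank, gesture 1, gesture 2):
# the best path is [blank, 2, 2, blank, 1, 1], which decodes to [2, 1].
frame_probs = np.array([[0.9, 0.05, 0.05],
                        [0.1, 0.10, 0.80],
                        [0.1, 0.20, 0.70],
                        [0.6, 0.20, 0.20],
                        [0.1, 0.80, 0.10],
                        [0.2, 0.70, 0.10]])
decoded = ctc_best_path_decode(frame_probs)
```

During training, the CTC loss instead sums the probability of every alignment that collapses to the target label sequence via dynamic programming; best-path decoding is only the cheap greedy approximation used at inference time.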


2020 ◽  
Vol 29 (6) ◽  
pp. 1153-1164
Author(s):  
Qianyi Xu ◽  
Guihe Qin ◽  
Minghui Sun ◽  
Jie Yan ◽  
Huiming Jiang ◽  
...  
