Hand Gesture Recognition for Differently Abled People with Message Integration

The objective of this paper is to use a webcam to track the region of interest (ROI), in particular the hand region, in the image frame in real time and to recognize hand gestures. Skin-colour detection and morphological operations are used to remove unnecessary background information from the image, and background subtraction is then applied to detect the ROI. Next, to prevent background objects or noise from affecting the ROI, the kernelized correlation filters (KCF) algorithm is used to track the detected ROI. The ROI image is then resized to 28x28 and fed into a deep convolutional neural network (CNN) to distinguish various hand gestures. Two deep CNN architectures, both modified from DenseNet, are developed in this work. The tracking and recognition procedure is then repeated to achieve real-time operation, and the system keeps running until the hand is removed from the camera's view.
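
A minimal sketch of this pipeline is shown below, assuming OpenCV with the contrib tracking module and illustrative skin-colour thresholds; it is not the authors' implementation, only the sequence they describe: skin-colour masking with morphology, KCF tracking of the detected hand ROI, and resizing the ROI to 28x28 for a CNN classifier.

```python
import cv2

def skin_mask(frame_bgr):
    """Rough skin segmentation in YCrCb space with morphological clean-up (assumed bounds)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
contours, _ = cv2.findContours(skin_mask(frame), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bbox = cv2.boundingRect(max(contours, key=cv2.contourArea))   # initial hand ROI

tracker = cv2.TrackerKCF_create()   # kernelized correlation filters tracker
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        roi = cv2.resize(frame[int(y):int(y + h), int(x):int(x + w)], (28, 28))
        # roi would now be fed to the deep CNN classifier (not shown here)
cap.release()
```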

2012 ◽  
Vol 235 ◽  
pp. 68-73
Author(s):  
Hai Bo Pang ◽  
You Dong Ding

Hand gestures provide an attractive alternative to cumbersome interface devices for human-computer interaction, and many hand gesture recognition methods based on visual analysis have been proposed. In our research, we exploit multiple cues, including divergence features, vorticity features, and the hand motion direction vector. Divergence and vorticity are derived from the optical flow for hand gesture recognition in videos, and these features are then processed with principal component analysis. The hand tracking algorithm finds the hand centroid in every frame and computes the hand motion direction vector. Finally, we introduce dynamic time warping to verify the robustness of our features. Experimental results demonstrate that the proposed approach yields a satisfactory recognition rate.
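
As an illustration of how divergence and vorticity can be derived from a dense optical flow field, the short sketch below uses Farneback optical flow and finite differences; the parameter values are assumptions, not the authors' code.

```python
import cv2
import numpy as np

def flow_features(prev_gray, curr_gray):
    """Divergence and vorticity maps from dense Farneback optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    du_dy, du_dx = np.gradient(u)        # np.gradient returns row-wise, then column-wise derivatives
    dv_dy, dv_dx = np.gradient(v)
    divergence = du_dx + dv_dy           # expansion/contraction of the motion field
    vorticity = dv_dx - du_dy            # local rotation of the motion field
    return divergence, vorticity
```

Feature vectors built from such maps could then be reduced with principal component analysis and compared across sequences with dynamic time warping, as the abstract describes.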


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6368
Author(s):  
Lianqing Zheng ◽  
Jie Bai ◽  
Xichan Zhu ◽  
Libo Huang ◽  
Chewu Shan ◽  
...  

Hand gesture recognition technology plays an important role in human-computer interaction and in-vehicle entertainment. Under in-vehicle conditions, designing gesture recognition systems is a great challenge due to variable driving conditions, complex backgrounds, and diverse gestures. In this paper, we propose a gesture recognition system based on frequency-modulated continuous-wave (FMCW) radar and a transformer for the in-vehicle environment. First, the original range-Doppler maps (RDMs), range-azimuth maps (RAMs), and range-elevation maps (REMs) of the time sequence of each gesture are obtained by radar signal processing. We then preprocess the obtained data frames with region of interest (ROI) extraction, a vibration removal algorithm, a background removal algorithm, and standardization. We propose a transformer-based radar gesture recognition network named RGTNet, which fully extracts and fuses the spatial-temporal information of the radar feature maps to classify the various gestures. The experimental results show that our method handles the eight-gesture classification task well in the in-vehicle environment, with a recognition accuracy of 97.56%.
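
The sketch below is a hypothetical, simplified PyTorch model in the spirit of the described approach: per-frame embeddings of the stacked radar maps are fused over time by a transformer encoder and classified into eight gestures. The class name and all layer sizes are assumptions; the actual RGTNet architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class RadarGestureTransformer(nn.Module):
    def __init__(self, in_channels=3, embed_dim=128, num_classes=8, num_layers=4):
        super().__init__()
        # Per-frame CNN embedding of the stacked RDM/RAM/REM maps (assumed design).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                       # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        tokens = self.frame_encoder(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        fused = self.temporal_encoder(tokens)   # temporal fusion across radar frames
        return self.classifier(fused.mean(dim=1))

# Example: a batch of 2 gesture clips, 16 radar frames each, 3 maps of 32x32.
logits = RadarGestureTransformer()(torch.randn(2, 16, 3, 32, 32))
```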


2019 ◽  
Vol 52 (1) ◽  
pp. 563-583 ◽  
Author(s):  
Fenglin Liu ◽  
Wei Zeng ◽  
Chengzhi Yuan ◽  
Qinghui Wang ◽  
Ying Wang

2016 ◽  
Vol 14 (10) ◽  
pp. 1061-1065 ◽  
Author(s):  
Chin-Shyurng Fahn ◽  
Chang-Yi Kao ◽  
Ching-Bang Yao ◽  
Meng-Luen Wu

Gestures are the simplest way of conveying a message, simpler even than verbal means, and the most primitive form of conversation. Gestures can also be the easiest and most intuitive way of communicating with a computer: they can be used to convey information to computers, robots, smart appliances, and many other kinds of machinery, and can eliminate the use of the mouse and keyboard to some extent. The gestures considered here are essentially the varying positions and orientations of the hand, which can be detected by a simple webcam attached to the computer. The image is first converted into its RGB values and then to HSV values for easier handling and feature recognition. The hand is separated from the background using feature extraction, and the resulting values are matched against the coded values. The region of interest is then computed using convexity and background subtraction; the convexity defects help to define the contour efficiently. This method is invariant to different positions or orientations of the gesture and is able to detect the number of fingers individually and efficiently.
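
A rough sketch of these steps, assuming OpenCV and illustrative HSV skin thresholds (not the paper's actual values), is given below: skin segmentation, largest-contour extraction, convex hull and convexity defects, and a simple defect-angle test as a proxy for counting raised fingers.

```python
import cv2
import numpy as np

def count_fingers(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))       # assumed skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)                   # hand contour
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(far - end)
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-6))
        if angle < np.pi / 2 and depth > 10000:                 # deep, acute defect ~ gap between fingers
            fingers += 1
    return fingers + 1 if fingers else 0
```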


2013 ◽  
Vol 11 (5) ◽  
pp. 2634-2640
Author(s):  
Hazem Khaled Mohamed ◽  
S. Sayed ◽  
El Sayed Mostafa ◽  
Hossam Ali

This paper introduces a hand gesture recognition algorithm for human-computer interaction using real-time video streaming. The background subtraction technique is used to extract the region of interest (ROI) of the hand. Fingertips are detected using logical heuristic equations applied to the hand contour, convex hull, and convexity defect points. A combination of background subtraction and the logical heuristic technique is introduced, leading to more accurate results. Experimental results show that the proposed algorithm improves fingertip detection by 52% compared to the reference model.
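
The following is only a hedged sketch of the general approach: MOG2 background subtraction (one possible choice, not necessarily the paper's) extracts the moving hand region, and candidate fingertips are taken from convex hull points above the contour centroid. The paper's specific logical heuristic equations are not reproduced.

```python
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def detect_fingertips(frame_bgr, max_tips=5):
    fg = cv2.medianBlur(bg_subtractor.apply(frame_bgr), 5)      # foreground (hand) mask
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)
    cx, cy = np.mean(hand[:, 0, :], axis=0)                     # rough palm centre
    hull = cv2.convexHull(hand)[:, 0, :]
    top_y = hull[:, 1].min()
    # Hull points well above the centroid are treated as candidate fingertips.
    tips = [tuple(int(v) for v in p) for p in hull if p[1] < cy - 0.2 * (cy - top_y)]
    return tips[:max_tips]
```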


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within gesture classes, low resolution, and the fact that gestures are performed with the fingers; because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal red, green, blue, depth (RGB-D) and optical flow data, and passes these features to a long short-term memory (LSTM) recurrent network for frame-to-frame probability generation, with a connectionist temporal classification (CTC) network for loss calculation. We compute optical flow from the RGB data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the hand gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class, and the frame with the highest probability value is selected by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
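
A conceptual PyTorch sketch of this pipeline, not the authors' network, is shown below: a small 2D CNN extracts per-frame features from the stacked multimodal channels (RGB, depth, and optical flow are assumed to be stacked into six channels), an LSTM produces frame-wise class log-probabilities, and the CTC loss scores the unsegmented sequence against a per-clip gesture label.

```python
import torch
import torch.nn as nn

class GestureCTCNet(nn.Module):
    def __init__(self, in_channels=6, hidden=256, num_classes=20):
        super().__init__()                      # num_classes includes the CTC blank (index 0)
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                   # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)               # frame-to-frame temporal modelling
        return self.head(out).log_softmax(-1)   # (batch, time, classes)

model = GestureCTCNet()
clips = torch.randn(2, 30, 6, 64, 64)           # RGB + depth + 2-channel optical flow (assumed stacking)
log_probs = model(clips).permute(1, 0, 2)       # CTCLoss expects (time, batch, classes)
targets = torch.tensor([3, 7])                  # one gesture label per clip
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((2,), 30),
                           target_lengths=torch.full((2,), 1))
```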

