gesture spotting
Recently Published Documents


TOTAL DOCUMENTS: 54 (FIVE YEARS: 8)

H-INDEX: 12 (FIVE YEARS: 2)

2021 ◽  
Vol 11 (10) ◽  
pp. 4689
Author(s):  
Ngoc-Hoang Nguyen ◽  
Tran-Dac-Thinh Phan ◽  
Soo-Hyung Kim ◽  
Hyung-Jeong Yang ◽  
Guee-Sang Lee

This paper presents a novel approach to continuous dynamic hand gesture recognition. Our approach contains two main modules: gesture spotting and gesture classification. First, the gesture spotting module pre-segments the video sequence of continuous gestures into isolated gestures. Second, the gesture classification module identifies the segmented gestures. In the gesture spotting module, the motions of the hand palm and fingers are fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network for gesture spotting. In the gesture classification module, three residual 3D Convolutional Neural Networks based on ResNet architectures (3D_ResNet) and one Long Short-Term Memory (LSTM) network are combined to efficiently exploit multiple data channels such as RGB, optical flow, depth, and the 3D positions of key joints. The promising performance of our approach is demonstrated through experiments on three public datasets: the ChaLearn LAP ConGD dataset, 20BN-Jester, and the NVIDIA Dynamic Hand Gesture dataset. Our approach outperforms state-of-the-art methods on the ChaLearn LAP ConGD dataset.
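The pre-segmentation step described above can be illustrated with a minimal sketch: assume the Bi-LSTM has already produced a per-frame gesture probability, and only the thresholding/grouping that turns those scores into isolated gesture segments is shown. The threshold and minimum length are hypothetical parameters, not values from the paper.

```python
from typing import List, Tuple

def spot_gestures(scores: List[float], threshold: float = 0.5,
                  min_len: int = 2) -> List[Tuple[int, int]]:
    """Group per-frame gesture probabilities into isolated gesture
    segments, returned as (start, end) frame indices (end exclusive).
    Runs below `min_len` frames are discarded as noise."""
    segments = []
    start = None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # a gesture run begins
        elif s < threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # close the run
            start = None
    if start is not None and len(scores) - start >= min_len:
        segments.append((start, len(scores)))  # run reaches the end
    return segments
```

Each returned segment would then be passed to the classification module as one isolated gesture.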


2021 ◽  
Vol 11 (4) ◽  
pp. 1933
Author(s):  
Hiroomi Hikawa ◽  
Yuta Ichikawa ◽  
Hidetaka Ito ◽  
Yutaka Maeda

In this paper, a real-time dynamic hand gesture recognition system with a gesture spotting function is proposed. In the proposed system, input video frames are converted to feature vectors, which are used to form a posture sequence vector that represents the input gesture. Gesture identification and gesture spotting are then carried out by the self-organizing map (SOM)-Hebb classifier. The gesture spotting function detects the end of a gesture from the vector distance between the posture sequence vector and the winner neuron's weight vector. The proposed gesture recognition method was tested by simulation and in a real-time gesture recognition experiment. Results revealed that the system could recognize nine types of gestures with an accuracy of 96.6%, and that it successfully output the recognition result at the end of each gesture using the spotting result.
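The distance-based spotting rule can be sketched in a toy form: find the winner (the prototype weight vector closest to the current posture sequence vector) and flag the gesture end when that distance falls below a threshold. The prototype vectors and threshold below are illustrative stand-ins, not the SOM-Hebb weights from the paper.

```python
import math

def classify_and_spot(posture_seq, weights, dist_threshold):
    """Return (winner label, gesture_ended) for one posture sequence
    vector. `weights` maps class labels to prototype vectors; the
    gesture is considered ended when the winner distance is small."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    winner = min(weights, key=lambda c: dist(posture_seq, weights[c]))
    d = dist(posture_seq, weights[winner])
    return winner, d <= dist_threshold
```

In the actual system the prototypes are learned by the SOM and associated with labels via Hebbian connections; here they are given directly to keep the sketch self-contained.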


2021 ◽  
Vol 14 (1) ◽  
pp. 70-91
Author(s):  
Ananya Choudhury ◽  
Kandarpa Kumar Sarma

Automatic gesture spotting and segmentation, i.e., determining meaningful gesture patterns in continuous gesture-based character sequences, is a challenging task. This paper proposes a vision-based method that simultaneously handles hand gesture spotting and segmentation of gestural characters embedded in a continuous character stream, employing a hybrid geometric and statistical feature set. This framework forms an important constituent of gesture-based character recognition (GBCR) systems, which have lately gained tremendous demand as assistive aids for overcoming the restraints faced by people with physical impairments. The performance of the proposed system is validated on the vowels and numerals of the Assamese vocabulary. Another feature of the proposed system is an effective hand segmentation module, which enables it to handle complex background settings.
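The idea of a hybrid geometric and statistical feature set can be illustrated with a small sketch over a 2D gesture trajectory. The specific features below (bounding-box aspect ratio, centroid, coordinate standard deviations) are hypothetical examples of the two feature families, not the exact set used in the paper.

```python
import statistics

def hybrid_features(points):
    """Compute a toy hybrid feature vector for one gestural stroke:
    geometric features (centroid, bounding-box aspect ratio) plus
    statistical features (per-axis population stdev)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return {
        "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
        "aspect": width / height if height else 0.0,
        "x_std": statistics.pstdev(xs),
        "y_std": statistics.pstdev(ys),
    }
```

Concatenating both families into one vector is what lets a classifier exploit shape cues and distributional cues at the same time.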


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 528 ◽  
Author(s):  
Gibran Benitez-Garcia ◽  
Muhammad Haris ◽  
Yoshiyuki Tsuda ◽  
Norimichi Ukita

Gesture spotting is an essential task for recognizing finger gestures used to control in-car touchless interfaces. Automated methods for this task must detect the video segments where gestures are observed, discard natural hand movements that may look like target gestures, and work online. In this paper, we address these challenges with a recurrent neural architecture for online finger gesture spotting. We propose a multi-stream network merging hand and hand-location features, which helps discriminate target gestures from natural hand movements, since the two may not occur in the same 3D spatial location. Our multi-stream recurrent neural network (RNN) recurrently learns semantic information, allowing it to spot gestures online in long untrimmed video sequences. To validate our method, we collected a finger gesture dataset in an in-vehicle scenario with an autonomous car: 226 videos with more than 2100 continuous instances were captured with a depth sensor. On this dataset, our gesture spotting approach outperforms state-of-the-art methods, improving recall and precision by about 10% and 15%, respectively. Furthermore, we demonstrate that, combined with an existing gesture classifier (a 3D Convolutional Neural Network), our proposal achieves better performance than previous hand gesture recognition methods.
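Recall and precision for spotting, as reported above, are typically computed by matching predicted temporal segments to ground-truth segments. A minimal sketch of one common scheme, greedy matching by temporal intersection-over-union (IoU), is shown below; the 0.5 IoU threshold is an illustrative choice, not necessarily the paper's protocol.

```python
def iou(a, b):
    """Temporal IoU of two (start, end) segments."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def spotting_recall_precision(pred, gt, iou_thr=0.5):
    """Greedily match each ground-truth segment to at most one
    unmatched prediction whose IoU clears the threshold, then
    derive recall and precision from the true-positive count."""
    matched = set()
    tp = 0
    for g in gt:
        for i, p in enumerate(pred):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    recall = tp / len(gt) if gt else 0.0
    precision = tp / len(pred) if pred else 0.0
    return recall, precision
```

A false positive here is a predicted segment, such as a natural hand movement mistaken for a gesture, that matches no ground-truth instance.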


Author(s):  
Gibran Benitez-Garcia ◽  
Muhammad Haris ◽  
Yoshiyuki Tsuda ◽  
Norimichi Ukita

Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3986 ◽  
Author(s):  
Wei-Chieh Chuang ◽  
Wen-Jyi Hwang ◽  
Tsung-Ming Tai ◽  
De-Rong Huang ◽  
Yun-Jie Jhang

This work presents a novel continuous finger gesture recognition system based on flex sensors, able to accurately recognize a sequence of gestures. Wireless smart gloves equipped with flex sensors were implemented to collect the training and testing sets. Given the sensory data acquired from the smart gloves, a gated recurrent unit (GRU) network was adopted for gesture spotting. During GRU training, the movements of different fingers and the transitions between successive gestures were taken into consideration. On the basis of the gesture spotting results, maximum a posteriori (MAP) estimation was carried out for the final gesture classification. Because of the effectiveness of the proposed spotting scheme, accurate gesture recognition was achieved even for complicated transitions between successive gestures. The experimental results show that the proposed system is an effective alternative for robust recognition of sequences of finger gestures.
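The MAP classification step on top of spotting results can be sketched as follows: pick the gesture class maximizing prior times likelihood. The class labels, priors, and likelihood values below are illustrative; in the actual system the likelihoods would come from the GRU's per-segment scores.

```python
def map_classify(likelihoods, priors):
    """Maximum a posteriori classification: posterior(c) is
    proportional to prior(c) * likelihood(c). Returns the winning
    label and the normalized posterior distribution."""
    posts = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(posts.values())
    posterior = {c: v / total for c, v in posts.items()} if total else posts
    return max(posterior, key=posterior.get), posterior
```

Note how a strong prior can override the raw likelihood, which is useful when transitions between gestures make the per-segment scores ambiguous.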


Author(s):  
Yasutomo Kawanishi ◽  
Chisato Toriyama ◽  
Tomokazu Takahashi ◽  
Daisuke Deguchi ◽  
Ichiro Ide ◽  
...  
