Gesture Recognition Via Learning Deep Spatio-Temporal Features In Wi-Fi Sensing

Author(s):  
Jianxiao Xie ◽  
Wei Ye ◽  
Kai Xu

Abstract The Internet of Things (IoT) is expected to incorporate massive numbers of machine-type communication (MTC) devices, such as vehicles, sensors, and wearables, which give rise to a large number of application tasks; the data these devices collect must be processed in a timely, reliable, and efficient manner. Gesture recognition has enabled IoT applications such as human-computer interaction and virtual reality. In this work, we propose a cross-domain device-free gesture recognition (DFGR) model that exploits a 3D-CNN to learn spatiotemporal features in Wi-Fi sensing. To adapt the sensing data to the 3D model, we perform 3D data segmentation and supplementation in addition to signal denoising and time-frequency transformation. We demonstrate that the proposed model outperforms the state-of-the-art method for DFGR even across three domain factors simultaneously, and that its comparatively simple hierarchical structure makes it easy to converge and convenient to train.
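
The abstract does not spell out the network layout, so the following is only a minimal PyTorch sketch of a 3D-CNN consuming time-frequency Wi-Fi sensing data; the input shape (time segments × frequency bins × antenna links), the layer sizes, and the six gesture classes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a 3D-CNN for spatiotemporal feature learning from
# Wi-Fi time-frequency data. All sizes below are assumptions.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, num_gestures: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # joint space-time conv
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse remaining spatio-temporal dims
        )
        self.classifier = nn.Linear(32, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time_segments, freq_bins, antenna_links)
        return self.classifier(self.features(x).flatten(1))

# Example: 8 samples, 30 time segments, 64 frequency bins, 3 antenna links
logits = Gesture3DCNN()(torch.randn(8, 1, 30, 64, 3))
```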

2021 ◽  
Vol 18 (2) ◽  
pp. 186-199
Author(s):  
Jie Wang ◽  
Zhouhua Ran ◽  
Qinghua Gao ◽  
Xiaorui Ma ◽  
Miao Pan ◽  
...  

2017 ◽  
Author(s):  
G. Quiroz

One of the most interesting brain-machine interface (BMI) applications is the control of assistive devices for the rehabilitation of neuromotor pathologies. This means that assistive devices (prostheses, orthoses, or exoskeletons) must be able to detect the user's motion intention through the acquisition and interpretation of electroencephalographic (EEG) signals. Such interpretation is based on the time, frequency, or spatial features of the EEG signals. For this reason, this paper proposes a coherence-based EEG study during locomotion that, together with graph theory, makes it possible to establish spatio-temporal parameters characteristic of the task. The results show that, alongside the temporal features of the signal, spatial patterns can be found with which to classify the motion tasks of interest. In this manner, connectivity analysis combined with graphs provides reliable information about the spatio-temporal characteristics of the neural activity, revealing a dynamic connectivity pattern during locomotion tasks.
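
As a rough illustration of the coherence-plus-graph pipeline described above, the sketch below estimates magnitude-squared coherence between EEG channel pairs with SciPy, keeps strongly coupled pairs as weighted graph edges, and summarizes the spatial pattern with a graph-theoretic measure. The montage, sampling rate, frequency band, and 0.5 threshold are assumptions for illustration, not the paper's parameters.

```python
# Hedged sketch: pairwise EEG coherence -> thresholded connectivity graph.
import numpy as np
import networkx as nx
from scipy.signal import coherence
from itertools import combinations

fs = 250.0                                  # assumed sampling rate (Hz)
channels = ["C3", "Cz", "C4", "Pz"]         # hypothetical motor-area montage
eeg = np.random.randn(len(channels), 10 * int(fs))  # stand-in 10 s EEG epoch

G = nx.Graph()
for (i, a), (j, b) in combinations(enumerate(channels), 2):
    f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=256)
    band = cxy[(f >= 8) & (f <= 30)].mean()  # mean coherence in mu/beta band
    if band > 0.5:                           # keep only strongly coupled pairs
        G.add_edge(a, b, weight=band)

# Graph-theoretic descriptors summarize the spatial connectivity pattern
print(nx.degree_centrality(G))
```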


2021 ◽  
Author(s):  
Zhenyue Gao ◽  
Jianqiang Xue ◽  
Jianxing Zhang ◽  
Wendong Xiao

Abstract Accurate sensing and understanding of gestures can improve the quality of human-computer interaction, and has great theoretical significance and application potential in fields such as smart homes, assisted medical care, and virtual reality. Device-free wireless gesture recognition based on WiFi Channel State Information (CSI) requires no wearable sensors and offers a series of advantages: it works in non-line-of-sight scenarios, is low cost, preserves personal privacy, and functions in the dark. Although most current WiFi-CSI-based gesture recognition approaches achieve good performance, they have difficulty adapting to new domains. This paper therefore proposes ML-WiGR, an approach for device-free gesture recognition in cross-domain applications. ML-WiGR applies convolutional neural networks (CNN) and long short-term memory (LSTM) networks as the basic gesture recognition model to extract spatial and temporal features. Combined with a meta-learning training mechanism, the approach adaptively adjusts the learning rate and meta learning rate during training and optimizes the initial parameters of the basic model, so that only a few samples and several iterations are needed to adapt to a new domain. In experiments under a variety of scenarios, ML-WiGR achieves performance comparable to existing approaches with only a small number of training samples in cross-domain settings.
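
Below is a hedged sketch of the kind of CNN-plus-LSTM base model ML-WiGR builds on: a small CNN extracts spatial features from each CSI frame and an LSTM models their temporal evolution. All shapes and layer sizes are assumptions; in ML-WiGR the initial parameters of such a model would additionally be meta-optimized so that a few gradient steps adapt it to a new domain.

```python
# Rough sketch of a CNN-plus-LSTM gesture model over CSI frame sequences.
# Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_gestures: int = 6, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(            # spatial features per CSI frame
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 16 * 4 * 4 = 256
        )
        self.lstm = nn.LSTM(256, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_gestures)

    def forward(self, x):                    # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(f)                # temporal dynamics across frames
        return self.head(out[:, -1])         # classify from last time step

logits = CNNLSTM()(torch.randn(4, 20, 1, 30, 30))  # 4 samples, 20 frames each
```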


2020 ◽  
Vol 10 (11) ◽  
pp. 3680 ◽  
Author(s):  
Chunyong Ma ◽  
Shengsheng Zhang ◽  
Anni Wang ◽  
Yongyang Qi ◽  
Ge Chen

Dynamic hand gesture recognition based on one-shot learning requires full assimilation of the motion features from only a few annotated samples. However, effectively extracting the spatio-temporal features of hand gestures remains a challenging issue. This paper proposes skeleton-based dynamic hand gesture recognition using an enhanced network (GREN) based on one-shot learning, which improves the memory-augmented neural network so that it can rapidly assimilate the motion features of dynamic hand gestures. In addition, the network effectively combines and stores the features shared between dissimilar classes, which lowers the prediction error caused by unnecessary hyper-parameter updates and improves recognition accuracy as the number of categories increases. The public dynamic hand gesture database (DHGD) is used to compare the GREN network against the state of the art: although only 30% of the dataset was used for training, the accuracy of skeleton-based dynamic hand gesture recognition reached 82.29% under one-shot learning. Experiments on the Microsoft Research Asia (MSRA) hand gesture dataset verified the robustness of the GREN network. These results demonstrate that the GREN network is feasible for skeleton-based dynamic hand gesture recognition based on one-shot learning.
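
The sketch below illustrates the core memory-augmented idea in a one-shot regime: embed each gesture, store one embedding per class in an external memory, and classify a new gesture by cosine similarity to the stored keys. The embedding dimension and the read rule are illustrative assumptions, not GREN's exact design.

```python
# Hedged sketch of a memory read for one-shot classification.
import torch
import torch.nn.functional as F

def memory_read(query: torch.Tensor, keys: torch.Tensor,
                labels: torch.Tensor) -> torch.Tensor:
    """Return the label stored at the memory slot most similar to the query."""
    sims = F.cosine_similarity(query.unsqueeze(0), keys, dim=1)
    return labels[sims.argmax()]

# Memory holding one embedded example per gesture class (one-shot regime)
keys = F.normalize(torch.randn(5, 128), dim=1)   # 5 stored gesture embeddings
labels = torch.arange(5)                         # their class labels
query = F.normalize(torch.randn(128), dim=0)     # embedding of a new gesture
print(memory_read(query, keys, labels).item())
```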

