Human Action Recognition Using Multilevel Depth Motion Maps

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 41811-41822 ◽  
Author(s):  
Xu Weiyao ◽  
Wu Muqing ◽  
Zhao Min ◽  
Liu Yifeng ◽  
Lv Bo ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3642
Author(s):  
Mohammad Farhad Bulbul ◽  
Sadiya Tabussum ◽  
Hazrat Ali ◽  
Wenli Zheng ◽  
Mi Young Lee ◽  
...  

This paper proposes an action recognition framework for depth map sequences based on the 3D Space-Time Auto-Correlation of Gradients (STACOG) algorithm. First, each depth map sequence is split into two sets of sub-sequences with two different frame lengths. Second, Depth Motion Map (DMM) sequences are generated from each set and fed into STACOG to compute auto-correlation feature vectors. The two resulting feature vectors, one per set of sub-sequences, are passed separately to an L2-regularized Collaborative Representation Classifier (L2-CRC), yielding two sets of residual values. Next, the Logarithmic Opinion Pool (LOGP) rule combines the two L2-CRC outcomes and assigns an action label to the depth map sequence. Finally, the proposed framework is evaluated on three benchmark datasets: MSR-Action3D, DHA, and UTD-MHAD. The experimental results are compared with state-of-the-art approaches to demonstrate the effectiveness of the framework, and its computational efficiency is analyzed on all three datasets to assess its suitability for real-time operation.
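As a rough illustration of the classification stage described above, the following NumPy sketch shows how an L2-regularized collaborative representation coding step and a Logarithmic Opinion Pool fusion of two residual sets could be implemented. The regularization value, the softmax mapping from residuals to pseudo-posteriors, and all function names are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def l2_crc_residuals(X, labels, y, lam=1e-3):
    """L2-regularized collaborative representation: code the test sample y
    over the whole training dictionary X (features x samples), then measure
    the class-wise reconstruction residuals."""
    # closed-form coding vector: (X^T X + lam*I)^-1 X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    residuals = {}
    for c in np.unique(labels):
        idx = labels == c
        residuals[c] = np.linalg.norm(y - X[:, idx] @ alpha[idx])
    return residuals

def logp_fusion(residual_sets, weights=None):
    """Logarithmic Opinion Pool: map each residual set to a class posterior
    (softmax over negative residuals -- an assumption, the abstract does not
    spell out this mapping) and combine them by a weighted sum of
    log-probabilities."""
    classes = sorted(residual_sets[0])
    weights = weights or [1.0 / len(residual_sets)] * len(residual_sets)
    log_post = np.zeros(len(classes))
    for w, res in zip(weights, residual_sets):
        r = np.array([res[c] for c in classes])
        p = np.exp(-r) / np.exp(-r).sum()          # residual -> pseudo-posterior
        log_post += w * np.log(p + 1e-12)
    return classes[int(np.argmax(log_post))]
```

In this sketch, the two residual sets (one per sub-sequence length) would be merged with a call such as `logp_fusion([res_short, res_long])` to obtain the final action label.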


2017 ◽  
Vol 2017 ◽  
pp. 1-6
Author(s):  
Shirui Huo ◽  
Tianrui Hu ◽  
Ce Li

Human action recognition is an important and challenging task. Projecting a depth sequence onto three depth motion maps (DMMs) and extracting deep convolutional neural network (DCNN) features yields a discriminative descriptor that characterizes the spatiotemporal information of an action. In this paper, a unified improved collaborative representation framework is proposed in which the probability that a test sample belongs to the collaborative subspace of each class is well defined and can be calculated. The improved collaborative representation classifier (ICRC), based on l2-regularized collaborative representation, maximizes the likelihood that a test sample belongs to each class; a theoretical investigation shows that ICRC reaches its final classification by computing this likelihood for every class. Coupled with the DMM and DCNN features, experiments on depth-based action recognition with the MSRAction3D and MSRGesture3D datasets demonstrate that the proposed distance-based representation classifier achieves superior performance over state-of-the-art methods, including SRC, CRC, and SVM.
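To make the DMM construction mentioned above concrete, here is a minimal NumPy sketch of one common formulation, in which each depth frame is projected onto the front, side, and top planes and absolute differences between consecutive projections are accumulated. The binary occupancy projections and the number of depth bins are generic choices for illustration and may differ from the exact procedure used by the authors.

```python
import numpy as np

def depth_motion_maps(depth_seq, depth_bins=256):
    """Accumulate |map_v^(i+1) - map_v^(i)| over the front (x-y),
    side (y-z) and top (z-x) projections of a depth sequence of
    shape (T, H, W)."""
    T, H, W = depth_seq.shape
    dmm_front = np.zeros((H, W))
    dmm_side = np.zeros((H, depth_bins))
    dmm_top = np.zeros((depth_bins, W))
    prev = None
    for t in range(T):
        d = depth_seq[t].astype(float)
        scale = d.max() if d.max() > 0 else 1.0
        z = np.clip((d / scale * (depth_bins - 1)).astype(int), 0, depth_bins - 1)
        front = d
        side = np.zeros((H, depth_bins))
        side[np.arange(H)[:, None], z] = 1.0      # occupied (row, depth-bin) cells
        top = np.zeros((depth_bins, W))
        top[z, np.arange(W)[None, :]] = 1.0       # occupied (depth-bin, column) cells
        if prev is not None:
            dmm_front += np.abs(front - prev[0])
            dmm_side += np.abs(side - prev[1])
            dmm_top += np.abs(top - prev[2])
        prev = (front, side, top)
    return dmm_front, dmm_side, dmm_top
```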


2019 ◽  
Vol 5 (10) ◽  
pp. 82 ◽  
Author(s):  
Mahmoud Al-Faris ◽  
John Chiverton ◽  
Yanyan Yang ◽  
David Ndzi

Human action recognition (HAR) is an important yet challenging task. This paper presents a novel method in which fuzzy weight functions are used in the computation of depth motion maps (DMMs), together with motion information computed over multiple temporal lengths. The resulting features are referred to as fuzzy weighted multi-resolution DMMs (FWMDMMs). This formulation allows various aspects of individual actions to be emphasized and helps to characterise the importance of the temporal dimension, which is needed to overcome, e.g., variations in the time over which a single type of action might be performed. A deep convolutional neural network (CNN) motion model is created and trained to extract discriminative and compact features. Transfer learning is also used to extract spatial information from RGB and depth data using the AlexNet network. Different late fusion techniques are then investigated to fuse the deep motion model with the spatial network, resulting in a spatio-temporal HAR model capable of recognising both human actions and human–object interactions. Three public domain datasets are used to evaluate the proposed solution, and the experimental results demonstrate the robustness of this approach compared with state-of-the-art algorithms.
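As a small illustration of the kind of score-level late fusion explored above, the sketch below combines per-model class probabilities (e.g., from the motion CNN and the AlexNet-based spatial streams) with either a weighted average or a weighted product rule. The specific rules, weights, and function names here are generic assumptions; the fusion variants actually compared in the paper may differ.

```python
import numpy as np

def late_fuse(score_list, weights=None, rule="average"):
    """Score-level late fusion of per-model class probabilities.
    `score_list` holds one (num_classes,) softmax vector per model."""
    scores = np.stack(score_list)                  # (num_models, num_classes)
    weights = np.ones(len(score_list)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    if rule == "average":
        fused = weights @ scores                   # weighted arithmetic mean of scores
    elif rule == "product":
        fused = np.prod(scores ** weights[:, None], axis=0)  # weighted geometric mean
    else:
        raise ValueError(f"unknown fusion rule: {rule}")
    return int(np.argmax(fused))
```

For example, `late_fuse([p_motion, p_rgb, p_depth], rule="product")` would merge three streams into a single predicted class index.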


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 5597-5608 ◽  
Author(s):  
Runwei Ding ◽  
Qinqin He ◽  
Hong Liu ◽  
Mengyuan Liu
