Human Action Recognition Using Motion History Image Based Temporal Segmentation

Author(s):  
Shou-Jen Lin ◽  
Mei-Hsuan Chao ◽  
Chao-Yang Lee ◽  
Chu-Sing Yang

A human action recognition system based on image depth is proposed in this paper. Depth features are not easily disturbed by noise, and this characteristic allows the system to quickly extract foreground targets. Moreover, the target data, namely depth and two-dimensional (2D) data, are projected onto three orthogonal planes. In this manner, the depth motion along the optical axis can clearly describe the action trajectory. Based on changes in motion energy and the angle variations of motion orientations, the temporal segmentation (TS) method automatically segments a complex action into several simple movements. Three-dimensional (3D) data are further applied to acquire the three-viewpoint (3V) motion history trajectory, whereby a target’s motion is described through the motion history images (MHIs) from the 3Vs. Weightings corresponding to the gradients of the MHIs are included to determine the viewpoint that best describes the target’s motion. In terms of feature extraction, the application of multi-resolution motion history histograms effectively reduces the computational load while achieving a high recognition rate. Experimental results demonstrate that the proposed method can effectively solve the self-occlusion problem.
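
The full pipeline above projects depth onto three orthogonal planes before building three-view MHIs; as a minimal sketch of just the underlying MHI update rule (assuming simple frame differencing for the motion mask, with hypothetical values for the history length tau and the motion threshold), one could write:

```python
import numpy as np
import cv2

def update_mhi(mhi, prev_gray, curr_gray, tau=30, motion_thresh=32):
    """Update a motion history image (MHI) from two consecutive grayscale frames.

    Pixels where motion is detected are reset to tau (the history length);
    all other pixels decay by one step toward zero.
    """
    diff = cv2.absdiff(curr_gray, prev_gray)  # frame differencing as an assumed motion cue
    _, motion_mask = cv2.threshold(diff, motion_thresh, 1, cv2.THRESH_BINARY)
    mhi = np.where(motion_mask > 0, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi.astype(np.float32)
```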

Author(s):  
MARC BOSCH-JORGE ◽  
ANTONIO-JOSÉ SÁNCHEZ-SALMERÓN ◽  
CARLOS RICOLFE-VIALA

The aim of this work is to present a vision-based human action recognition system adapted to constrained embedded devices, such as smartphones. Basically, vision-based human action recognition combines feature tracking, descriptor extraction and subsequent classification of image representations, with a color-based identification tool to distinguish between multiple human subjects. Simple descriptor sets were evaluated to optimize recognition rate and performance, and two-dimensional (2D) descriptors were found to be effective. These sets, installed on the latest phones, can recognize human actions in videos in less than one second with a success rate of over 82%.
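
The abstract does not specify which descriptor set was used; purely as a rough illustration of lightweight 2D descriptors suited to embedded devices, a sketch of sparse feature tracking that yields per-feature 2D displacement vectors (the parameter values are arbitrary placeholders) might look like:

```python
import cv2
import numpy as np

def track_2d_displacements(prev_gray, curr_gray, max_corners=100):
    """Track sparse corner features between two frames and return their 2D displacement vectors."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    return (nxt[good] - pts[good]).reshape(-1, 2)  # per-feature (dx, dy) descriptors
```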


2020 ◽  
Vol 29 (12) ◽  
pp. 2050190
Author(s):  
Amel Ben Mahjoub ◽  
Mohamed Atri

Action recognition is an active area of computer vision. In the last few years, there has been growing interest in deep learning networks such as Long Short-Term Memory (LSTM) architectures due to their efficiency in processing long time sequences. In light of these recent developments in deep neural networks, there is now considerable interest in developing accurate action recognition approaches with low complexity. This paper introduces a method for learning depth activity videos based on the LSTM and classification fusion. The first step consists in extracting compact depth video features. We start with the calculation of Depth Motion Maps (DMM) from each sequence. Then we encode and concatenate contour and texture DMM characteristics using the histogram-of-oriented-gradients and local-binary-patterns descriptors. The second step is depth video classification based on a naive Bayes fusion approach. Three classifiers, namely the collaborative representation classifier, the kernel-based extreme learning machine and the LSTM, are trained separately to obtain classification scores. Finally, we fuse the classification score outputs of all classifiers with the naive Bayesian method to obtain the final predicted label. Our proposed method achieves a significant improvement in recognition rate compared to previous work on the Kinect v2 and UTD-MHAD human action datasets.
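
As a minimal sketch of the first step only (a single-view DMM rather than the three projected views, with assumed HOG and LBP parameters), the contour-plus-texture descriptor could be computed along these lines:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def depth_motion_map(depth_frames):
    """Accumulate absolute frame-to-frame differences of a depth sequence (single front view only)."""
    dmm = np.zeros_like(depth_frames[0], dtype=np.float32)
    for prev, curr in zip(depth_frames[:-1], depth_frames[1:]):
        dmm += np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return dmm

def dmm_descriptor(dmm):
    """Concatenate HOG (contour) and LBP-histogram (texture) features of a DMM."""
    hog_feat = hog(dmm, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(dmm, P=8, R=1, method="uniform")  # uniform codes take values 0..9
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_feat, lbp_hist])
```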


2013 ◽  
Vol 18 (2-3) ◽  
pp. 49-60 ◽  
Author(s):  
Damian Dudziński ◽  
Tomasz Kryjak ◽  
Zbigniew Mikrut

In this paper a human action recognition algorithm is described which uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The performed tests indicate that this approach achieves an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
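
A minimal sketch of the front end of such a pipeline (background subtraction with shadow removal followed by simple geometric silhouette features), using OpenCV's MOG2 subtractor as an assumed stand-in for the paper's background-generation method:

```python
import cv2

# MOG2 marks shadow pixels as 127 when detectShadows=True, so they can be thresholded out.
bg_sub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def silhouette_features(frame_bgr):
    """Foreground mask with shadows removed, then simple geometric features of the largest silhouette."""
    mask = bg_sub.apply(frame_bgr)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # keep only true foreground (255)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    return {"area": cv2.contourArea(c),
            "aspect_ratio": w / float(h),
            "centroid_y": y + h / 2.0}
```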


Video-based human action recognition has attracted increasing attention from researchers and is prominent in the fields of computer vision and pattern recognition. In this paper we present a new approach to suppress the background and to extract 2D data of the foreground human object from the video sequence. A combination of convex hull area, convex hull perimeter, solidity and eccentricity is used to represent the feature vector. Experiments are conducted on the Weizmann video dataset to assess the system's performance. The discriminative nature of the feature vectors ensures accuracy in action recognition.
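
A minimal sketch of how such a feature vector could be computed from a binary silhouette with OpenCV (an illustration under assumed conventions, not the authors' exact implementation):

```python
import cv2
import numpy as np

def shape_feature_vector(binary_silhouette):
    """Convex-hull area and perimeter, solidity and eccentricity of the largest foreground blob."""
    contours, _ = cv2.findContours(binary_silhouette, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(c)
    hull_area = cv2.contourArea(hull)
    hull_perimeter = cv2.arcLength(hull, closed=True)
    solidity = cv2.contourArea(c) / hull_area if hull_area > 0 else 0.0
    if len(c) >= 5:  # fitEllipse needs at least 5 contour points
        _center, axes, _angle = cv2.fitEllipse(c)
        major, minor = max(axes), min(axes)
        eccentricity = np.sqrt(1.0 - (minor / major) ** 2) if major > 0 else 0.0
    else:
        eccentricity = 0.0
    return np.array([hull_area, hull_perimeter, solidity, eccentricity])
```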


2015 ◽  
Vol 42 (1) ◽  
pp. 138-143
Author(s):  
ByoungChul Ko ◽  
Mincheol Hwang ◽  
Jae-Yeal Nam

2014 ◽  
Vol 644-650 ◽  
pp. 4162-4166
Author(s):  
Dan Dan Guo ◽  
Xi’an Zhu

An effective human action recognition method based on skeletal information extracted by a Kinect depth sensor is proposed in this paper. The skeleton's 3D space coordinates and the angles between related joints are collected as action characteristics, based on a study of human skeletal structure, joint data and human actions. First, 3D information of the human skeleton is acquired by the Kinect depth sensor and the cosines of the angles at relevant joints are calculated. Then, skeletal information from a time window preceding the current state is stored in real time. Finally, the relative locations of the skeleton joints and the variation of the joint-angle cosines within that window are analyzed to recognize the human motion. This algorithm offers higher adaptability and practicability because it avoids the complicated sample training and recognition processes of traditional methods. Experimental results indicate that this method achieves a high recognition rate.
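
A minimal sketch of the joint-angle cosine used here, computed from three 3D skeleton node positions (the example coordinates are made up for illustration):

```python
import numpy as np

def joint_angle_cosine(parent, joint, child):
    """Cosine of the angle at `joint` formed by the parent and child skeleton nodes (3D coordinates)."""
    v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(np.dot(v1, v2) / denom) if denom > 0 else 0.0

# Example: cosine of the elbow angle from shoulder, elbow and wrist positions (made-up coordinates).
cos_elbow = joint_angle_cosine([0.20, 0.50, 2.0], [0.25, 0.30, 2.0], [0.30, 0.10, 1.9])
```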

