Boosted key-frame selection and correlated pyramidal motion-feature representation for human action recognition

2013 ◽  
Vol 46 (7) ◽  
pp. 1810-1818 ◽  
Author(s):  
Li Liu ◽  
Ling Shao ◽  
Peter Rockett


Author(s):
Jiajia Luo ◽  
Wei Wang ◽  
Hairong Qi

Multi-view human action recognition has gained a lot of attention in recent years for its superior performance compared to single-view recognition. In this paper, we propose a new framework for the real-time realization of human action recognition in distributed camera networks (DCNs). We first present a new feature descriptor (Mltp-hist) that is tolerant to illumination changes, robust in homogeneous regions, and computationally efficient. Taking advantage of the proposed Mltp-hist, the noninformative 3-D patches generated from the background can be removed automatically, which effectively highlights the foreground patches. Next, a new feature representation method based on sparse coding is presented to generate the histogram representation of local videos to be transmitted to the base station for classification. Due to the sparse representation of the extracted features, the approximation error is reduced. Finally, at the base station, a probability model is produced to fuse the information from the various views and a class label is assigned accordingly. Compared to existing algorithms, the proposed framework has three advantages while imposing fewer requirements on memory and bandwidth consumption: 1) no preprocessing is required; 2) communication among cameras is unnecessary; and 3) positions and orientations of cameras do not need to be fixed. We further evaluate the proposed framework on the most popular multi-view action dataset, IXMAS. Experimental results indicate that the proposed framework consistently achieves state-of-the-art results when various numbers of views are tested. In addition, our approach is tolerant to various combinations of views and benefits from introducing more views at the testing stage. In particular, our results remain satisfactory even when large misalignment exists between the training and testing samples.
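As a rough illustration of the final fusion step at the base station, the sketch below combines per-view class-probability vectors with a simple product rule and picks the most likely class. The function name, the combination rule, and the toy numbers are assumptions for illustration; the paper's probability model is not reproduced here.

```python
import numpy as np

def fuse_view_probabilities(view_probs, rule="product"):
    """Fuse per-class probability vectors from multiple camera views.

    view_probs : list of 1-D arrays, one per view, each summing to 1.
    rule       : 'product' or 'sum' combination rule (an assumption;
                 the paper's probability model may differ).
    """
    P = np.vstack(view_probs)            # shape: (n_views, n_classes)
    fused = np.prod(P, axis=0) if rule == "product" else np.sum(P, axis=0)
    fused /= fused.sum()                 # renormalise to a distribution
    return fused

# Example: three views voting over four action classes
views = [np.array([0.6, 0.2, 0.1, 0.1]),
         np.array([0.5, 0.3, 0.1, 0.1]),
         np.array([0.2, 0.5, 0.2, 0.1])]
fused = fuse_view_probabilities(views)
print(fused, "predicted class:", int(np.argmax(fused)))
```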


2014 ◽  
Vol 11 (01) ◽  
pp. 1450005
Author(s):  
Yangyang Wang ◽  
Yibo Li ◽  
Xiaofei Ji

Visual-based human action recognition is currently one of the most active research topics in computer vision. The feature representation has a direct and crucial impact on recognition performance. Bag-of-words feature representations are popular in current research, but the spatial and temporal relationships among the features are usually discarded. To address this issue, a novel feature representation based on normalized interest points, called the super-interest point, is proposed and used to recognize human actions. The novelty of the proposed feature is that the spatial-temporal correlation between the interest points and the human body can be added to the representation directly, without considering the scale and location variance of the points, by introducing normalized-point clustering. The approach involves three tasks. First, to handle the diversity of human location and scale, interest points are normalized based on the normalization of the human region. Second, to capture the spatial-temporal correlation among the interest points, normalized points with similar spatial and temporal distances are grouped into a super-interest point using a three-dimensional clustering algorithm. Finally, a new feature representation is obtained by describing the appearance characteristics of the super-interest points and the location relationships among them. The proposed representation establishes the relationship between local features and the human figure. Experiments on the Weizmann, KTH, and UCF Sports datasets demonstrate that the proposed feature is effective for human action recognition.
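The sketch below illustrates the normalization-and-clustering idea in a simplified form: interest points given as (x, y, t) triples are rescaled by the human bounding box and clip length, then grouped with k-means. The helper names, the use of scikit-learn's KMeans, and the cluster count are assumptions; the paper's three-dimensional clustering algorithm may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalize_points(points, bbox, n_frames):
    """Normalize (x, y, t) interest points by the human bounding box and clip length.

    points : (N, 3) array of (x, y, t)
    bbox   : (x_min, y_min, width, height) of the detected human region
    """
    x0, y0, w, h = bbox
    norm = points.astype(float)
    norm[:, 0] = (norm[:, 0] - x0) / w      # x relative to body width
    norm[:, 1] = (norm[:, 1] - y0) / h      # y relative to body height
    norm[:, 2] = norm[:, 2] / n_frames      # t relative to clip length
    return norm

def build_super_interest_points(norm_points, n_clusters=10):
    """Group normalized points that are close in space and time."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(norm_points)
    return km.labels_, km.cluster_centers_

# Toy example: 100 random spatio-temporal points inside an 80x200 body box
rng = np.random.default_rng(0)
pts = np.column_stack([rng.integers(20, 100, 100),
                       rng.integers(50, 250, 100),
                       rng.integers(0, 30, 100)])
labels, centers = build_super_interest_points(
    normalize_points(pts, bbox=(20, 50, 80, 200), n_frames=30))
print(centers.shape)   # (10, 3): one spatio-temporal center per super-interest point
```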


2021 ◽  
Author(s):  
Hai-Hong Phan ◽  
Trung Tin Nguyen ◽  
Ngo Huu Phuc ◽  
Nguyen Huu Nhan ◽  
Do Minh Hieu ◽  
...  

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-20 ◽  
Author(s):  
Yang Liu ◽  
Zhaoyang Lu ◽  
Jing Li ◽  
Chao Yao ◽  
Yanzi Deng

Recently, infrared human action recognition has attracted increasing attention because it offers several advantages over visible light, namely robustness to illumination changes and shadows. However, infrared action data remain limited, which degrades the performance of infrared action recognition. Motivated by the idea of transfer learning, an infrared human action recognition framework using auxiliary data from visible light is proposed to address the problem of limited infrared action data. In the proposed framework, we first construct a novel Cross-Dataset Feature Alignment and Generalization (CDFAG) framework to map the infrared data and visible light data into a common feature space, where Kernel Manifold Alignment (KEMA) and a dual aligned-to-generalized encoders (AGE) model are employed to represent the features. Then, a support vector machine (SVM) is trained using both the infrared and visible light data and is able to classify the features derived from the infrared data. The proposed method is evaluated on InfAR, a publicly available infrared human action dataset. To build up the auxiliary data, we set up a novel visible light action dataset, XD145. Experimental results show that the proposed method achieves state-of-the-art performance compared with several transfer learning and domain adaptation methods.
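The sketch below shows only the final classification stage, under the assumption that features from both domains have already been mapped into a common space; KEMA and the AGE model are not implemented here, and the random arrays stand in for real aligned features purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical pre-aligned features: in the actual framework the common
# space would be obtained via KEMA and the aligned-to-generalized encoders.
rng = np.random.default_rng(0)
X_visible, y_visible = rng.normal(size=(200, 64)), rng.integers(0, 5, 200)
X_infrared, y_infrared = rng.normal(size=(50, 64)), rng.integers(0, 5, 50)

# Train a single SVM on both domains in the shared feature space ...
clf = SVC(kernel="rbf", C=1.0).fit(
    np.vstack([X_visible, X_infrared]),
    np.concatenate([y_visible, y_infrared]))

# ... and classify held-out infrared samples mapped into the same space.
X_test = rng.normal(size=(10, 64))
print(clf.predict(X_test))
```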


2011 ◽  
Vol 267 ◽  
pp. 1065-1070 ◽  
Author(s):  
He Jin Yuan ◽  
Cui Ru Wang ◽  
Jun Liu

A novel semi-supervised algorithm based on co-training is proposed in this paper. In the method, the motion energy image and the motion history image are used as different feature representations of human action; the co-training-based semi-supervised learning algorithm is then utilized to predict the categories of unlabeled training examples. The average motion energy and motion history images are calculated as the recognition model for each action category. During recognition, the observed action is first classified by its correlation coefficients with the previously established templates; its final category is then determined according to the consistency between the classification results of the motion energy and motion history images. Experiments on the Weizmann dataset demonstrate that our method is effective for human action recognition.
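A simplified reading of the recognition stage is sketched below: the observed motion energy and motion history images are matched against class templates by correlation coefficient, and a label is returned only when the two cues agree. The function names, the toy templates, and the rejection behaviour on disagreement are assumptions, not the paper's exact rule.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two flattened templates."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def classify(observed, templates):
    """Return the label of the template most correlated with `observed`."""
    scores = {label: correlation(observed, tpl) for label, tpl in templates.items()}
    return max(scores, key=scores.get)

def recognize(mei, mhi, mei_templates, mhi_templates):
    """Accept a prediction only when both cues agree (a simplified
    consistency rule; the paper's rule may differ)."""
    label_e = classify(mei, mei_templates)
    label_h = classify(mhi, mhi_templates)
    return label_e if label_e == label_h else None

# Toy example with two action classes and random 32x32 templates
rng = np.random.default_rng(1)
mei_t = {"walk": rng.random((32, 32)), "wave": rng.random((32, 32))}
mhi_t = {"walk": rng.random((32, 32)), "wave": rng.random((32, 32))}
print(recognize(mei_t["walk"], mhi_t["walk"], mei_t, mhi_t))
```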


Author(s):  
Bo Lin ◽  
Bin Fang

Automatic human action recognition is a core functionality of systems for video surveillance and human-object interaction. Within the recognition system, feature description and encoding are two crucial steps, and both must perform reliably for the overall framework to be powerful. In this paper, we propose a new human action feature descriptor called spatio-temporal histograms of gradients (SPHOG). SPHOG is based on spatial and temporal derivative signals, which capture the gradient changes between consecutive frames. Compared to the traditional histograms of optical flow descriptor, the proposed SPHOG requires fewer computational resources. To incorporate the distribution information of local descriptors into the Vector of Locally Aggregated Descriptors (VLAD), a popular encoding approach for the Bag-of-Feature representation, a Gaussian kernel is employed to compute weighted distance histograms of the local descriptors. This makes the encoding scheme for the bag-of-feature (BOF) representation more effective. We validated the proposed algorithm for human action recognition on three publicly available datasets: KTH, UCF Sports, and HMDB51. The experimental results indicate that the proposed descriptor and encoding method improve both the efficiency and the accuracy of human action recognition.
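The sketch below illustrates one plausible form of a Gaussian-weighted VLAD encoding: each local descriptor is hard-assigned to its nearest visual word, and its residual is weighted by a Gaussian of the descriptor-to-center distance before aggregation. The bandwidth value, the normalisation steps, and the function name are assumptions for illustration and do not reproduce the paper's exact formulation.

```python
import numpy as np

def gaussian_weighted_vlad(descriptors, centers, sigma=1.0):
    """VLAD encoding with Gaussian-kernel weights on the residuals.

    descriptors : (N, D) local descriptors (e.g. SPHOG features)
    centers     : (K, D) visual vocabulary (e.g. from k-means)
    sigma       : Gaussian bandwidth (an assumption; not fixed by the paper)
    """
    K, D = centers.shape
    vlad = np.zeros((K, D))
    # hard-assign each descriptor to its nearest center
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)
    for k in range(K):
        members = descriptors[assign == k]
        if len(members) == 0:
            continue
        residuals = members - centers[k]
        # weight each residual by a Gaussian of its distance to the center
        w = np.exp(-np.linalg.norm(residuals, axis=1) ** 2 / (2 * sigma ** 2))
        vlad[k] = (w[:, None] * residuals).sum(axis=0)
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))          # power normalisation
    return (vlad / (np.linalg.norm(vlad) + 1e-12)).ravel()

# Toy usage: 500 random 16-D descriptors and an 8-word vocabulary
rng = np.random.default_rng(2)
code = gaussian_weighted_vlad(rng.normal(size=(500, 16)), rng.normal(size=(8, 16)))
print(code.shape)   # (8 * 16,) = (128,)
```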


2013 ◽  
Vol 373-375 ◽  
pp. 1188-1191
Author(s):  
Ju Zhong ◽  
Hua Wen Liu ◽  
Chun Li Lin

Extraction methods for both a shape feature based on Fourier descriptors and a motion feature in the time domain were introduced. These features were fused into a hybrid feature with higher discriminative ability, and this combined representation was used for human action recognition. The experimental results show that the proposed hybrid feature achieves effective recognition performance on the Weizmann action database.
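As a simple illustration of the shape component, the sketch below computes translation- and scale-normalised Fourier descriptor magnitudes from a closed contour and concatenates them with a placeholder motion vector to form the hybrid feature. The coefficient count and the placeholder motion feature are assumptions; the paper's exact motion feature is not reproduced.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=10):
    """Shape feature: magnitudes of the low-frequency Fourier coefficients
    of the complex contour, normalised for translation and scale."""
    z = contour[:, 0] + 1j * contour[:, 1]     # boundary points as x + iy
    F = np.fft.fft(z - z.mean())               # subtracting the mean drops translation
    mags = np.abs(F[1:n_coeffs + 1])
    return mags / (mags[0] + 1e-12)            # divide by first harmonic for scale invariance

def hybrid_feature(contour, motion_feat):
    """Concatenate the Fourier shape feature with a temporal motion feature
    (a placeholder here; the paper's motion feature is not specified)."""
    return np.concatenate([fourier_descriptor(contour), motion_feat])

# Toy example: a circular contour plus a 5-D dummy motion vector
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.column_stack([np.cos(t), np.sin(t)])
print(hybrid_feature(contour, np.zeros(5)).shape)   # (15,)
```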

