2D Information Space Based Action Recognition

Video-based human action recognition has attracted increasing attention from researchers and is a prominent topic in computer vision and pattern recognition. In this paper we present a new approach to suppress the background and extract 2D data of the foreground human object in a video sequence. A combination of convex hull area, convex hull perimeter, solidity, and eccentricity is used to represent the feature vector. Experiments are conducted on the Weizmann video dataset to assess the performance of the system. The discriminative nature of the feature vectors ensures accurate action recognition.
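For illustration, the sketch below computes the four shape measures (convex hull area, convex hull perimeter, solidity, eccentricity) from a binary foreground mask with OpenCV. It is a minimal sketch, not the paper's implementation: taking the largest contour as the person and fitting an ellipse to obtain eccentricity are assumptions.

```python
import cv2
import numpy as np

def shape_features(binary_mask):
    """4D shape descriptor from a binary foreground mask:
    [hull area, hull perimeter, solidity, eccentricity]."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)   # assume largest blob is the person
    hull = cv2.convexHull(contour)
    hull_area = cv2.contourArea(hull)
    hull_perimeter = cv2.arcLength(hull, True)
    solidity = cv2.contourArea(contour) / hull_area if hull_area > 0 else 0.0
    # Eccentricity from an ellipse fitted to the contour (needs >= 5 points)
    if len(contour) >= 5:
        (_, _), (axis1, axis2), _ = cv2.fitEllipse(contour)
        major, minor = max(axis1, axis2), min(axis1, axis2)
        eccentricity = np.sqrt(1.0 - (minor / major) ** 2) if major > 0 else 0.0
    else:
        eccentricity = 0.0
    return np.array([hull_area, hull_perimeter, solidity, eccentricity])
```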

Author(s): Abdelouahid Ben Tamou, Lahoucine Ballihi, Driss Aboutajdine

In this paper, we present a new approach for human action recognition using 3D skeleton joints recovered from RGB-D cameras. We propose a descriptor based on differences of skeleton joints. This descriptor combines two characteristics, static posture and overall dynamics, which encode the spatial and temporal aspects. We then apply the mean function to these characteristics to form the feature vector, which is used as input to a Random Forest classifier for action classification. The experimental results on two datasets, MSR Action 3D and MSR Daily Activity 3D, demonstrate that our approach is efficient and gives promising results compared with state-of-the-art approaches.
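As an illustration of this kind of joint-difference descriptor, the sketch below forms within-frame joint differences (static posture) and displacements relative to the first frame (overall dynamics), averages both over time, and trains a scikit-learn Random Forest. The exact pairing of joints and the choice of reference frame are assumptions; the random arrays only stand in for real skeleton clips.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def joint_difference_descriptor(sequence):
    """sequence: (T, J, 3) array of J skeleton joints over T frames.
    Within-frame joint differences capture static posture; displacements
    relative to the first frame capture overall dynamics. Both are
    averaged over time to give a fixed-length feature vector."""
    static = sequence[:, :, None, :] - sequence[:, None, :, :]   # (T, J, J, 3)
    dynamic = sequence - sequence[0:1]                           # (T, J, 3)
    return np.concatenate([static.mean(axis=0).ravel(),
                           dynamic.mean(axis=0).ravel()])

# Hypothetical usage: random arrays stand in for real skeleton clips.
X = np.stack([joint_difference_descriptor(np.random.rand(40, 20, 3))
              for _ in range(10)])
y = np.random.randint(0, 3, size=10)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```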


Author(s): L. Nirmala Devi, A. Nageswar Rao

Human action recognition (HAR) is one of the most significant research topics and has attracted the attention of many researchers. Automatic HAR systems are applied in several fields such as visual surveillance, data retrieval, and healthcare. Based on this inspiration, in this chapter, the authors propose a new HAR model that takes an image as input, analyses it, and identifies the action present in it. In the analysis phase, they implement two feature extraction methods based on a rotation-invariant Gabor filter and an edge-adaptive wavelet filter. For every action image, a new vector called the composite feature vector is formed and then subjected to dimensionality reduction through principal component analysis (PCA). Finally, the authors employ the most popular supervised machine learning algorithm, the support vector machine (SVM), for classification. Simulations are performed on two standard datasets, KTH and Weizmann, and the performance is measured with an accuracy metric.
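A rough sketch of such a pipeline is shown below: Gabor responses at a few orientations and one-level wavelet sub-band statistics form a composite feature vector, which is reduced with PCA and classified with an SVM. The chosen frequency, orientations, wavelet ('haar'), statistics, and number of PCA components are assumptions, not the chapter's settings.

```python
import numpy as np
import pywt
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def composite_feature(gray_image):
    """Composite feature: Gabor responses at four orientations plus
    one-level 2D wavelet sub-band statistics."""
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(gray_image, frequency=0.2, theta=theta)
        feats.extend([real.mean(), real.var()])
    cA, (cH, cV, cD) = pywt.dwt2(gray_image, 'haar')
    for band in (cA, cH, cV, cD):
        feats.extend([band.mean(), band.var()])
    return np.array(feats)

# PCA for dimensionality reduction followed by an SVM classifier,
# e.g. model.fit(features, labels) on the composite feature vectors.
model = make_pipeline(PCA(n_components=8), SVC(kernel='rbf'))
```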


2015, Vol. 2015, pp. 1-11
Author(s): Shaoping Zhu, Limin Xia

A novel method based on a hybrid feature is proposed for human action recognition in video image sequences; it includes two stages, feature extraction and action recognition. Firstly, we use an adaptive background subtraction algorithm to extract a global silhouette feature and an optical flow model to extract a local optical flow feature. We then combine the global silhouette feature vector and the local optical flow feature vector to form a hybrid feature vector. Secondly, in order to improve recognition accuracy, we use an optimized Multiple Instance Learning algorithm to recognize human actions, in which an Iterative Querying Heuristic (IQH) optimization algorithm is used to train the Multiple Instance Learning model. We demonstrate that our hybrid feature-based action representation can effectively classify novel actions on two different datasets. Experiments show that our results are comparable to, and in several cases significantly better than, those of two state-of-the-art approaches on these datasets, meeting requirements such as stability, reliability, high precision, and robustness to interference.
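The per-frame feature extraction part of such a hybrid descriptor might look like the sketch below, where MOG2 background subtraction stands in for the adaptive silhouette extraction and Farneback optical flow for the local motion feature; the histogram binning and the concatenation layout are assumptions, not the paper's exact design.

```python
import cv2
import numpy as np

def hybrid_features(frames):
    """Per-frame hybrid descriptor: a global silhouette histogram from
    adaptive background subtraction concatenated with a local
    orientation histogram of dense optical flow."""
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_gray, vectors = None, []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        silhouette = bg.apply(frame)                       # global silhouette feature
        sil_hist = cv2.calcHist([silhouette], [0], None, [16], [0, 256]).ravel()
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            flow_hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi),
                                        weights=mag)       # local optical flow feature
            vectors.append(np.concatenate([sil_hist, flow_hist]))
        prev_gray = gray
    return np.array(vectors)
```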


2020, Vol. 10 (12), pp. 4412
Author(s): Ammar Mohsin Butt, Muhammad Haroon Yousaf, Fiza Murtaza, Saima Nazir, Serestina Viriri, et al.

Human action recognition has gathered significant attention in recent years due to its high demand in various application domains. In this work, we propose a novel codebook generation and hybrid encoding scheme for the classification of action videos. The proposed scheme develops a discriminative codebook and a hybrid feature vector by encoding the features extracted from convolutional neural networks (CNNs). We explore different CNN architectures for extracting spatio-temporal features. We employ an agglomerative clustering approach for codebook generation, which combines the advantages of global and class-specific codebooks. We propose a Residual Vector of Locally Aggregated Descriptors (R-VLAD) and fuse it with locality-based coding to form a hybrid feature vector, which provides a compact representation along with high-order statistics. We evaluated our work on two publicly available standard benchmark datasets, HMDB-51 and UCF-101. The proposed method achieves 72.6% and 96.2% accuracy on HMDB-51 and UCF-101, respectively. We conclude that the proposed scheme is able to boost recognition accuracy for human action recognition.
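To make the encoding step concrete, the sketch below builds a codebook with scikit-learn's agglomerative clustering and applies plain VLAD aggregation with power and L2 normalisation. It does not reproduce the paper's R-VLAD residual scheme or the locality-based fusion; the descriptor dimensionality and cluster count are arbitrary stand-ins.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def vlad_encode(descriptors, centers):
    """Plain VLAD: sum of residuals to the nearest codeword per cluster,
    followed by power and L2 normalisation."""
    k, d = centers.shape
    assign = np.argmin(((descriptors[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    vlad = np.zeros((k, d))
    for i in range(k):
        if np.any(assign == i):
            vlad[i] = (descriptors[assign == i] - centers[i]).sum(axis=0)
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    return (vlad / (np.linalg.norm(vlad) + 1e-12)).ravel()

# Codebook from agglomerative clustering over a pool of (stand-in) CNN descriptors.
pool = np.random.rand(500, 64)
labels = AgglomerativeClustering(n_clusters=32).fit_predict(pool)
codebook = np.stack([pool[labels == c].mean(axis=0) for c in range(32)])
video_code = vlad_encode(np.random.rand(100, 64), codebook)
```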


Author(s): Anantha Prabha P, Srimathi R, Srividhya R, Sowmiya T G

Human action recognition has been an active research topic since the early 1980s due to its promising applications in many domains such as video indexing, surveillance, gesture recognition, video retrieval, and human-computer interaction, where actions captured as videos or sensor data are recognized. The extraction of relevant features from the video streams is the most challenging part. With the emergence of advanced artificial intelligence techniques, deep learning methods are adopted to achieve this goal. The proposed system presents a Recurrent Neural Network (RNN) methodology for human action recognition using the star skeleton as a representative descriptor of human posture. The star skeleton is obtained by joining the gross contour extremes of a body to its centroid. To use the star skeleton as a feature for action recognition, the feature is defined as a five-dimensional vector in star fashion, because the head and four limbs are usually the local extremes of the human body. In our project, we assume an action is composed of a series of star skeletons over time. Therefore, time-sequential images expressing a human action are transformed into a feature vector sequence. The feature vector sequence is then transformed into a symbol sequence so that the RNN can model the action. An RNN is used because the extracted features are time dependent.
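A rough sketch of the star-skeleton feature is given below: the distance from the silhouette centroid to the contour is computed along the boundary, and its five largest local maxima (nominally head and four limbs) form the per-frame vector. The smoothing window, the peak-picking rule, and returning distances rather than (x, y) extremes are simplifying assumptions, not the authors' exact parameterisation; a real implementation would also enforce spacing between peaks.

```python
import cv2
import numpy as np

def star_skeleton(binary_mask, n_points=5):
    """Distances from the silhouette centroid to the n_points largest
    local maxima of the centroid-to-contour distance signal (nominally
    head and four limbs). Returns fewer values if fewer peaks exist."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    smooth = np.convolve(dist, np.ones(11) / 11.0, mode='same')  # reduce contour noise
    n = len(smooth)
    peaks = [i for i in range(n)
             if smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[(i + 1) % n]]
    peaks = sorted(peaks, key=lambda i: smooth[i], reverse=True)[:n_points]
    return smooth[peaks]  # five-dimensional star feature for this frame
```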


Author(s): Xueping Liu, Yibo Li, Qingjun Wang

Human action recognition based on depth video sequences is an important research direction in the field of computer vision. The present study proposes a classification framework based on a hierarchical multi-view model for depth-video-based action recognition. Considering the distinguishing structure of the 3D human action space, we project the 3D human action image onto the three coordinate planes, so that the 3D depth image is converted into three 2D images, which are then fed to three subnets, respectively. As the number of layers increases, the representations of the subnets are hierarchically fused to form the inputs of the next layers. The final representations of the depth video sequence are fed into a single-layer perceptron, and the final result is decided by accumulating the output of the perceptron over time. We compare our method with others on two publicly available datasets, and we also verify it on a human action database acquired with our Kinect system. Our experimental results demonstrate that our model has high computational efficiency and matches the performance of state-of-the-art methods.
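The projection step can be illustrated as below: each depth frame is treated as a set of points and mapped onto the front (xy), side (yz), and top (xz) planes, giving three 2D images per frame to feed to the three subnets. Binning depth into a fixed number of slices and using occupancy maps for the side and top views are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def project_depth(depth, d_bins=64):
    """Project one depth frame onto the three coordinate planes:
    front (xy, keeps depth values), side (yz) and top (xz) occupancy maps."""
    h, w = depth.shape
    ys, xs = np.nonzero(depth)                     # foreground pixels only
    if xs.size == 0:
        return np.zeros((h, w)), np.zeros((h, d_bins)), np.zeros((d_bins, w))
    zs = depth[ys, xs].astype(float)
    z_idx = np.clip((zs / zs.max() * (d_bins - 1)).astype(int), 0, d_bins - 1)
    front = np.zeros((h, w))
    front[ys, xs] = zs                             # xy plane keeps depth values
    side = np.zeros((h, d_bins))
    side[ys, z_idx] = 1                            # yz plane occupancy
    top = np.zeros((d_bins, w))
    top[z_idx, xs] = 1                             # xz plane occupancy
    return front, side, top
```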

