2014 ◽  
Vol 11 (01) ◽  
pp. 1450005
Author(s):  
Yangyang Wang ◽  
Yibo Li ◽  
Xiaofei Ji

Visual-based human action recognition is currently one of the most active research topics in computer vision. The feature representation has a direct and crucial impact on recognition performance. Bag-of-words feature representations are popular in current research, but the spatial and temporal relationships among the features are usually discarded. To address this issue, a novel feature representation based on normalized interest points, called the super-interest point, is proposed and used to recognize human actions. The novelty of the proposed feature is that the spatial-temporal correlation between the interest points and the human body can be added directly to the representation, without regard to the scale and location variance of the points, by introducing normalized point clustering. The novelty concerns three tasks. First, to handle the diversity of human location and scale, interest points are normalized with respect to the normalized human region. Second, to capture the spatial-temporal correlation among the interest points, normalized points with similar spatial and temporal distances are grouped into a super-interest point using a three-dimensional clustering algorithm. Finally, a new feature representation is obtained by describing the appearance characteristics of the super-interest points and the location relationships among them. The proposed representation establishes the relationship between local features and the human figure. Experiments on the Weizmann, KTH, and UCF Sports datasets demonstrate that the proposed feature is effective for human action recognition.
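The following is a minimal sketch of the normalization and three-dimensional clustering steps described above, not the authors' implementation. It assumes interest points are given as (x, y, t) triples and that a human bounding box is available per frame; the helper names, the use of DBSCAN, and all parameter values are illustrative assumptions.

```python
# Sketch of normalized interest points grouped into "super-interest points".
# Assumes per-frame human bounding boxes; helper names and parameters are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN

def normalize_points(points, boxes, n_frames):
    """Map each (x, y, t) interest point into the unit square of its frame's
    human bounding box and scale time, removing location/scale variation."""
    normalized = []
    for x, y, t in points:
        bx, by, bw, bh = boxes[t]                      # person bounding box in frame t
        normalized.append(((x - bx) / bw, (y - by) / bh, t / n_frames))
    return np.array(normalized)

def build_super_interest_points(norm_points, eps=0.1, min_samples=3):
    """Cluster normalized points that are close in (x, y, t); each cluster
    is summarized by its centroid as one super-interest point."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(norm_points)
    clusters = [norm_points[labels == k] for k in set(labels) if k != -1]
    return [c.mean(axis=0) for c in clusters]
```

Each resulting centroid (plus an appearance descriptor of its member points, omitted here) would then form the super-interest point feature used for recognition.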


2014 ◽  
Vol 36 ◽  
pp. 221-227 ◽  
Author(s):  
Antonio W. Vieira ◽  
Erickson R. Nascimento ◽  
Gabriel L. Oliveira ◽  
Zicheng Liu ◽  
Mario F.M. Campos

Author(s):  
Maxime Devanne ◽  
Hazem Wannous ◽  
Stefano Berretti ◽  
Pietro Pala ◽  
Mohamed Daoudi ◽  
...  

2014 ◽  
Vol 577 ◽  
pp. 659-663
Author(s):  
Jing Hu ◽  
Xiang Qi ◽  
Jian Feng Chen

Human action recognition is a high-level visual analysis task in computer vision that involves image processing, artificial intelligence, pattern recognition, and related fields, and it has become one of the hottest research topics in recent years. In this paper, on the basis of a comparative analysis and study of current human action recognition methods, we propose a novel fight-behavior detection method based on spatial-temporal interest points. Since most of the human action information in a video is conveyed by its space-time interest points, we combine spatial-temporal features with motion energy images to describe the video, and local spatial-temporal features are used to build a fight-behavior model with a bag-of-words representation. Experimental results show that this method achieves high accuracy and has practical value.
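Below is a minimal sketch of the bag-of-words step over local spatio-temporal descriptors, not the paper's exact pipeline. It assumes each video has already been reduced to a set of local descriptors (e.g., computed around space-time interest points); the vocabulary size and function names are illustrative assumptions.

```python
# Bag-of-words over local spatio-temporal descriptors (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(train_descriptors, n_words=200, seed=0):
    """Cluster descriptors pooled from training videos into a visual vocabulary."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(train_descriptors)

def video_histogram(descriptors, vocabulary):
    """Quantize one video's descriptors against the vocabulary and return a
    normalized word-frequency histogram as the video-level feature."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```

A standard classifier (for example, a linear SVM) could then be trained on these histograms to separate fight clips from non-fight clips.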

