Improving Bag-of-Visual-Words model using visual n-grams for human action classification

2018 ◽  
Vol 92 ◽  
pp. 182-191 ◽  
Author(s):  
Ruber Hernández-García ◽  
Julián Ramos-Cózar ◽  
Nicolás Guil ◽  
Edel García-Reyes ◽  
Hichem Sahli


2014 ◽  
Vol 599-601 ◽  
pp. 1571-1574
Author(s):  
Jia Ding ◽  
Yang Yi ◽  
Ze Min Qiu ◽  
Jun Shi Liu

Human action recognition in videos plays an important role in computer vision and image understanding. This paper proposes a novel method combining a multi-channel bag of visual words with multiple kernel learning. Videos are described by a multi-channel bag of visual words, and a multiple kernel learning classifier performs the action classification; each kernel function of the classifier corresponds to one video channel, so that noise from the other channels does not interfere. The proposed approach improves the ability to distinguish easily confused actions. Experiments on the KTH dataset show that the presented method achieves a remarkable average recognition rate, comparable with state-of-the-art methods.
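The one-kernel-per-channel idea maps directly onto a precomputed-kernel SVM. Below is a minimal Python sketch with scikit-learn, assuming each video is already encoded as one BoVW histogram per channel; the three channels, the chi-square kernel choice, and the uniform kernel weights (standing in for weights a full multiple kernel learning solver would optimize) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import chi2_kernel

def combined_kernel(channels_a, channels_b, weights):
    """Weighted sum of per-channel chi-square kernels.

    channels_a, channels_b : lists of (n_videos, n_words) BoVW histogram
    arrays, one array per channel; weights : one weight per channel.
    """
    K = np.zeros((channels_a[0].shape[0], channels_b[0].shape[0]))
    for Xa, Xb, w in zip(channels_a, channels_b, weights):
        K += w * chi2_kernel(Xa, Xb)  # one kernel per video channel
    return K

# Hypothetical data: 3 channels, 40 training / 10 test videos,
# 500 visual words per channel, 6 action classes (as in KTH).
rng = np.random.default_rng(0)
train = [rng.random((40, 500)) for _ in range(3)]
test = [rng.random((10, 500)) for _ in range(3)]
y_train = rng.integers(0, 6, size=40)

# Uniform weights stand in for the weights an MKL solver would learn.
weights = np.full(3, 1 / 3)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(train, train, weights), y_train)
pred = clf.predict(combined_kernel(test, train, weights))
```

Summing per-channel kernels, rather than concatenating the histograms into one feature vector, keeps a noisy channel from contaminating the similarities computed on the other channels, which matches the abstract's stated motivation for assigning one kernel per channel.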


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2790 ◽  
Author(s):  
Saima Nazir ◽  
Muhammad Haroon Yousaf ◽  
Jean-Christophe Nebel ◽  
Sergio A. Velastin

Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, attracting many researchers. Although significant results have been achieved in simple scenarios, HAR remains a challenging task due to issues of view independence, occlusion and inter-class variation observed in realistic scenarios. Previous research efforts have widely used the classical bag of visual words approach and its variations. In this paper, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition that retains the strengths of the classical bag of visual words approach. Expressions are formed based on the density of the spatio-temporal cube around a visual word. To handle inter-class variation, we use class-specific visual word representations for visual expression generation. In contrast to the Bag of Expressions (BoE) model, which constructs neighborhoods with a fixed number of neighbors and can therefore include non-relevant information that makes a visual expression less discriminative under occlusion and changing viewpoints, the formation of visual expressions here is based on the density of spatio-temporal cubes built around each visual word. This makes the model more robust to the occlusion and viewpoint-change challenges present in realistic scenarios. Furthermore, we train a multi-class Support Vector Machine (SVM) to classify bags of expressions into action classes. Comprehensive experiments on four publicly available datasets (KTH, UCF Sports, UCF11 and UCF50) show that the proposed model outperforms existing state-of-the-art human action recognition methods, reaching accuracies of 99.21%, 98.60%, 96.94% and 94.10%, respectively.
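To make the density-based contrast with fixed-k neighborhoods concrete, here is a toy Python sketch: every interest point gathers the visual words of all points falling inside a fixed spatio-temporal cube around it (so the neighbor count varies with local density, unlike a fixed-k neighborhood), and the resulting co-occurrence histogram feeds a multi-class linear SVM. The cube radii, the word-pair histogram, and the random toy data are illustrative assumptions; the paper's actual expression construction is richer than this.

```python
import numpy as np
from sklearn.svm import LinearSVC

def expression_histogram(points, words, n_words, radius=(20, 20, 10)):
    """Bag-of-expressions style descriptor for one video (toy version).

    points : (n, 3) int array of (x, y, t) interest-point locations.
    words  : (n,)  visual-word label of each interest point.
    Each point collects the words of ALL points inside a fixed
    spatio-temporal cube around it, so dense regions contribute more
    neighbours than sparse ones (unlike a fixed-k neighbourhood).
    """
    hist = np.zeros((n_words, n_words))
    for i, (p, w) in enumerate(zip(points, words)):
        inside = (np.abs(points - p) <= radius).all(axis=1)
        inside[i] = False                     # exclude the centre point
        for v in words[inside]:
            hist[w, v] += 1                   # word co-occurrence count
    return hist.ravel() / max(hist.sum(), 1)  # L1-normalised descriptor

# Hypothetical toy data: 3 action classes, 10 videos each,
# 50 interest points per video, a 20-word visual vocabulary.
rng = np.random.default_rng(1)
X, y = [], []
for label in range(3):
    for _ in range(10):
        pts = rng.integers(0, 100, size=(50, 3))
        wds = rng.integers(0, 20, size=50)
        X.append(expression_histogram(pts, wds, n_words=20))
        y.append(label)

clf = LinearSVC().fit(np.array(X), y)  # multi-class SVM over expressions
```

Because the cube is fixed and the neighbor count is not, sparse regions (e.g. partially occluded actors) simply produce fewer co-occurrence counts instead of being forced to pull in distant, non-relevant points, which is the robustness argument the abstract makes against fixed-k neighborhoods.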


2010 ◽  
Vol 7 (2) ◽  
pp. 366-370 ◽  
Author(s):  
Sheng Xu ◽  
Tao Fang ◽  
Deren Li ◽  
Shiwei Wang
