Combining appearance and motion for human action classification in videos

Author(s):  
Paramveer S. Dhillon ◽  
Sebastian Nowozin ◽  
Christoph H. Lampert
2011 ◽  
Vol 38 (5) ◽  
pp. 5125-5128 ◽  
Author(s):  
Loris Nanni ◽  
Sheryl Brahnam ◽  
Alessandra Lumini

2013 ◽  
Vol 753-755 ◽  
pp. 3064-3067
Author(s):  
Ju Zhong ◽  
Ye Zi Sheng ◽  
Chun Li Lin ◽  
Nai Dong Cui

A double-direction two-dimensional Maximum Scatter Difference method (2D2MSD), based on Maximum Scatter Difference (MSD), was proposed; it overcame the small-sample-size problem of LDA and yielded a more compact data representation. On the Weizmann human action database, experimental results showed that the algorithm was fast, with an average recognition rate of 92% and a highest recognition rate of 100%.
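The MSD criterion underlying this method can be sketched briefly. Unlike LDA, which maximizes the ratio of between-class to within-class scatter and therefore requires inverting the within-class scatter matrix (singular in the small-sample-size regime), MSD maximizes their weighted difference, tr(Wᵀ(S_b − c·S_w)W), which needs no matrix inversion. The following is a minimal NumPy illustration of the basic (one-directional, vectorized) MSD criterion, not the authors' bidirectional 2D variant; the toy data, function name, and parameter `c` are illustrative assumptions:

```python
import numpy as np

def msd_projection(X, y, c=1.0, k=2):
    """Minimal Maximum Scatter Difference (MSD) projection sketch.

    Maximizes tr(W^T (S_b - c * S_w) W) over orthonormal W, which
    avoids inverting S_w and so sidesteps LDA's small-sample problem.
    X: (n_samples, n_features); y: class labels; k: output dimension.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))  # between-class scatter
    S_w = np.zeros((d, d))  # within-class scatter
    for cls in np.unique(y):
        Xc = X[y == cls]
        diff = (Xc.mean(axis=0) - mean)[:, None]
        S_b += len(Xc) * diff @ diff.T
        centered = Xc - Xc.mean(axis=0)
        S_w += centered.T @ centered
    # The optimal W is the top-k eigenvectors of the symmetric
    # scatter-difference matrix S_b - c * S_w.
    vals, vecs = np.linalg.eigh(S_b - c * S_w)
    W = vecs[:, np.argsort(vals)[::-1][:k]]
    return X @ W

# Toy usage: two well-separated Gaussian classes in 5 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
y = np.array([0] * 10 + [1] * 10)
Z = msd_projection(X, y, k=2)
print(Z.shape)  # (20, 2)
```

The bidirectional 2D variant described in the abstract applies an analogous criterion on both sides of the image matrix rather than on flattened vectors, which is what makes the reduced representation more compact.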


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5613
Author(s):  
Amirreza Farnoosh ◽  
Zhouping Wang ◽  
Shaotong Zhu ◽  
Sarah Ostadabbas

We introduce a generative Bayesian switching dynamical model for action recognition in 3D skeletal data. Our model encodes highly correlated skeletal data into a few sets of low-dimensional switching temporal processes and from there decodes to the motion data and their associated action labels. We parameterize these temporal processes with a switching deep autoregressive prior to accommodate both multimodal and higher-order nonlinear inter-dependencies. This results in a dynamical deep generative latent model that parses meaningful intrinsic states in skeletal dynamics and enables action recognition. These sequences of states provide visual and quantitative interpretations of the motion primitives that give rise to each action class, which have not been explored previously. In contrast to previous works, which often overlook temporal dynamics, our method explicitly models temporal transitions and is generative. Our experiments on two large-scale 3D skeletal datasets substantiate the superior performance of our model in comparison with state-of-the-art methods. Specifically, our method achieved 6.3% higher action classification accuracy (by incorporating a dynamical generative framework) and 3.5% lower predictive error (by employing a nonlinear second-order dynamical transition model) than the best-performing competitors.
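The generative structure described above can be sketched in simplified form: a discrete switching state follows a Markov chain and selects, at each time step, which second-order dynamics drive a low-dimensional latent process. The sketch below is not the authors' model (their transitions are deep and nonlinear, and their model is learned by inference rather than simulated); it is a minimal linear stand-in whose function name and parameters are illustrative assumptions, shown only to make the "switching + second-order" idea concrete:

```python
import numpy as np

def simulate_switching_ar2(T=100, d=3, K=2, seed=0):
    """Simulate a toy switching second-order autoregressive process.

    z_t: discrete switching state, a Markov chain over K regimes.
    x_t: low-dimensional latent, driven by the regime-specific
         second-order (lag-1 and lag-2) dynamics chosen by z_t.
    """
    rng = np.random.default_rng(seed)
    # Sticky switching-transition matrix: stay in the same regime
    # with probability 0.9, otherwise jump uniformly.
    P = np.full((K, K), 0.1 / (K - 1))
    np.fill_diagonal(P, 0.9)
    A1 = rng.normal(0, 0.3, (K, d, d))  # per-regime lag-1 coefficients
    A2 = rng.normal(0, 0.1, (K, d, d))  # per-regime lag-2 coefficients
    x = np.zeros((T, d))
    z = np.zeros(T, dtype=int)
    x[0], x[1] = rng.normal(size=d), rng.normal(size=d)
    for t in range(2, T):
        z[t] = rng.choice(K, p=P[z[t - 1]])
        # Linear second-order transition plus Gaussian noise; the
        # paper's deep autoregressive prior replaces this with a
        # nonlinear (neural) transition.
        x[t] = A1[z[t]] @ x[t - 1] + A2[z[t]] @ x[t - 2] \
            + rng.normal(0, 0.05, d)
    return x, z

x, z = simulate_switching_ar2()
print(x.shape, z.shape)  # (100, 3) (100,)
```

In the full model, the sequence of inferred states z provides the interpretable motion primitives: segments of a skeleton sequence assigned to the same regime share the same local dynamics, and the pattern of regimes characterizes the action class.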

