Spatio-Temporal Pyramid Graph Convolutions for Human Action Recognition and Postural Assessment

Author(s):  
Behnoosh Parsa ◽  
Athma Narayanan ◽  
Behzad Dariush

Recognition of human actions and associated interactions with objects and the environment is an important problem in computer vision due to its potential applications in a variety of domains. The most versatile methods can generalize to various environments and deal with cluttered backgrounds, occlusions, and viewpoint variations. Among them, methods based on graph convolutional networks that extract features from the skeleton have demonstrated promising performance. In this paper, we propose a novel Spatio-Temporal Pyramid Graph Convolutional Network (ST-PGN) for online action recognition for ergonomic risk assessment that enables the use of features from all levels of the skeleton feature hierarchy.
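
As a rough illustration of the kind of skeleton-based spatio-temporal graph convolution such methods build on, the sketch below shows a single block that propagates joint features over a fixed skeleton adjacency matrix and then convolves along the time axis. It is not the authors' ST-PGN (the pyramid of feature levels is omitted), and the joint count, channel widths, and temporal kernel size are illustrative assumptions.

# A minimal sketch of one spatio-temporal graph-convolution block over skeleton
# data, assuming a fixed joint adjacency matrix; this is a generic building
# block, not the ST-PGN architecture itself.
import torch
import torch.nn as nn

class SpatioTemporalGCNBlock(nn.Module):
    def __init__(self, in_channels, out_channels, adjacency, temporal_kernel=9):
        super().__init__()
        # Normalized adjacency A_hat = D^{-1/2} (A + I) D^{-1/2}, fixed per skeleton.
        a = adjacency + torch.eye(adjacency.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_hat", d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])
        # Spatial graph convolution: a 1x1 conv mixes channels, A_hat mixes joints.
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal convolution along the frame axis, applied per joint.
        pad = (temporal_kernel - 1) // 2
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(temporal_kernel, 1),
                                  padding=(pad, 0))
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = self.spatial(x)
        x = torch.einsum("nctv,vw->nctw", x, self.a_hat)  # propagate over the skeleton graph
        return self.relu(self.temporal(x))

# Toy usage: a 2-joint chain with 3-channel (x, y, z) input over 16 frames.
adj = torch.tensor([[0., 1.], [1., 0.]])
block = SpatioTemporalGCNBlock(3, 64, adj)
out = block(torch.randn(4, 3, 16, 2))   # -> (4, 64, 16, 2)

Stacking several such blocks, with pooling over groups of joints between them, is one way to expose features from multiple levels of the skeleton hierarchy.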


2020 ◽  
Vol 79 (17-18) ◽  
pp. 12349-12371
Author(s):  
Qingshan She ◽  
Gaoyuan Mu ◽  
Haitao Gan ◽  
Yingle Fan

2020 ◽  
Vol 10 (12) ◽  
pp. 4412
Author(s):  
Ammar Mohsin Butt ◽  
Muhammad Haroon Yousaf ◽  
Fiza Murtaza ◽  
Saima Nazir ◽  
Serestina Viriri ◽  
...  

Human action recognition has gathered significant attention in recent years due to its high demand in various application domains. In this work, we propose a novel codebook generation and hybrid encoding scheme for the classification of action videos. The proposed scheme develops a discriminative codebook and a hybrid feature vector by encoding features extracted from convolutional neural networks (CNNs). We explore different CNN architectures for extracting spatio-temporal features. We employ an agglomerative clustering approach for codebook generation, which combines the advantages of global and class-specific codebooks. We propose a Residual Vector of Locally Aggregated Descriptors (R-VLAD) and fuse it with locality-based coding to form a hybrid feature vector, providing a compact representation along with high-order statistics. We evaluated our work on two publicly available standard benchmark datasets, HMDB-51 and UCF-101. The proposed method achieves 72.6% and 96.2% accuracy on HMDB-51 and UCF-101, respectively. We conclude that the proposed scheme is able to boost recognition accuracy for human action recognition.
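
To make the codebook-plus-encoding idea concrete, the sketch below builds a codebook by agglomerative clustering of CNN descriptors and encodes a video with a plain VLAD aggregation of residuals. The paper's R-VLAD variant and the locality-based coding it is fused with are not reproduced here, and the descriptor dimensions and codebook size are illustrative assumptions.

# A minimal sketch: agglomerative-clustering codebook + standard VLAD encoding
# of CNN features; a simplified stand-in for the hybrid scheme, not the
# authors' implementation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def build_codebook(features, n_words=64):
    """Cluster training descriptors and return one codeword (cluster mean) per cluster."""
    labels = AgglomerativeClustering(n_clusters=n_words).fit_predict(features)
    return np.stack([features[labels == k].mean(axis=0) for k in range(n_words)])

def vlad_encode(features, codebook):
    """Aggregate the residuals of each descriptor to its nearest codeword."""
    # Assign each descriptor to its closest codeword.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    vlad = np.zeros_like(codebook)
    for k in range(codebook.shape[0]):
        members = features[assignments == k]
        if len(members):
            vlad[k] = (members - codebook[k]).sum(axis=0)
    # Power- and L2-normalize the flattened vector, as is standard for VLAD.
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    return vlad / (np.linalg.norm(vlad) + 1e-12)

# Toy usage: 500 training descriptors and one video's 50 descriptors, dimension 128.
rng = np.random.default_rng(0)
codebook = build_codebook(rng.normal(size=(500, 128)), n_words=16)
video_vector = vlad_encode(rng.normal(size=(50, 128)), codebook)  # -> (16 * 128,)

The resulting per-video vector can then be fed to any standard classifier; the hybrid scheme in the paper additionally concatenates a locality-based code to retain local assignment information.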

