Spatio-Temporal SlowFast Self-Attention Network for Action Recognition

Author(s):  
Myeongjun Kim ◽  
Taehun Kim ◽  
Daijin Kim
Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1005
Author(s):  
Pau Climent-Pérez ◽  
Francisco Florez-Revuelta

The potential benefits of recognising activities of daily living from video for active and assisted living have yet to be fully tapped. These technologies can be used for behaviour understanding and lifelogging, for caregivers and end users alike. The recent publication of realistic datasets for this purpose, such as the Toyota Smarthome dataset, calls for pushing forward efforts to improve action recognition. Building on the separable spatio-temporal attention network proposed in the literature, this paper introduces a view-invariant normalisation of skeletal pose data and full activity crops for RGB data, which improve the baseline results by 9.5% on the cross-subject experiments and outperform state-of-the-art techniques in this field that use the original, unmodified skeletal data provided with the dataset. Our code and data are available online.
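
The abstract does not detail the normalisation itself, but a common way to make skeletal pose data view-invariant is to translate the joints to a body-centred origin and rotate them into a canonical body frame. The NumPy sketch below illustrates that idea; the joint indices, array layout and choice of reference frame are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def view_invariant_normalise(skeleton, hip_centre=0, left_hip=12, right_hip=16, spine=1):
    """Minimal sketch of a view-invariant skeleton normalisation.

    skeleton: (T, J, 3) array of 3D joint positions over T frames.
    Joint indices are illustrative placeholders, not the dataset's layout.
    """
    # 1. Translate so the hip centre of the first frame sits at the origin.
    normalised = skeleton - skeleton[0, hip_centre]

    # 2. Build an orthonormal body frame from the first frame:
    #    x-axis: left hip -> right hip, y-axis: hip centre -> spine.
    x_axis = normalised[0, right_hip] - normalised[0, left_hip]
    x_axis /= np.linalg.norm(x_axis)
    y_axis = normalised[0, spine] - normalised[0, hip_centre]
    y_axis -= y_axis.dot(x_axis) * x_axis          # make orthogonal to x
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)

    # 3. Rotate every frame into this canonical body frame,
    #    removing the dependence on the camera viewpoint.
    rotation = np.stack([x_axis, y_axis, z_axis], axis=0)  # (3, 3)
    return normalised @ rotation.T
```

Applying a single rotation estimated from the first frame keeps the within-sequence motion intact while removing the viewpoint, which is the usual design choice for this kind of normalisation.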


Author(s):  
Yaqing Hou ◽  
Hua Yu ◽  
Dongsheng Zhou ◽  
Pengfei Wang ◽  
Hongwei Ge ◽  
...  

Abstract In the study of human action recognition, two-stream networks have made excellent progress recently. However, challenges remain in distinguishing similar human actions in videos. This paper proposes a novel local-aware spatio-temporal attention network with multi-stage feature fusion based on compact bilinear pooling for human action recognition. To elaborate, taking two-stream networks as our essential backbones, the spatial network first employs multiple spatial transformer networks in parallel to locate the discriminative regions related to human actions. Then, we fuse the local and global features to enhance the human action representation. Furthermore, the output of the spatial network and the temporal information are fused at a particular layer to learn pixel-wise correspondences. After that, we bring together the three outputs to generate global descriptors of human actions. To verify the efficacy of the proposed approach, comparison experiments are conducted with traditional hand-engineered IDT algorithms, classical machine learning methods (i.e., SVM) and state-of-the-art deep learning methods (i.e., spatio-temporal multiplier networks). According to the results, our approach obtains the best performance among existing works, with accuracies of 95.3% and 72.9% on UCF101 and HMDB51, respectively. The experimental results thus demonstrate the superiority of the proposed architecture for the task of human action recognition.
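
As a rough illustration of the compact bilinear pooling used for feature fusion, the sketch below approximates the bilinear (outer-product) interaction of two feature vectors with the Tensor Sketch trick: each vector is projected with a Count Sketch, and the two sketches are combined by an element-wise product in the Fourier domain. The feature shapes, output dimension and fixed seed are illustrative assumptions, and the paper's multi-stage fusion wiring is not reproduced here.

```python
import torch

def compact_bilinear_pooling(x, y, output_dim=8000, seed=0):
    """Sketch of compact bilinear pooling (Tensor Sketch) for fusing two
    feature vectors, e.g. local/global or spatial/temporal features.

    x: (batch, d_x), y: (batch, d_y). output_dim and seed are illustrative.
    """
    gen = torch.Generator().manual_seed(seed)

    def count_sketch(v, dim):
        d = v.shape[1]
        # Random hash bucket and random sign per input dimension, drawn from
        # the shared generator so x and y get independent projections.
        h = torch.randint(0, dim, (d,), generator=gen).to(v.device)
        s = (torch.randint(0, 2, (d,), generator=gen) * 2 - 1).float().to(v.device)
        sketch = torch.zeros(v.shape[0], dim, device=v.device)
        sketch.index_add_(1, h, v * s)
        return sketch

    # The Count Sketch of the outer product of x and y equals the circular
    # convolution of the two sketches, computed here in the Fourier domain.
    fx = torch.fft.rfft(count_sketch(x, output_dim))
    fy = torch.fft.rfft(count_sketch(y, output_dim))
    return torch.fft.irfft(fx * fy, n=output_dim)

# Example: fuse 512-d and 2048-d features from two streams.
fused = compact_bilinear_pooling(torch.randn(4, 512), torch.randn(4, 2048))
```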


2021 ◽  
Author(s):  
Yongkang Huang ◽  
Meiyu Liang

Abstract Inspired by the wide application of transformers in computer vision and their excellent ability in temporal feature learning, this paper proposes a novel and efficient spatio-temporal residual attention network for student action recognition in classroom teaching videos. The network first fuses 2D spatial convolution and 1D temporal convolution to learn spatio-temporal features, and then combines the powerful Reformer to better capture the deeper, visually significant spatio-temporal characteristics of student classroom actions. Based on this spatio-temporal residual attention network, a single-person action recognition model for classroom teaching video is proposed. Considering that there are often multiple students in a classroom video scene, the single-person model is combined with object detection and tracking to associate the temporal and spatial characteristics of the same student target, thereby realising multi-student action recognition in classroom video scenes. Experimental results on a classroom teaching video dataset and a public video dataset show that the proposed model achieves higher action recognition performance than existing state-of-the-art models and methods.
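
The fusion of 2D spatial and 1D temporal convolution described above is commonly realised as a factorised "(2+1)D" block. The PyTorch sketch below shows one such residual block under assumed channel counts and kernel sizes; it is not the paper's exact architecture, and the Reformer-based attention stage is omitted.

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """Minimal sketch of a factorised (2+1)D residual block: a 2D spatial
    convolution followed by a 1D temporal convolution, with a skip
    connection. Channel and kernel sizes are illustrative assumptions."""

    def __init__(self, channels):
        super().__init__()
        # 1x3x3 kernel over (T, H, W): spatial convolution only.
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # 3x1x1 kernel: temporal convolution only.
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.norm = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (N, C, T, H, W)
        out = self.relu(self.spatial(x))
        out = self.norm(self.temporal(out))
        return self.relu(out + x)              # residual connection

# Example: a batch of two 8-frame clips at 56x56 resolution with 64 channels.
clip = torch.randn(2, 64, 8, 56, 56)
features = SpatioTemporalBlock(64)(clip)       # same shape as the input
```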


2014 ◽  
Vol 281 ◽  
pp. 295-309 ◽  
Author(s):  
Xiantong Zhen ◽  
Ling Shao ◽  
Xuelong Li
