A compact optical flow-based motion representation for real-time action recognition in surveillance scenes

Author(s):  
Shiquan Wang ◽  
Kaiqi Huang ◽  
Tieniu Tan


2021 ◽  
Vol 11 (11) ◽  
pp. 4940 ◽  
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

Research on video data faces the difficulty of extracting not only spatial but also temporal features, and human action recognition (HAR) is a representative field that applies convolutional neural networks (CNNs) to video data. Action recognition performance has improved, but owing to model complexity, some limitations to real-time operation persist. Therefore, a lightweight CNN-based single-stream HAR model that can operate in real time is proposed. The proposed model extracts spatial feature maps by applying a CNN to the images that compose the video and uses the frame change rate of sequential images as temporal information. The spatial feature maps are weighted-averaged by the frame change rate, transformed into spatiotemporal features, and fed into a multilayer perceptron; the resulting model has lower complexity than other HAR models, so the method is well suited to a single embedded system connected to CCTV. Evaluations of recognition accuracy and data processing speed on the challenging UCF-101 action recognition benchmark showed higher accuracy than an HAR model based on long short-term memory while using only a small number of video frames, and the fast data processing speed confirmed that real-time operation is possible. In addition, the performance of the proposed weighted-mean-based HAR model was verified by testing it on a Jetson Nano, confirming its suitability for low-cost GPU-based embedded systems.
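The weighted-mean step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names are made up, and using the mean absolute pixel difference between consecutive frames as the "frame change rate" is an assumption.

```python
import numpy as np

def frame_change_weights(frames):
    """Per-transition weights from the change rate of sequential images:
    mean absolute pixel difference to the previous frame, normalized so
    the weights sum to 1 (assumed proxy for the paper's change rate)."""
    diffs = np.array([np.mean(np.abs(frames[i + 1] - frames[i]))
                      for i in range(len(frames) - 1)])
    total = diffs.sum()
    if total == 0:  # static clip: fall back to a uniform average
        return np.full(len(diffs), 1.0 / len(diffs))
    return diffs / total

def spatiotemporal_feature(feature_maps, weights):
    """Weighted average of per-frame CNN feature vectors over time,
    producing one spatiotemporal feature for the MLP classifier."""
    # feature_maps: (T, D) array, one feature vector per frame transition
    return np.average(feature_maps, axis=0, weights=weights)
```

With three frames, two transitions produce two weights; a larger frame change contributes more to the pooled feature, which is what lets a single vector carry temporal information into the low-complexity MLP.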


Author(s):  
Songrui Guo ◽  
Huawei Pan ◽  
Guanghua Tan ◽  
Lin Chen ◽  
Chunming Gao

Human action recognition is an important and significant research topic in numerous fields of science, for example, human–computer interaction, computer vision, and crime analysis. In recent years, relative geometry features have been widely applied to describe the relative relations of body motion. They bring many benefits to action recognition, such as clear description and abundant features. Their obvious disadvantage, however, is that the extracted features rely heavily on the local coordinate system, and it is difficult to find a bijection between relative geometry and skeleton motion. To overcome this problem, many previous methods use the relative rotation and translation between all skeleton pairs to increase robustness. In this paper, we present a new motion representation method that establishes a motion model based on relative geometry with the aid of the special orthogonal group SO(3). We also prove that this representation establishes a bijection between relative geometry and the motion of skeleton pairs. With this representation, the computation cost of action recognition is reduced from two-way relative motion (motion from A to B and from B to A) to one-way relative motion (motion from A to B or from B to A) between any skeleton pair; that is, the permutation problem P(n, 2) is simplified into the combinatorics problem C(n, 2). Finally, the experimental results on three motion datasets are all superior to those of existing skeleton-based action recognition methods.
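As a rough sketch of the relative-geometry idea (an illustration under assumptions, not the paper's exact model), the SO(3) rotation carrying the direction of one bone onto another can be computed with Rodrigues' formula. Because the reverse rotation is simply the transpose (the inverse in SO(3)), one direction per skeleton pair suffices, which is the C(n, 2) versus P(n, 2) point in the abstract.

```python
import numpy as np

def rotation_between(a, b):
    """SO(3) rotation matrix carrying unit direction a onto unit
    direction b via Rodrigues' formula; encodes the relative geometry
    of one ordered bone pair."""
    a = np.asarray(a, dtype=float); a = a / np.linalg.norm(a)
    b = np.asarray(b, dtype=float); b = b / np.linalg.norm(b)
    v = np.cross(a, b)                  # rotation axis (unnormalized)
    c = float(np.dot(a, b))             # cosine of the rotation angle
    if np.isclose(c, -1.0):             # antiparallel: rotate pi about
        axis = np.eye(3)[np.argmin(np.abs(a))]  # any axis orthogonal to a
        u = np.cross(a, axis); u = u / np.linalg.norm(u)
        Ku = np.array([[0.0, -u[2], u[1]],
                       [u[2], 0.0, -u[0]],
                       [-u[1], u[0], 0.0]])
        return np.eye(3) + 2.0 * Ku @ Ku
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # cross-product matrix of v
    return np.eye(3) + K + K @ K / (1.0 + c)

# The B-to-A rotation is R.T, so storing one-way relative motion per
# pair halves the computation: C(n, 2) pairs instead of P(n, 2).
```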


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Alexandros Andre Chaaraoui ◽  
Francisco Flórez-Revuelta

This paper presents a novel silhouette-based feature for vision-based human action recognition, which relies on the contour of the silhouette and a radial scheme. Its low dimensionality and ease of extraction make it highly suitable for real-time scenarios. This feature is used in a learning algorithm that, by fusing models from multiple camera streams, builds a bag of key poses, which serves as a dictionary of known poses and allows the training sequences to be converted into sequences of key poses. These are then used to perform action recognition by means of a sequence matching algorithm. Experimentation on three different datasets returns high and stable recognition rates; to the best of our knowledge, this paper presents the highest results so far on the MuHAVi-MAS dataset. Real-time suitability is confirmed, since the method easily runs above video frame rate. Therefore, the requirements imposed by applications such as ambient-assisted living services are successfully fulfilled.
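A radial contour feature of this kind can be sketched along the following lines. This is a hypothetical minimal version: the bin count, the mean-distance summary per angular bin, and the max-normalization for scale invariance are all assumptions, not the paper's exact definition.

```python
import numpy as np

def radial_silhouette_feature(contour, bins=16):
    """Low-dimensional radial descriptor of a silhouette contour: group
    contour points into angular bins around the centroid, summarize each
    bin by its mean radial distance, and normalize by the maximum."""
    contour = np.asarray(contour, dtype=float)   # (N, 2) contour points
    centroid = contour.mean(axis=0)
    rel = contour - centroid
    ang = np.arctan2(rel[:, 1], rel[:, 0])       # angle of each point
    dist = np.linalg.norm(rel, axis=1)           # radial distance
    idx = ((ang + np.pi) / (2.0 * np.pi) * bins).astype(int) % bins
    feat = np.zeros(bins)
    for b in range(bins):
        d = dist[idx == b]
        if d.size:
            feat[b] = d.mean()
    m = feat.max()
    return feat / m if m > 0 else feat
```

The descriptor length equals the number of bins regardless of how many contour points the silhouette has, which is what keeps extraction cheap enough to run above video frame rate.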


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 51708-51720 ◽  
Author(s):  
Hejun Wu ◽  
Zhenye Huang ◽  
Biao Hu ◽  
Zhi Yu ◽  
Xiying Li ◽  
...  
