Normal graph: Spatial temporal graph convolutional networks based prediction network for skeleton based video anomaly detection

2020 ◽  
Author(s):  
Weixin Luo ◽  
Wen Liu ◽  
Shenghua Gao

2021 ◽
pp. 1-13
Author(s):  
Jing Bai ◽  
Wentao Yu ◽  
Zhu Xiao ◽  
Vincent Havyarimana ◽  
Amelia C. Regan ◽  
...  

2021 ◽  
Vol 13 (16) ◽  
pp. 3338
Author(s):  
Xiao Xiao ◽  
Zhiling Jin ◽  
Yilong Hui ◽  
Yueshen Xu ◽  
Wei Shao

With the development of sensors and the Internet of Things (IoT), smart cities can provide people with a variety of information for a more convenient life. Effective on-street parking availability prediction can improve parking efficiency and, at times, alleviate city congestion. Conventional parking availability prediction methods often ignore the spatial–temporal features of parking duration distributions. To this end, we propose a parking space prediction scheme called hybrid spatial–temporal graph convolutional networks (HST-GCNs). We use graph convolutional networks to obtain spatial features and gated linear units (GLUs) with 1D convolutional neural networks to obtain temporal features. We then construct a spatial–temporal convolutional block to capture instantaneous spatial–temporal correlations. We further propose an attention mechanism named distAtt that measures the similarity between parking duration distributions. Through distAtt, we add long-term spatial–temporal correlations to the spatial–temporal convolutional block, allowing the model to capture complex hybrid spatial–temporal correlations and achieve higher parking availability prediction accuracy. We compare the proposed scheme with benchmark models on real-world datasets. The experimental results show that the proposed scheme performs best in predicting the parking occupancy rate.
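
As a rough illustration of the idea, the sketch below (ours, not the authors' code) implements one spatial–temporal convolutional block in PyTorch: a gated temporal convolution (GLU), a graph convolution over the lot graph, and a second gated temporal convolution, plus a distAtt-style attention that scores lots by the similarity of their parking duration histograms. All module names, layer sizes, and the softmax-over-negative-distance similarity are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STConvBlock(nn.Module):
    """Illustrative spatial-temporal convolutional block (not the paper's code):
    gated temporal conv (GLU) -> graph conv over parking lots -> gated temporal conv."""

    def __init__(self, in_ch, mid_ch, out_ch, kernel_t=3):
        super().__init__()
        # Each temporal conv emits 2x channels; F.glu halves them via gating.
        self.temp1 = nn.Conv2d(in_ch, 2 * mid_ch, (kernel_t, 1))
        self.theta = nn.Parameter(torch.randn(mid_ch, mid_ch) * 0.01)  # graph-conv weights
        self.temp2 = nn.Conv2d(mid_ch, 2 * out_ch, (kernel_t, 1))

    def forward(self, x, adj_norm):
        # x: (batch, channels, time, lots); adj_norm: normalized lot adjacency.
        x = F.glu(self.temp1(x), dim=1)                      # gated temporal conv
        x = torch.einsum('bctn,nm->bctm', x, adj_norm)       # mix neighboring lots
        x = F.relu(torch.einsum('bctn,cd->bdtn', x, self.theta))
        return F.glu(self.temp2(x), dim=1)                   # gated temporal conv

def dist_att(duration_hists):
    # duration_hists: (lots, bins) empirical parking-duration histograms.
    # Hypothetical distAtt: closer duration distributions get larger weights.
    sim = -torch.cdist(duration_hists, duration_hists)
    return torch.softmax(sim, dim=-1)                        # (lots, lots) attention
```

One plausible reading of the abstract is that the dist_att weights act as a second, similarity-based adjacency fed into the block to inject the long-term correlations; how the paper actually combines the two is not specified here.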


2020 ◽  
Vol 34 (02) ◽  
pp. 1342-1350 ◽  
Author(s):  
Uttaran Bhattacharya ◽  
Trisha Mittal ◽  
Rohan Chandra ◽  
Tanmay Randhavane ◽  
Aniket Bera ◽  
...  

We present a novel classifier network, STEP, that classifies perceived human emotion from gaits using a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits gait features to classify the perceived emotion as one of four categories: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated by a novel generative network, STEP-Gen, built on an ST-GCN-based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve STEP's classification accuracy. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions along with thousands of synthetic gaits. In practice, STEP learns the affective features and achieves a classification accuracy of 88% on E-Gait, which is 14–30% more accurate than prior methods.
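
For intuition, here is a minimal, self-contained PyTorch sketch of an ST-GCN-style gait classifier of the kind the abstract describes. It is not STEP itself: the layer shapes, adjacency handling, and pooling are our assumptions, and the push-pull-regularized STEP-Gen CVAE is omitted entirely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMOTIONS = ('happy', 'sad', 'angry', 'neutral')

class GaitSTGCN(nn.Module):
    """Illustrative ST-GCN-style classifier (not STEP itself): a per-joint
    feature lift, a graph convolution over skeleton joints, a temporal
    convolution over frames, then global pooling and a 4-way emotion head."""

    def __init__(self, adj_norm, in_ch=3, hidden=64):
        super().__init__()
        self.register_buffer('adj', adj_norm)                # normalized joint adjacency
        self.lift = nn.Conv2d(in_ch, hidden, kernel_size=1)  # per-joint feature lift
        self.temporal = nn.Conv2d(hidden, hidden, (9, 1), padding=(4, 0))
        self.head = nn.Linear(hidden, len(EMOTIONS))

    def forward(self, x):
        # x: (batch, 3, frames, joints) -- 3D joint positions over time.
        x = self.lift(x)
        x = F.relu(torch.einsum('bctv,vw->bctw', x, self.adj))  # spatial graph conv
        x = F.relu(self.temporal(x))                            # temporal conv
        x = x.mean(dim=(2, 3))                                  # pool over time and joints
        return self.head(x)                                     # logits over 4 emotions

# Toy usage with a 16-joint skeleton and 48 frames:
adj = torch.eye(16)                   # placeholder adjacency; use the real skeleton graph
logits = GaitSTGCN(adj)(torch.randn(2, 3, 48, 16))
print(logits.shape)                   # torch.Size([2, 4])
```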


2020 ◽  
Author(s):  
Zhiwei Hu ◽  
Tao Wu ◽  
Yunan Zhang ◽  
Jintao Li ◽  
Longsheng Jiang
