An Adaptive Construction Method of Hierarchical Spatio-Temporal Index for Vector Data under Peer-to-Peer Networks

2019, Vol. 8 (11), pp. 512
Author(s): Li, Wu, Wu, Zhao

Spatio-temporal indexing is a key technique in spatio-temporal data storage and management. Indexing methods based on space-filling curves are popular in research on the spatio-temporal indexing of vector data in non-relational (NoSQL) databases. However, existing methods mostly focus on spatial indexing, which makes it difficult to balance the efficiency of temporal and spatial queries. In addition, for non-point elements (line and polygon elements), it remains difficult to determine the optimal index level. To address these issues, this paper proposes an adaptive construction method of a hierarchical spatio-temporal index for vector data. Firstly, a joint spatio-temporal information coding based on a combination of partition-key and sort-key strategies is presented. Secondly, a multi-level expression structure for spatio-temporal elements, consisting of point and non-point elements, in the joint coding is given. Finally, an adaptive multi-level index tree is proposed to realize the spatio-temporal index (Multi-level Sphere 3, MLS3) based on the spatio-temporal characteristics of geographical entities. Comparison with the XZ3 indexing algorithm proposed by GeoMesa showed that the MLS3 method not only reasonably expresses the spatio-temporal features of non-point elements and determines their optimal index level, but also avoids storage hotspots while achieving highly efficient spatio-temporal retrieval.
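A minimal sketch of the joint partition-key/sort-key idea is given below, assuming a coarse time bin as the partition key and a Z-order (Morton) code at a fixed index level as the sort key; the function names and parameters are illustrative assumptions and do not reproduce the paper's MLS3 coding or its adaptive level selection.

```python
# Hypothetical joint spatio-temporal key for a point feature:
# partition key = coarse time bin, sort key = Z-order code of the spatial cell.

def z_order(x_cell: int, y_cell: int, level: int) -> int:
    """Interleave the bits of the x and y cell indices (Morton code)."""
    code = 0
    for bit in range(level):
        code |= ((x_cell >> bit) & 1) << (2 * bit)
        code |= ((y_cell >> bit) & 1) << (2 * bit + 1)
    return code

def joint_key(lon: float, lat: float, timestamp: int,
              level: int = 12, time_bin_seconds: int = 86400) -> str:
    """Build a 'partition|sort' key for a point at a fixed index level."""
    cells = 1 << level                              # grid cells per axis at this level
    x_cell = min(int((lon + 180.0) / 360.0 * cells), cells - 1)
    y_cell = min(int((lat + 90.0) / 180.0 * cells), cells - 1)
    partition_key = timestamp // time_bin_seconds   # coarse time bin (partition key)
    sort_key = z_order(x_cell, y_cell, level)       # spatial order within the bin (sort key)
    hex_width = (2 * level + 3) // 4                # hex digits needed for 2*level bits
    return f"{partition_key:08d}|{sort_key:0{hex_width}x}"

# Example: key for a point in Beijing on 2019-11-01 (UTC epoch seconds).
print(joint_key(116.40, 39.90, 1572566400))
```

Keys built this way cluster records of the same time bin on the same partition while keeping spatially close features adjacent in sort order, which is the property the partition/sort-key combination is meant to exploit.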

Author(s): Amirhessam Tahmassebi, Katja Pinker-Domenig, Anke Meyer-Baese, Antonio Garcia, Diego P. Morales, ...

Author(s): Mujtaba Asad, He Jiang, Jie Yang, Enmei Tu, Aftab A. Malik

Detection of violent human behavior is necessary for public safety and monitoring. However, it demands constant human observation and attention in human-based surveillance systems, which is a challenging task. Autonomous detection of violent human behavior is therefore essential for continuous, uninterrupted video surveillance. In this paper, we propose a novel method for violence detection and localization in videos using a fusion of spatio-temporal features and an attention model. The model consists of a Fusion Convolutional Neural Network (Fusion-CNN), spatio-temporal attention modules, and Bi-directional Convolutional LSTMs (BiConvLSTM). The Fusion-CNN learns both spatial and temporal features by combining multi-level inter-layer features from both RGB and optical flow input frames. The spatial attention module generates an importance mask that focuses on the most important areas of the image frame. The temporal attention part, based on BiConvLSTM, identifies the video frames most relevant to violent activity. The proposed model can also localize and discriminate prominent regions in both the spatial and temporal domains, given weakly supervised training with only video-level classification labels. Experimental results on different publicly available benchmark datasets show the superior performance of the proposed model in comparison with existing methods. Our model achieves improved accuracies (ACC) of 89.1%, 99.1%, and 98.15% on the RWF-2000, HockeyFight, and Crowd-Violence datasets, respectively. For the CCTV-FIGHTS dataset, we use the mean average precision (mAP) metric, and our model obtains 80.7% mAP.
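Below is a minimal sketch of a spatial attention mask over fused feature maps, written in PyTorch; the SpatialAttention class, layer choices, and channel sizes are assumptions for illustration and do not reproduce the paper's Fusion-CNN, BiConvLSTM, or temporal attention components.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weight feature maps with a single-channel importance mask."""

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolution collapses channels to a one-channel importance score map.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width) fused RGB/optical-flow maps
        mask = torch.sigmoid(self.score(features))   # importance values in (0, 1)
        return features * mask                       # emphasize important regions

# Example: re-weight a batch of fused feature maps.
fused = torch.randn(2, 256, 14, 14)
attended = SpatialAttention(256)(fused)
print(attended.shape)  # torch.Size([2, 256, 14, 14])
```

Because the mask is learned from video-level labels only, the same map can be inspected at inference time to localize the regions the network found most indicative of violence.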


2021
Author(s): Monir Torabian, Hossein Pourghassem, Homayoun Mahdavi-Nasab

2021, pp. 115472
Author(s): Parameshwaran Ramalingam, Lakshminarayanan Gopalakrishnan, Manikandan Ramachandran, Rizwan Patan

2016, Vol. 12, pp. P1115-P1115
Author(s): Vera Niederkofler, Christina Hoeller, Joerg Neddens, Ewald Auer, Heinrich Roemer, ...

2021, Vol. 12 (6), pp. 1-23
Author(s): Shuo Tao, Jingang Jiang, Defu Lian, Kai Zheng, Enhong Chen

Mobility prediction plays an important role in a wide range of location-based applications and services. However, there are three problems in the existing literature: (1) explicit high-order interactions of spatio-temporal features are not systematically modeled; (2) most existing algorithms place attention mechanisms on top of a recurrent network, so they do not allow full parallelism and are inferior to self-attention for capturing long-range dependencies; (3) most of the literature does not make good use of long-term historical information and does not effectively model users' long-term periodicity. To this end, we propose MoveNet and RLMoveNet. MoveNet is a self-attention-based sequential model that predicts each user's next destination based on her most recent visits and historical trajectory. MoveNet first introduces a cross-based learning framework for modeling feature interactions. With self-attention on both the most recent visits and the historical trajectory, MoveNet uses the attention mechanism to capture the user's long-term regularity more efficiently. Building on MoveNet, to model long-term periodicity more effectively, we add a reinforcement learning layer and name the result RLMoveNet. RLMoveNet treats human mobility prediction as a reinforcement learning problem, using the reinforcement learning layer as a regularization component that drives the model to attend to behavior with periodic actions, which makes the algorithm more effective. We evaluate both models on three real-world mobility datasets. MoveNet outperforms the state-of-the-art mobility predictor by around 10% in terms of accuracy, while achieving faster convergence and over 4x training speedup. Moreover, RLMoveNet achieves higher prediction accuracy than MoveNet, which shows that explicitly modeling periodicity from a reinforcement learning perspective is more effective.
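A minimal sketch of self-attention over a user's recent visits for next-location prediction is shown below, written in PyTorch; the NextLocationModel class, the embedding choices, and the single encoder layer are assumptions for illustration and do not reproduce MoveNet's cross-based feature interactions, historical-trajectory attention, or RLMoveNet's reinforcement learning layer.

```python
import torch
import torch.nn as nn

class NextLocationModel(nn.Module):
    """Self-attention over recent visits, scoring candidate next locations."""

    def __init__(self, num_locations: int, dim: int = 64, heads: int = 4):
        super().__init__()
        self.loc_embed = nn.Embedding(num_locations, dim)
        self.hour_embed = nn.Embedding(24, dim)          # coarse temporal feature
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)
        self.out = nn.Linear(dim, num_locations)

    def forward(self, locations: torch.Tensor, hours: torch.Tensor) -> torch.Tensor:
        # locations, hours: (batch, seq_len) integer ids of recent visits
        x = self.loc_embed(locations) + self.hour_embed(hours)
        h = self.encoder(x)                               # self-attention over the visit sequence
        return self.out(h[:, -1])                         # logits for the next location

# Example: score candidate next locations for two users' recent visits.
model = NextLocationModel(num_locations=1000)
locs = torch.randint(0, 1000, (2, 10))
hrs = torch.randint(0, 24, (2, 10))
print(model(locs, hrs).shape)  # torch.Size([2, 1000])
```

Because the encoder attends to all visit pairs in a single pass rather than stepping through a recurrent network, the whole sequence can be processed in parallel, which is the parallelism advantage the abstract contrasts with attention-over-RNN designs.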


2017, Vol. 141, pp. 120-124
Author(s): Xiao Yu, Yue Zhao, Chao Li, Chaoquan Hu, Liang Ma, ...
