STS: Spatial–Temporal–Semantic Personalized Location Recommendation

2020 ◽  
Vol 9 (9) ◽  
pp. 538 ◽  
Author(s):  
Wenchao Li ◽  
Xin Liu ◽  
Chenggang Yan ◽  
Guiguang Ding ◽  
Yaoqi Sun ◽  
...  

The rapidly growing location-based social network (LBSN) has become a promising platform for studying users’ mobility patterns. Many online applications can be built on such studies, among which recommending locations is of particular interest. Previous studies have shown the importance of spatial and temporal influences on location recommendation; however, most existing approaches build a universal spatial–temporal model for all users, despite the fact that users demonstrate heterogeneous check-in behavior patterns. To realize truly personalized location recommendation, we propose a Gaussian process based model for each user that systematically and non-linearly combines temporal and spatial information to predict the user’s displacement from their current checked-in location to the next one. The locations whose distances to the user’s current checked-in location are closest to the predicted displacement are recommended. We also propose an enhancement that takes into account the category information of locations for semantic-aware recommendation. A unified recommendation framework called spatial–temporal–semantic (STS) combines displacement prediction and the semantic-aware enhancement to provide the final top-N recommendation. Extensive experiments on real datasets show that the proposed STS framework significantly outperforms state-of-the-art location recommendation models in terms of precision and mean reciprocal rank (MRR).
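To make the displacement-prediction step concrete, here is a minimal sketch (not the authors' implementation) using scikit-learn's Gaussian process regressor: a per-user GP maps hypothetical check-in features to the displacement to the next check-in, and candidate venues are ranked by how close their distance from the current check-in is to that prediction. The feature set, RBF-plus-noise kernel, and synthetic data are all assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative sketch only: a per-user GP that maps check-in features
# (hour, weekday, current coordinates -- hypothetical choices) to the
# displacement to the next check-in.

rng = np.random.default_rng(0)

# Hypothetical training data for a single user:
# feature columns = [hour, weekday, lat, lon], target = displacement in km.
X_train = rng.random((50, 4))
y_train = 2.0 + 3.0 * X_train[:, 0] + rng.normal(0.0, 0.1, size=50)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

def recommend(current_features, candidate_distances_km, top_n=5):
    """Rank candidate locations whose distance to the current check-in is
    closest to the GP-predicted displacement."""
    predicted = gp.predict(current_features.reshape(1, -1))[0]
    order = np.argsort(np.abs(candidate_distances_km - predicted))
    return order[:top_n], predicted

candidates = rng.random(20) * 10.0   # distances (km) of 20 candidate venues
top, pred = recommend(X_train[0], candidates)
print(f"predicted displacement ~ {pred:.2f} km, top candidates: {top}")
```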

2021 ◽  
Vol 11 (7) ◽  
pp. 3285
Author(s):  
Ze Pan ◽  
Zheng Tan ◽  
Qunbo Lv

Multi-frame super-resolution techniques have flourished over the past two decades. However, little attention has been paid to combining deep learning with multi-frame super-resolution. One reason is that most deep learning-based super-resolution methods cannot handle a variable number of input frames. Another is that it is hard to capture accurate temporal and spatial information because of misalignment among the input images. To solve these problems, we propose an optical-flow-based multi-frame super-resolution framework that can deal with any number of input frames. This framework makes full use of the input frames, allowing it to obtain better performance. In addition, we use a spatial subpixel alignment module for more accurate subpixel-wise spatial alignment and introduce a dual weighting module that generates weights for temporal fusion. Both modules lead to more effective and accurate temporal fusion. We compare our method with other state-of-the-art methods and conduct ablation studies. The results of qualitative and quantitative analyses show that our method achieves state-of-the-art performance, demonstrating the advantage of the designed framework and the necessity of the proposed modules.
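To illustrate the kind of weighted temporal fusion such a pipeline performs over a variable number of aligned frames, here is a minimal sketch under assumed tensor shapes; it is not the authors' architecture, and the optical-flow alignment and weight-prediction steps are replaced by placeholders.

```python
import torch

# Minimal sketch (not the paper's implementation): temporal fusion of a
# variable number of already-aligned frames using per-frame weights, which
# is the role a weighting module plays before reconstruction.

def weighted_temporal_fusion(aligned_frames: torch.Tensor,
                             weights: torch.Tensor) -> torch.Tensor:
    """
    aligned_frames: (N, C, H, W) frames already warped to the reference frame.
    weights:        (N, 1, H, W) non-negative per-frame, per-pixel weights.
    Returns a fused (C, H, W) frame; N may vary between calls, which is how
    a flow-based pipeline can accept any number of input frames.
    """
    weights = weights / (weights.sum(dim=0, keepdim=True) + 1e-8)
    return (aligned_frames * weights).sum(dim=0)

# Example with 5 hypothetical aligned frames of size 3x64x64.
frames = torch.rand(5, 3, 64, 64)
w = torch.rand(5, 1, 64, 64)
fused = weighted_temporal_fusion(frames, w)
print(fused.shape)  # torch.Size([3, 64, 64])
```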


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3226
Author(s):  
Huafeng Wang ◽  
Tao Xia ◽  
Hanlin Li ◽  
Xianfeng Gu ◽  
Weifeng Lv ◽  
...  

A very challenging task in action recognition is how to effectively extract and utilize the temporal and spatial information of video (especially temporal information). To date, many researchers have proposed various spatial-temporal convolution structures. Despite their success, most models show limited further gains, especially on highly time-dependent datasets, because they fail to identify the fusion relationship between the spatial and temporal features inside the convolution channel. In this paper, we propose a lightweight and efficient spatial-temporal extractor, the Channel-Wise Spatial-Temporal Aggregation block (CSTA block), which can be flexibly plugged into existing 2D CNNs (the resulting network is denoted CSTANet). The CSTA block uses two branches to model spatial and temporal information separately. The temporal branch is equipped with a Motion Attention (MA) module, which enhances the motion regions in a given video. We then introduce a Spatial-Temporal Channel Attention (STCA) module, which aggregates the spatial-temporal features of each block channel-wise in a self-adaptive and trainable way. Experimental results demonstrate that the proposed CSTANet achieves state-of-the-art results on the EGTEA Gaze++ and Diving48 datasets and competitive results on Something-Something V1 & V2 at a lower computational cost.
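As a rough sketch of channel-wise aggregation of spatio-temporal features (the general idea, not the published STCA design), the following squeeze-and-excitation-style block re-weights each channel of a (batch, channel, time, height, width) feature tensor with a learned, trainable gate; all sizes are placeholders.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: channel attention over spatio-temporal features,
# conveying the idea of aggregating temporal and spatial information per
# channel in a self-adaptive, trainable way. The actual STCA module may
# differ in structure and placement.

class ChannelWiseSTAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) features from a 2D backbone stacked over time.
        b, c, t, h, w = x.shape
        pooled = x.mean(dim=(2, 3, 4))            # squeeze time and space -> (B, C)
        gate = self.fc(pooled).view(b, c, 1, 1, 1)
        return x * gate                           # channel-wise re-weighting

feats = torch.rand(2, 64, 8, 14, 14)
attn = ChannelWiseSTAttention(64)
print(attn(feats).shape)  # torch.Size([2, 64, 8, 14, 14])
```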


Author(s):  
T. A. Welton

Various authors have emphasized the spatial information resident in an electron micrograph taken with adequately coherent radiation. In view of the completion of at least one such instrument, this opportunity is taken to summarize the state of the art of processing such micrographs. We use the usual symbols for the aberration coefficients and supplement these with ξ and δ for the transverse coherence length and the fractional energy spread, respectively. We also assume a weak, biologically interesting sample, with principal interest lying in the molecular skeleton remaining after obvious hydrogen loss and other radiation damage has occurred.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yangfan Xu ◽  
Xianqun Fan ◽  
Yang Hu

Enzyme-catalyzed proximity labeling (PL) combined with mass spectrometry (MS) has emerged as a revolutionary approach to reveal protein-protein interaction networks, dissect complex biological processes, and characterize the subcellular proteome in a more physiological setting than before. The enzymatic tags are being upgraded to improve temporal and spatial resolution and to obtain faster catalytic dynamics and higher catalytic efficiency. In vivo application of PL integrated with other state-of-the-art techniques has recently been adapted to live animals and plants, allowing questions to be addressed that were previously inaccessible. It is timely to summarize the current state of PL-dependent interactome studies and their potential applications. We will focus on in vivo uses of newer versions of PL and highlight critical considerations for successful in vivo PL experiments that will provide novel insights into the protein interactome in the context of human diseases.


Author(s):  
XIAN WU ◽  
JIANHUANG LAI ◽  
PONG C. YUEN

This paper proposes a novel approach for video shot-transition detection using spatio-temporal saliency. Temporal and spatial information are combined to generate a saliency map, and features are derived from changes in saliency. Considering the context of shot changes, a statistical detector is constructed to determine all types of shot transitions simultaneously, within the same framework, by minimizing the detection-error probability. An evaluation on videos of various content types demonstrates that the proposed approach outperforms a more recent method and two publicly available systems, namely VideoAnnex and VCM.
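A minimal sketch of the general idea, assuming per-frame saliency maps are already available: score each frame transition by the change in saliency and flag transitions whose score exceeds a threshold. The fixed threshold here stands in for the paper's statistical detector, which instead chooses the decision rule to minimize the detection-error probability.

```python
import numpy as np

# Minimal sketch (not the authors' detector): detect candidate shot
# transitions by thresholding the frame-to-frame change of a saliency map.
# The threshold value is a hypothetical placeholder.

def saliency_change_scores(saliency_maps: np.ndarray) -> np.ndarray:
    """saliency_maps: (T, H, W) per-frame saliency; returns T-1 change scores."""
    diffs = np.abs(np.diff(saliency_maps, axis=0))
    return diffs.mean(axis=(1, 2))

def detect_cuts(saliency_maps: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    scores = saliency_change_scores(saliency_maps)
    return np.where(scores > threshold)[0] + 1   # frames that start a new shot

# Synthetic example: an abrupt saliency change at frame 50.
maps = np.concatenate([np.zeros((50, 32, 32)), np.ones((30, 32, 32))])
print(detect_cuts(maps))  # [50]
```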


2021 ◽  
Vol 7 (4) ◽  
pp. 1-24
Author(s):  
Douglas Do Couto Teixeira ◽  
Aline Carneiro Viana ◽  
Jussara M. Almeida ◽  
Mário S. Alvim

Predicting mobility-related behavior is an important yet challenging task. On the one hand, factors such as one’s routine or preferences for a few favorite locations may help in predicting their mobility. On the other hand, several contextual factors, such as variations in individual preferences, weather, traffic, or even a person’s social contacts, can affect mobility patterns and make their modeling significantly more challenging. A fundamental approach to studying mobility-related behavior is to assess how predictable such behavior is, deriving theoretical limits on the accuracy that a prediction model can achieve given a specific dataset. This approach focuses on the inherent nature and fundamental patterns of human behavior captured in that dataset, filtering out factors that depend on the specificities of the prediction method adopted. However, the current state-of-the-art method for estimating predictability in human mobility suffers from two major limitations: low interpretability and difficulty in incorporating external factors that are known to help mobility prediction (i.e., contextual information). In this article, we revisit this state-of-the-art method, aiming to tackle these limitations. Specifically, we conduct a thorough analysis of how this widely used method works by looking into two different metrics that are easier to understand and, at the same time, capture reasonably well the effects of the original technique. We evaluate these metrics in the context of two different mobility prediction tasks, notably next-cell and next-distinct-cell prediction, which have different degrees of difficulty. Additionally, we propose alternative strategies to incorporate different types of contextual information into the existing technique. Our evaluation of these strategies offers quantitative measures of the impact of adding context to the predictability estimate, revealing the challenges of doing so in practical scenarios.
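For context, the entropy-based predictability bound commonly used in this line of work estimates the entropy rate of a user's location sequence with a Lempel-Ziv-style estimator and inverts Fano's inequality to obtain an upper bound on prediction accuracy. The sketch below is a simplified illustration of that bound, not the metrics or context-aware strategies proposed in this article; the toy trace and the exact estimator variant are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch of the classic entropy-based predictability bound:
# estimate the entropy rate of a location sequence, then invert Fano's
# inequality to get an upper bound on prediction accuracy. Details
# (e.g., log base, handling of short sequences) are simplified.

def lz_entropy(seq):
    """Entropy-rate estimate (bits/symbol) from shortest-unseen-substring lengths."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        k = 1
        while i + k <= n and any(seq[j:j + k] == seq[i:i + k] for j in range(i)):
            k += 1
        lambdas.append(k)
    return (n / sum(lambdas)) * np.log2(n)

def max_predictability(entropy_bits, n_locations):
    """Solve Fano's inequality S = H(pi) + (1 - pi) * log2(N - 1) for pi."""
    def fano(pi):
        h = -pi * np.log2(pi) - (1 - pi) * np.log2(1 - pi)
        return h + (1 - pi) * np.log2(n_locations - 1) - entropy_bits
    return brentq(fano, 1e-6, 1 - 1e-6)

trace = list("ABC" * 30)                      # highly regular toy location trace
S = lz_entropy(trace)
print(round(max_predictability(S, len(set(trace))), 3))  # close to 1 for this trace
```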


Author(s):  
Lianli Gao ◽  
Zhilong Zhou ◽  
Heng Tao Shen ◽  
Jingkuan Song

Image edge detection is considered a cornerstone task in computer vision. Due to the hierarchical nature of the representations learned by CNNs, it is intuitive to design side networks that utilize the richer convolutional features to improve edge detection. However, there is no consensus on how to integrate this hierarchical information. In this paper, we propose an effective, end-to-end framework, named Bidirectional Additive Net (BAN), for image edge detection. In the proposed framework, we focus on two main problems: 1) how to design a universal network that incorporates hierarchical information sufficiently; and 2) how to achieve effective information flow between different stages and gradually improve the edge map stage by stage. To tackle these problems, we design a consecutive bottom-up and top-down architecture, where the bottom-up branch gradually removes detailed or sharp boundaries to enable accurate edge detection, and the top-down branch offers a chance of error correction by revisiting the low-level features that contain rich textural and spatial information. An attended additive module (AAM) is designed to cumulatively refine edges by selecting pivotal features at each stage. Experimental results show that the proposed method improves edge detection performance to new records and achieves state-of-the-art results on two public benchmarks: BSDS500 and NYUDv2.
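To convey the flavor of stage-by-stage additive refinement with attention (a hedged sketch, not the paper's AAM), the block below gates the current stage's features with a learned attention map, adds them to the previous stage's features, and accumulates an edge map; channel sizes and layer choices are placeholders.

```python
import torch
import torch.nn as nn

# Rough sketch (not the paper's AAM): refine an edge map stage by stage by
# additively merging the previous estimate with the current stage's features,
# gated by a learned attention map.

class AdditiveRefine(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels * 2, channels, 1), nn.Sigmoid())
        self.to_edge = nn.Conv2d(channels, 1, 1)

    def forward(self, prev_feat, curr_feat, prev_edge):
        a = self.gate(torch.cat([prev_feat, curr_feat], dim=1))  # attention over pivotal features
        fused = prev_feat + a * curr_feat                        # additive update
        return fused, prev_edge + self.to_edge(fused)            # cumulative edge refinement

prev = torch.rand(1, 32, 80, 80)
curr = torch.rand(1, 32, 80, 80)
edge = torch.zeros(1, 1, 80, 80)
block = AdditiveRefine(32)
feat, refined_edge = block(prev, curr, edge)
print(feat.shape, refined_edge.shape)
```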


2018 ◽  
Vol 51 (1) ◽  
pp. 1-28 ◽  
Author(s):  
Zhijun Ding ◽  
Xiaolun Li ◽  
Changjun Jiang ◽  
Mengchu Zhou
