STAR: A Concise Deep Learning Framework for Citywide Human Mobility Prediction

Author(s):  
Hongnian Wang ◽  
Han Su
2021 ◽  
Vol 12 (6) ◽  
pp. 1-23
Author(s):  
Shuo Tao ◽  
Jingang Jiang ◽  
Defu Lian ◽  
Kai Zheng ◽  
Enhong Chen

Mobility prediction plays an important role in a wide range of location-based applications and services. However, the existing literature has three problems: (1) explicit high-order interactions of spatio-temporal features are not systematically modeled; (2) most existing algorithms place attention mechanisms on top of a recurrent network, so they cannot be fully parallelized and are inferior to self-attention at capturing long-range dependence; (3) most work does not make good use of long-term historical information and does not effectively model users' long-term periodicity. To this end, we propose MoveNet and RLMoveNet. MoveNet is a self-attention-based sequential model that predicts each user's next destination from her most recent visits and historical trajectory. MoveNet first introduces a cross-based learning framework for modeling feature interactions. By applying self-attention to both the most recent visits and the historical trajectory, MoveNet captures the user's long-term regularity more efficiently. Building on MoveNet, we add a reinforcement learning layer to model long-term periodicity more effectively and name the resulting model RLMoveNet. RLMoveNet treats human mobility prediction as a reinforcement learning problem, using the reinforcement learning layer as a regularizer that drives the model to attend to periodic behavior, which makes the algorithm more effective. We evaluate both models on three real-world mobility datasets. MoveNet outperforms the state-of-the-art mobility predictor by around 10% in terms of accuracy, and simultaneously achieves faster convergence and over a 4x training speedup. Moreover, RLMoveNet achieves higher prediction accuracy than MoveNet, which shows that explicitly modeling periodicity from a reinforcement learning perspective is more effective.
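For illustration only (this is not the authors' MoveNet code): a minimal sketch of a self-attention-based next-destination predictor in PyTorch. The model name, embedding sizes, layer counts, and the softmax classification head are all assumptions made for the example.

```python
# Hypothetical sketch of a self-attention next-location predictor (not the authors' MoveNet).
import torch
import torch.nn as nn

class NextLocationTransformer(nn.Module):
    def __init__(self, num_locations: int, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, d_model)   # location-ID embedding
        self.pos_emb = nn.Embedding(512, d_model)              # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # self-attention stack (fully parallel)
        self.head = nn.Linear(d_model, num_locations)          # score every candidate location

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (batch, seq_len) integer location IDs of the most recent check-ins
        positions = torch.arange(visits.size(1), device=visits.device)
        h = self.loc_emb(visits) + self.pos_emb(positions)
        h = self.encoder(h)                                    # attend over the whole visit sequence
        return self.head(h[:, -1])                             # logits for the next destination

model = NextLocationTransformer(num_locations=10_000)
recent_visits = torch.randint(0, 10_000, (8, 20))              # toy batch: 8 users, 20 visits each
logits = model(recent_visits)                                  # (8, 10000) scores over destinations
```

Unlike an attention layer stacked on a recurrent network, the encoder here processes the whole sequence in parallel, which is the property the abstract contrasts against RNN-based predictors.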


2020 ◽  
Author(s):  
Mohamed Aziz Bhouri ◽  
Francisco Sahli Costabal ◽  
Hanwen Wang ◽  
Kevin Linka ◽  
Mathias Peirlinck ◽  
...  

This paper presents a deep learning framework for epidemiology system identification from noisy and sparse observations with quantified uncertainty. The proposed approach employs an ensemble of deep neural networks to infer the time-dependent reproduction number of an infectious disease by formulating a tensor-based multi-step loss function that allows us to efficiently calibrate the model on multiple observed trajectories. The method is applied to a mobility and social behavior-based SEIR model of COVID-19 spread. The model is trained on Google and Unacast mobility data spanning a period of 66 days, and is able to yield accurate future forecasts of COVID-19 spread in 203 US counties within a time-window of 15 days. Strikingly, a sensitivity analysis that assesses the importance of different mobility and social behavior parameters reveals that attendance at close places, including workplaces, residential areas, and retail and recreational locations, has the largest impact on the basic reproduction number. The model enables us to rapidly probe and quantify the effects of government interventions, such as lock-down and re-opening strategies. Taken together, the proposed framework provides a robust workflow for data-driven epidemiology model discovery under uncertainty and produces probabilistic forecasts for the evolution of a pandemic that can judiciously inform policy and decision making. All codes and data accompanying this manuscript are available at https://github.com/PredictiveIntelligenceLab/DeepCOVID19.
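For illustration only (the authors' actual code is at the GitHub link above): a minimal sketch of fitting a time-dependent reproduction number with a multi-step SEIR rollout loss. The network architecture, the Euler discretization with a one-day step, the relation beta = R_t * gamma, and all constants are assumptions for the example; the ensemble used for uncertainty quantification is omitted.

```python
# Hypothetical sketch: neural parameterization of R(t) trained through a multi-step SEIR rollout.
import torch
import torch.nn as nn

sigma, gamma, N = 1 / 5.2, 1 / 14.0, 1e6          # assumed incubation rate, recovery rate, population

rt_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())

def seir_step(state, rt):
    # state: (batch, 4) = [S, E, I, R]; one Euler step with dt = 1 day
    S, E, I, R = state.unbind(dim=-1)
    beta = rt.squeeze(-1) * gamma                  # assumed link between R(t) and transmission rate
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return torch.stack([S + dS, E + dE, I + dI, R + dR], dim=-1)

def multi_step_loss(init_state, days, observed_infected, horizon=15):
    # Roll the model forward `horizon` days and penalize the whole predicted trajectory at once.
    state, preds = init_state, []
    for k in range(horizon):
        rt = rt_net(days[:, k:k + 1])              # reproduction number predicted for day k
        state = seir_step(state, rt)
        preds.append(state[:, 2])                  # infected compartment
    preds = torch.stack(preds, dim=1)              # (batch, horizon) tensor of predictions
    return torch.mean((preds - observed_infected) ** 2)

# Toy usage on synthetic data.
batch = 4
init = torch.tensor([[N - 10.0, 5.0, 5.0, 0.0]]).repeat(batch, 1)
days = torch.arange(15.0).repeat(batch, 1) / 15.0
observed = torch.rand(batch, 15) * 10
loss = multi_step_loss(init, days, observed)
loss.backward()
```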


Author(s):  
S. Miyazawa ◽  
X. Song ◽  
R. Jiang ◽  
Z. Fan ◽  
R. Shibasaki ◽  
...  

Abstract. Human mobility analysis on large-scale mobility data has contributed to multiple applications such as urban and transportation planning, disaster preparation and response, tourism, and public health. However, when unusual events happen, each individual behaves differently depending on their personal routine and background information. To improve the accuracy of crowd behavior prediction models, it is important to understand supplemental spatiotemporal topics, such as when, where, and what people observe and are interested in. In this research, we develop a model that integrates social network service (SNS) data into a human mobility prediction model as background information on mobility. We employ multi-modal deep learning models based on a long short-term memory (LSTM) architecture to incorporate SNS data into a human mobility prediction model built on Global Navigation Satellite System (GNSS) data. We process anonymized, interpolated GNSS trajectories from mobile phones into mobility sequences with discretized grid IDs, and apply several topic modeling methods to geo-tagged data to extract spatiotemporal topic features in each spatiotemporal unit matching the mobility data. Thereafter, we integrate the two datasets in multi-modal deep learning prediction models to predict city-scale mobility. The experiments show that models with SNS topics perform better than the baseline models.
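For illustration only (not the authors' model): a minimal sketch of a multi-modal LSTM that fuses a grid-ID mobility sequence with per-step SNS topic vectors. Fusion by concatenation, the number of grids and topics, and all dimensions are assumptions for the example.

```python
# Hypothetical sketch of a multi-modal LSTM for city-scale next-grid prediction.
import torch
import torch.nn as nn

class MultiModalMobilityLSTM(nn.Module):
    def __init__(self, num_grids: int, num_topics: int, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.grid_emb = nn.Embedding(num_grids, emb_dim)            # embed discretized grid IDs
        self.lstm = nn.LSTM(emb_dim + num_topics, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_grids)                    # next-grid classifier

    def forward(self, grid_seq, topic_seq):
        # grid_seq: (batch, T) grid IDs derived from GNSS trajectories
        # topic_seq: (batch, T, num_topics) spatiotemporal topic proportions from geo-tagged SNS posts
        x = torch.cat([self.grid_emb(grid_seq), topic_seq], dim=-1)  # concatenate the two modalities
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                                 # logits over the next grid cell

model = MultiModalMobilityLSTM(num_grids=2500, num_topics=20)
grids = torch.randint(0, 2500, (4, 12))                              # toy batch of 4 trajectories, 12 steps
topics = torch.rand(4, 12, 20)
next_grid_logits = model(grids, topics)                              # (4, 2500)
```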


2020 ◽  
Author(s):  
Raniyaharini R ◽  
Madhumitha K ◽  
Mishaa S ◽  
Virajaravi R

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia on CT and differentiate it from non-COVID pneumonia and non-pneumonia diseases. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external testing dataset extracted from embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers. RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
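For illustration only (not the authors' FCONet implementation): a minimal transfer-learning sketch in the same spirit, using a pre-trained ResNet50 backbone with a new 3-class head (COVID-19 pneumonia, other pneumonia, non-pneumonia). The freezing policy, head design, and input pipeline are assumptions for the example.

```python
# Hypothetical transfer-learning sketch: ResNet50 backbone, new 3-class head for chest CT slices.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                              # keep ImageNet features fixed at first
backbone.fc = nn.Linear(backbone.fc.in_features, 3)      # replace the classifier with a 3-class head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One toy training step on a fake batch of single-slice CT images resized to 224x224
# (grayscale slices replicated to 3 channels to match the pre-trained input format).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```

Unfreezing deeper backbone layers after the head converges is a common follow-up step, but whether FCONet does so is not stated in the abstract.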

