A Survey on Deep Learning for Human Mobility

2023 ◽  
Vol 55 (1) ◽  
pp. 1-44
Author(s):  
Massimiliano Luca ◽  
Gianni Barlacchi ◽  
Bruno Lepri ◽  
Luca Pappalardo

The study of human mobility is crucial due to its impact on several aspects of our society, such as disease spreading, urban planning, well-being, pollution, and more. The proliferation of digital mobility data, such as phone records, GPS traces, and social media posts, combined with the predictive power of artificial intelligence, triggered the application of deep learning to human mobility. Existing surveys focus on single tasks, data sources, mechanistic or traditional machine learning approaches, while a comprehensive description of deep learning solutions is missing. This survey provides a taxonomy of mobility tasks, a discussion on the challenges related to each task and how deep learning may overcome the limitations of traditional models, a description of the most relevant solutions to the mobility tasks described above, and the relevant challenges for the future. Our survey is a guide to the leading deep learning solutions to next-location prediction, crowd flow prediction, trajectory generation, and flow generation. At the same time, it helps deep learning scientists and practitioners understand the fundamental concepts and the open challenges of the study of human mobility.

2020 ◽  
Author(s):  
Mohamed Aziz Bhouri ◽  
Francisco Sahli Costabal ◽  
Hanwen Wang ◽  
Kevin Linka ◽  
Mathias Peirlinck ◽  
...  

This paper presents a deep learning framework for epidemiology system identification from noisy and sparse observations with quantified uncertainty. The proposed approach employs an ensemble of deep neural networks to infer the time-dependent reproduction number of an infectious disease by formulating a tensor-based multi-step loss function that allows us to efficiently calibrate the model on multiple observed trajectories. The method is applied to a mobility and social behavior-based SEIR model of COVID-19 spread. The model is trained on Google and Unacast mobility data spanning a period of 66 days, and is able to yield accurate future forecasts of COVID-19 spread in 203 US counties within a time-window of 15 days. Strikingly, a sensitivity analysis that assesses the importance of different mobility and social behavior parameters reveals that attendance of close places, including workplaces, residential, and retail and recreational locations, has the largest impact on the basic reproduction number. The model enables us to rapidly probe and quantify the effects of government interventions, such as lock-down and re-opening strategies. Taken together, the proposed framework provides a robust workflow for data-driven epidemiology model discovery under uncertainty and produces probabilistic forecasts for the evolution of a pandemic that can judiciously inform policy and decision making. All codes and data accompanying this manuscript are available at https://github.com/PredictiveIntelligenceLab/DeepCOVID19.
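
As a hedged illustration of the forward dynamics being calibrated (the explicit Euler integrator, parameter values, and variable names below are assumptions for the sketch, not the authors' implementation; in the paper the time-dependent reproduction number is inferred by an ensemble of deep neural networks), a SEIR model driven by R(t) can be simulated as follows:

```python
import numpy as np

def seir_forward(r_t, N, E0, I0, sigma=1/5.2, gamma=1/6.5, dt=1.0):
    """Integrate a SEIR model with a time-dependent reproduction number.

    r_t   : array of R(t) values, one per time step (illustrative input)
    N     : total population
    E0,I0 : initial exposed and infectious counts
    sigma : incubation rate (1 / mean incubation period, assumed value)
    gamma : recovery rate (1 / mean infectious period, assumed value)
    """
    S, E, I, R = N - E0 - I0, E0, I0, 0.0
    history = []
    for r in r_t:
        beta = r * gamma                        # transmission rate implied by R(t)
        new_exposed    = beta * S * I / N * dt
        new_infectious = sigma * E * dt
        new_recovered  = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((S, E, I, R))
    return np.array(history)

# Example: 66 days with a reproduction number that decays under interventions.
trajectory = seir_forward(r_t=np.linspace(2.5, 0.9, 66), N=1e6, E0=50, I0=10)
```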


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e14592-e14592
Author(s):  
Junshui Ma ◽  
Rongjie Liu ◽  
Gregory V. Goldmacher ◽  
Richard Baumgartner

e14592 Background: Radiomic features derived from CT scans have shown promise in predicting treatment response (Sun et al. 2018, and others). We carried out a proof-of-concept study to investigate the use of CT images to predict lesion-level response. Methods: CT images from the Merck studies KEYNOTE-010 (NCT01905657) and KEYNOTE-024 (NCT02142738) were used. Data from each study were evaluated separately and split into training (80%) and validation (20%) sets. A lesion was classified as “shrinking” if ≥30% size reduction from baseline was seen on any future scan. There were 2004 (613 shrinking vs. 1391 non-shrinking) and 588 (311 vs. 277) lesions in KN10 and KN24, respectively. 130 radiomic features were extracted, followed by a random forest to predict lesion response. In addition, end-to-end deep learning was used, which predicts the response directly from ROIs of the CT images. Models were trained in two ways: (1) using the pre-treatment baseline (BL) image only, or (2) using both BL and the first post-treatment image (V1) as predictors. Finally, to evaluate the predictive power without relying on initial lesion size, size information was omitted from the CT images. Results: Results from KN10 and KN24 are summarized in the Table. Conclusions: The results suggest that the BL CT images alone have little power to predict lesion response, while BL and the first post-baseline image together exhibit high predictive power. Although a substantial part of the predictive power can be attributed to change in ROI size, predictive power does exist in other aspects of the CT images. Overall, the radiomic signature followed by a random forest produced predictions similar to, if not better than, the deep learning approach. [Table: see text]
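
A hedged sketch of the "radiomic features followed by random forest" arm of this kind of study (the placeholder data, 80/20 split, and scikit-learn settings below are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: one row per lesion with radiomic features extracted at baseline (BL)
# and/or the first post-treatment scan (V1); y: 1 = "shrinking" lesion
# (>= 30% size reduction on any future scan), 0 = non-shrinking.
rng = np.random.default_rng(0)
X = rng.normal(size=(2004, 130))      # placeholder for 130 radiomic features
y = rng.integers(0, 2, size=2004)     # placeholder labels

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))
```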


Author(s):  
Amin Vahedian ◽  
Xun Zhou ◽  
Ling Tong ◽  
W. Nick Street ◽  
Yanhua Li

Urban dispersal events are processes in which an unusually large number of people leave the same area in a short period. Early prediction of dispersal events is important for mitigating congestion and safety risks and for making better dispatching decisions for taxi and ride-sharing fleets. Existing work mostly focuses on predicting taxi demand in the near future by learning patterns from historical data. However, these approaches fail in abnormal cases, because dispersal events with abnormally high demand are non-repetitive and violate common assumptions such as smoothness in demand change over time. Instead, in this paper we argue that dispersal events follow a complex pattern of trips and other related features in the past, which can be used to predict such events. Therefore, we formulate the dispersal event prediction problem as a survival analysis problem. We propose a two-stage framework (DILSA), in which a deep learning model combined with survival analysis is developed to predict the probability of a dispersal event and its demand volume. We conduct extensive case studies and experiments on the NYC Yellow taxi dataset from 2014–2016. Results show that DILSA can predict events in the next 5 hours with an F1-score of 0.7 and an average time error of 18 minutes. It is orders of magnitude better than state-of-the-art deep learning approaches for taxi demand prediction.
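
A minimal sketch of a discrete-time survival formulation of the kind the abstract describes (the network shape, feature dimensionality, and person-period loss below are illustrative assumptions, not the DILSA architecture itself): a network outputs a hazard for each future hour, and the loss is a masked binary cross-entropy over the hours in which each region is still at risk.

```python
import torch
import torch.nn as nn

HORIZON = 5  # predict dispersal events within the next 5 hours

class HazardNet(nn.Module):
    """Maps historical trip/demand features to a per-hour event hazard."""
    def __init__(self, n_features: int, horizon: int = HORIZON):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, horizon))

    def forward(self, x):
        # hazard[:, k] = P(event in hour k+1 | no event in earlier hours)
        return torch.sigmoid(self.net(x))

def survival_nll(hazard, event_mask, at_risk_mask):
    """Discrete-time survival loss: binary cross-entropy restricted to the
    hours in which the region is still at risk (no event yet, not censored)."""
    bce = nn.functional.binary_cross_entropy(hazard, event_mask, reduction="none")
    return (bce * at_risk_mask).sum() / at_risk_mask.sum()

# Toy batch: 8 regions, 32 historical features each (sizes are assumptions).
x = torch.randn(8, 32)
event_mask = torch.zeros(8, HORIZON)
event_mask[0, 2] = 1.0                 # region 0: event occurs in hour 3
at_risk_mask = torch.ones(8, HORIZON)
at_risk_mask[0, 3:] = 0.0              # no longer at risk after the event
loss = survival_nll(HazardNet(32)(x), event_mask, at_risk_mask)
```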


Author(s):  
Xiaoyan Mu ◽  
Anthony Gar-On Yeh ◽  
Xiaohu Zhang

The rapid spread of infectious diseases is devastating to the healthcare systems of all countries. The dynamics of the spatial spread of epidemics have received considerable scientific attention. However, the understanding of the spatial variation of epidemic severity in the urban system is lagging. Using synchronized epidemic data and human mobility data, integrated with other multiple-sourced data, this study examines the interplay between the spread of coronavirus disease (COVID-19) and inter-city and intra-city mobility among 319 Chinese cities. The results show a disease spreading process consisting of a major transfer (inter-city) diffusion before the Chinese New Year and a subsequent local (intra-city) diffusion after the Chinese New Year in the urban system of China. The variations in disease incidence between cities are mainly driven by inter-city mobility from Wuhan, the epidemic center of COVID-19. Cities that are closer to the epidemic center and have larger urban populations face higher risks of disease incidence. Warm and humid weather could help mitigate the spread of COVID-19. The extensive inter-city and intra-city travel interventions in China reduced inter-city and intra-city mobility by approximately 70% and 40%, respectively, and effectively slowed down the spread of the disease by minimizing human-to-human transmission, together with other disease monitoring, control, and preventive measures. These findings could provide valuable insights into understanding the dynamics of disease spread in the urban system and help respond to new waves of the pandemic in China and other parts of the world.


2019 ◽  
Vol 22 (63) ◽  
pp. 81-100 ◽  
Author(s):  
Antonela Tommasel ◽  
Juan Manuel Rodriguez ◽  
Daniela Godoy

With the widespread adoption of modern technologies and social media networks, a new form of bullying that can occur anytime and anywhere has emerged. This new phenomenon, known as cyberaggression or cyberbullying, refers to aggressive and intentional acts aimed at repeatedly causing harm to another person through rude, insulting, offensive, teasing or demoralising comments on online social media. As these aggressions represent a threatening experience to Internet users, especially kids and teens who are still shaping their identities, social relations and well-being, it is crucial to understand how cyberbullying occurs in order to prevent it from escalating. Considering the massive amount of information on the Web, the development of intelligent techniques for automatically detecting harmful content is gaining importance, allowing the monitoring of large-scale social media and the early detection of unwanted and aggressive situations. Even though several approaches based on both traditional and deep learning techniques have been developed over the last few years, several concerns arise over the duplication of research and the difficulty of comparing results. Moreover, there is no agreement regarding which type of technique is better suited to the task, nor the type of features on which learning should be based. The goal of this work is to shed some light on the effects of learning paradigms and feature engineering approaches for detecting aggressions in social media texts. In this context, this work provides an evaluation of diverse traditional and deep learning techniques based on diverse sets of features, across multiple social media sites.
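
As a hedged illustration of the "traditional" end of this comparison only (the vectorizer settings, classifier, and toy comments are assumptions; the paper evaluates many more feature sets and deep models across several social media sites), a simple n-gram baseline for aggression detection can be set up as:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled comments (1 = aggressive, 0 = not); real corpora come from
# the social media sites evaluated in the paper.
texts = ["you are pathetic and everyone hates you",
         "great game last night!",
         "nobody wants you here, just leave",
         "thanks for sharing this article"]
labels = [1, 0, 1, 0]

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),   # word uni- and bi-grams
    LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)
print(baseline.predict(["you are a great person", "just leave, loser"]))
```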


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0242946
Author(s):  
Ahsan Noor Khan ◽  
Achintha Avin Ihalage ◽  
Yihan Ma ◽  
Baiyang Liu ◽  
Yujie Liu ◽  
...  

Emotion state recognition using wireless signals is an emerging area of research with an impact on neuroscientific studies of human behaviour and well-being monitoring. Currently, standoff emotion detection relies mostly on the analysis of facial expressions and/or eye movements acquired from optical or video cameras. Meanwhile, although machine learning approaches have been widely adopted for recognizing human emotions from multimodal data, they have mostly been restricted to subject-dependent analyses, which lack generality. In this paper, we report an experimental study which collects heartbeat and breathing signals of 15 participants from radio frequency (RF) reflections off the body, followed by novel noise filtering techniques. We propose a novel deep neural network (DNN) architecture based on the fusion of raw RF data and the processed RF signal for classifying and visualising various emotion states. The proposed model achieves a classification accuracy of 71.67% for independent subjects, with precision, recall and F1-score values of 0.71, 0.72 and 0.71, respectively. We compared our results with those obtained from five different classical ML algorithms, establishing that deep learning offers superior performance even with a limited amount of raw RF and post-processed time-sequence data. The deep learning model was also validated by comparing our results with those from ECG signals. Our results indicate that using wireless signals for standoff emotion state detection is a better alternative to other technologies, offering high accuracy and much wider applicability in future studies of the behavioural sciences.
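
A hedged sketch of a two-branch fusion network in the spirit of the one described (layer sizes, pooling, window length, and the number of emotion classes are illustrative assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class RFEmotionNet(nn.Module):
    """Fuses a raw-RF branch and a processed-signal (heartbeat/breathing) branch."""
    def __init__(self, raw_len=1024, feat_len=128, n_classes=4):
        super().__init__()
        self.raw_branch = nn.Sequential(        # 1-D conv over the raw RF window
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.sig_branch = nn.Sequential(        # MLP over the filtered physiological signal
            nn.Linear(feat_len, 64), nn.ReLU())
        self.head = nn.Linear(16 * 8 + 64, n_classes)

    def forward(self, raw_rf, processed):
        z = torch.cat([self.raw_branch(raw_rf), self.sig_branch(processed)], dim=1)
        return self.head(z)                      # logits over emotion classes

# Toy batch: 2 windows of raw RF samples plus 2 processed feature vectors.
logits = RFEmotionNet()(torch.randn(2, 1, 1024), torch.randn(2, 128))
```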


Author(s):  
S. Miyazawa ◽  
X. Song ◽  
R. Jiang ◽  
Z. Fan ◽  
R. Shibasaki ◽  
...  

Abstract. Human mobility analysis of large-scale mobility data has contributed to multiple applications such as urban and transportation planning, disaster preparation and response, tourism, and public health. However, when unusual events happen, each individual behaves differently depending on their personal routine and background information. To improve the accuracy of crowd behavior prediction models, it is important to understand supplemental spatiotemporal topics, such as when, where, and what people observe and are interested in. In this research, we develop a model integrating social network service (SNS) data into a human mobility prediction model as background information on the mobility. We employ multi-modal deep learning models with a Long Short-Term Memory (LSTM) architecture to incorporate SNS data into a human mobility prediction model based on Global Navigation Satellite System (GNSS) data. We process anonymized, interpolated GNSS trajectories from mobile phones into mobility sequences of discretized grid IDs, and apply several topic modeling methods to geo-tagged data to extract spatiotemporal topic features in spatiotemporal units matching the mobility data. We then integrate the two datasets in the multi-modal deep learning prediction models to predict city-scale mobility. The experiments show that the models with SNS topics perform better than the baseline models.
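
A hedged sketch of a multi-modal LSTM that combines a grid-ID mobility sequence with per-step SNS topic features (the grid vocabulary size, topic dimension, and fusion by concatenation are illustrative assumptions, not the paper's exact model):

```python
import torch
import torch.nn as nn

class MobilityTopicLSTM(nn.Module):
    """Predicts the next grid cell from a GNSS grid-ID sequence plus SNS topic features."""
    def __init__(self, n_grids=2500, n_topics=20, emb=64, hidden=128):
        super().__init__()
        self.grid_emb = nn.Embedding(n_grids, emb)
        self.lstm = nn.LSTM(emb + n_topics, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_grids)

    def forward(self, grid_ids, topic_feats):
        # grid_ids: (batch, T) integer cell IDs; topic_feats: (batch, T, n_topics)
        x = torch.cat([self.grid_emb(grid_ids), topic_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])               # logits over the next grid cell

# Toy batch: 4 trajectories of 12 time steps each.
logits = MobilityTopicLSTM()(torch.randint(0, 2500, (4, 12)), torch.rand(4, 12, 20))
```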


Computers ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 3
Author(s):  
Ghassan F. Bati ◽  
Vivek K. Singh

Interpersonal trust mediates multiple socio-technical systems and has implications for personal and societal well-being. Consequently, it is crucial to devise novel machine learning methods to infer interpersonal trust automatically using mobile sensor-based behavioral data. Considering that social relationships are often affected by neighboring relationships within the same network, this work proposes a novel neighbor-aware deep learning architecture (NADAL) to enhance the inference of interpersonal trust scores. Based on an analysis of call, SMS, and Bluetooth interaction data from a one-year field study involving 130 participants, we report that: (1) adding information about neighboring relationships improves trust score prediction in both shallow and deep learning approaches; and (2) a custom-designed neighbor-aware deep learning architecture outperforms a baseline feature-concatenation deep learning approach. The results obtained for interpersonal trust prediction are promising and have multiple implications for trust-aware applications in the emerging social internet of things.
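
A hedged sketch of the neighbor-aware idea (the feature dimensionality and mean-pooling of neighboring dyads are assumptions made for illustration; the paper's NADAL architecture may differ):

```python
import torch
import torch.nn as nn

class NeighborAwareTrustNet(nn.Module):
    """Scores a dyad's trust from its own behavioral features (calls, SMS,
    Bluetooth) plus a pooled summary of neighboring dyads in the same network."""
    def __init__(self, dyad_dim=24, hidden=32):
        super().__init__()
        self.dyad_enc = nn.Sequential(nn.Linear(dyad_dim, hidden), nn.ReLU())
        self.neigh_enc = nn.Sequential(nn.Linear(dyad_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, dyad, neighbors):
        # dyad: (batch, dyad_dim); neighbors: (batch, n_neighbors, dyad_dim)
        pooled = self.neigh_enc(neighbors).mean(dim=1)   # average over neighbors
        return self.head(torch.cat([self.dyad_enc(dyad), pooled], dim=1))

# Toy batch: 8 dyads, each with 5 neighboring relationships.
scores = NeighborAwareTrustNet()(torch.randn(8, 24), torch.randn(8, 5, 24))
```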


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep learning based solution. They contribute a new whiteboard image dataset and adopt two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
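
A hedged sketch of an encoder-decoder CNN for this kind of image-to-image enhancement (the depths, kernel sizes, and output activation are illustrative assumptions, not the authors' adopted architectures):

```python
import torch
import torch.nn as nn

class WhiteboardEnhancer(nn.Module):
    """Small encoder-decoder CNN mapping a degraded whiteboard photo to an
    enhanced image; layer choices are arbitrary for illustration."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Toy batch: two 256x256 RGB whiteboard photos with values in [0, 1].
enhanced = WhiteboardEnhancer()(torch.rand(2, 3, 256, 256))
```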

