A Multi-Interval Method for Discretizing Continuous-Time Event Sequences

Author(s):  
Austin D. Lewis ◽  
Katrina M. Groth


2021 ◽  
Author(s):  
Vinayak Gupta ◽  
Srikanta Bedathur

A large fraction of the data generated by human activities such as online purchases, health records, and spatial mobility can be represented as continuous-time event sequences (CTES), i.e., sequences of discrete events over continuous time. Learning neural models over CTES is a non-trivial task, as it involves modeling the ever-increasing event timestamps, inter-event time gaps, event types, and the influences between events within and across sequences. Moreover, existing sequence modeling techniques assume a complete-observation scenario, i.e., the event sequence being modeled is fully observed with no missing events – an ideal setting that is rarely applicable in real-world applications. In this paper, we highlight our approach [8] for modeling CTES with intermittent observations. Buoyed by the recent success of neural marked temporal point processes (MTPP) for modeling the generative distribution of CTES, we provide a novel unsupervised model and inference method for learning an MTPP in the presence of event sequences with missing events. Specifically, we first model the generative processes of observed events and missing events using two MTPPs, where the missing events are represented as latent random variables. We then devise an unsupervised training method that jointly learns both MTPPs via variational inference. Experiments on real-world datasets show that our modeling framework outperforms state-of-the-art techniques at future event prediction and imputation. This work appeared in AISTATS 2021.
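For readers unfamiliar with MTPP likelihoods, the following is a minimal, illustrative sketch of how a sequence's log-likelihood is computed under a classic self-exciting (Hawkes-type) intensity. This is not the model from the paper; the function name and parameters are hypothetical, and the intensity is the textbook exponential-kernel form.

```python
import numpy as np

def hawkes_loglik(times, mu, alpha, beta):
    """Log-likelihood of event times on [0, T], T = times[-1], under
    the Hawkes intensity
        lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    """
    T = times[-1]
    ll = 0.0
    for j, t in enumerate(times):
        # intensity just before the j-th event, driven by earlier events
        lam = mu + alpha * np.sum(np.exp(-beta * (t - times[:j])))
        ll += np.log(lam)
    # compensator: integral of lambda(t) over [0, T], in closed form
    comp = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return ll - comp
```

With alpha = 0 this reduces to a homogeneous Poisson process, so the value can be checked against n * log(mu) - mu * T.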


Author(s):  
Oleksandr Shchur ◽  
Ali Caner Türkmen ◽  
Tim Januschowski ◽  
Stephan Günnemann

Temporal point processes (TPPs) are probabilistic generative models for continuous-time event sequences. Neural TPPs combine fundamental ideas from the point process literature with deep learning approaches, enabling the construction of flexible and efficient models. The topic of neural TPPs has attracted significant attention in recent years, leading to the development of numerous new architectures and applications for this class of models. In this review paper, we aim to consolidate the existing body of knowledge on neural TPPs. Specifically, we focus on important design choices and general principles for defining neural TPP models. Next, we provide an overview of application areas commonly considered in the literature. We conclude the survey with a list of open challenges and important directions for future work in the field of neural TPPs.
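A recurring design choice in neural TPPs is splitting the model into a history encoder and an intensity decoder. The sketch below illustrates that split with untrained random weights in plain numpy; all names, sizes, and the simple constant-intensity decoder are illustrative assumptions, not any particular published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden state size (arbitrary)

# Random (untrained) weights of a one-layer RNN history encoder.
W_h = rng.normal(scale=0.3, size=(D, D))
W_x = rng.normal(scale=0.3, size=D)
b_h = np.zeros(D)
w_out = rng.normal(scale=0.3, size=D)

def encode_history(inter_event_gaps):
    """Summarize the observed inter-event gaps into a fixed-size vector h."""
    h = np.zeros(D)
    for g in inter_event_gaps:
        h = np.tanh(W_h @ h + W_x * g + b_h)
    return h

def conditional_intensity(h):
    """Decode a history-dependent (here, time-constant) intensity.
    The softplus keeps lambda strictly positive."""
    return np.log1p(np.exp(w_out @ h))

h = encode_history([0.4, 1.2, 0.7])
lam = conditional_intensity(h)
# With a constant decoded intensity, the next inter-event gap is
# Exponential(lam), so its expected value is 1 / lam.
```

Real neural TPPs replace the constant decoder with a time-varying one (e.g., intensity-free or mixture-density decoders) and train all weights by maximum likelihood.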


2007 ◽  
Vol 44 (02) ◽  
pp. 285-294 ◽  
Author(s):  
Qihe Tang

We study the tail behavior of discounted aggregate claims in a continuous-time renewal model. For the case of Pareto-type claims, we establish a tail asymptotic formula, which holds uniformly in time.
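For concreteness, the discounted aggregate claims process studied in such renewal models is typically written as follows; the notation here is standard but assumed, not quoted from the paper.

```latex
% Discounted aggregate claims up to time t, with i.i.d. claim sizes X_i,
% renewal arrival times \tau_i, and constant force of interest r > 0:
D(t) = \sum_{i=1}^{\infty} X_i \, e^{-r \tau_i} \, \mathbf{1}_{\{\tau_i \le t\}} .
% Pareto-type claims means the tail is regularly varying:
% \bar F(x) = \Pr(X_1 > x) = x^{-\alpha} L(x), with L slowly varying, \alpha > 0.
```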


2018 ◽  
Vol 23 (4) ◽  
pp. 774-799 ◽  
Author(s):  
Charles C. Driver ◽  
Manuel C. Voelkle

IEE Review ◽  
1991 ◽  
Vol 37 (6) ◽  
pp. 228 ◽  
Author(s):  
Stephen Barnett
