Parameterization of Continuous Time Autoregressive Models for Irregularly Sampled Time Series Data

Author(s):  
J. Belcher ◽  
J. S. Hampton ◽  
G. Tunnicliffe Wilson


2008 ◽ 
Vol 5 (25) ◽  
pp. 885-897 ◽  
Author(s):  
Simon Cauchemez ◽  
Neil M Ferguson

We present a new statistical approach to analyse epidemic time-series data. A major difficulty for inference is that (i) the latent transmission process is only partially observed and (ii) observed quantities are further aggregated temporally. We develop a data augmentation strategy to tackle these problems and introduce a diffusion process that mimics the susceptible–infectious–removed (SIR) epidemic process but is more tractable analytically. While methods based on discrete-time models require the epidemic and data collection processes to have similar time scales, our approach, based on a continuous-time model, is free of this constraint. Using simulated data, we found that all parameters of the SIR model, including the generation time, were estimated accurately if the observation interval was less than 2.5 times the generation time of the disease. Previous discrete-time TSIR models have been unable to estimate generation times, given that they assume the generation time is equal to the observation interval. However, we were unable to estimate the generation time of measles accurately from historical data. This indicates that simple models assuming homogeneous mixing (even with age structure), of the type standard in mathematical epidemiology, miss key features of epidemics in large populations.
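As a rough illustration of the kind of continuous-time model involved (not the authors' inference machinery), the sketch below simulates a diffusion approximation to the stochastic SIR process with an Euler-Maruyama scheme; the rates, population size and step size are assumed values chosen only for demonstration.

```python
import numpy as np

def simulate_sir_diffusion(beta=0.5, gamma=0.2, N=100_000, I0=10,
                           T=120.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of a diffusion approximation to the
    stochastic SIR process (illustrative, assumed parameter values)."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    S, I = float(N - I0), float(I0)
    path = np.empty((steps, 2))
    for k in range(steps):
        inf_rate = beta * S * I / N          # infection intensity
        rec_rate = gamma * I                 # removal intensity
        dW1, dW2 = rng.normal(scale=np.sqrt(dt), size=2)
        dS = -inf_rate * dt - np.sqrt(max(inf_rate, 0.0)) * dW1
        dI = (inf_rate - rec_rate) * dt + np.sqrt(max(inf_rate, 0.0)) * dW1 \
             - np.sqrt(max(rec_rate, 0.0)) * dW2
        S, I = max(S + dS, 0.0), max(I + dI, 0.0)
        path[k] = S, I
    return path

path = simulate_sir_diffusion()
print(path[::2000, 1])   # infectious compartment at coarse time points
```

Aggregating such a path over fixed reporting windows reproduces the temporally aggregated, partially observed data that the paper's inference has to handle.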


ScienceRise ◽  
2021 ◽  
pp. 12-20
Author(s):  
Andrii Belas ◽  
Petro Bidyuk

The object of research. The object of research is modeling and forecasting nonlinear nonstationary processes presented in the form of time-series data. Investigated problem. There are several popular approaches to constructing adequate models and forecasting nonlinear nonstationary processes, such as autoregressive models and recurrent neural networks, but each has its advantages and drawbacks. Autoregressive models cannot capture the nonlinear or combined influence of previous states or external factors. Recurrent neural networks are computationally expensive and cannot handle sequences of high length or frequency. The main scientific result. A model for forecasting nonlinear nonstationary processes presented in the form of time-series data was built using convolutional neural networks. The study shows that convolutional networks are superior to recurrent ones in terms of both accuracy and complexity: a more accurate model was built with far fewer parameters. This indicates that one-dimensional convolutional neural networks can be a quite reasonable choice for solving time-series forecasting problems. The area of practical use of the research results. Forecasting the dynamics of processes in economics, finance, ecology, healthcare, technical systems and other areas that exhibit nonlinear nonstationary behaviour. Innovative technological product. A methodology for using convolutional neural networks to model and forecast nonlinear nonstationary processes presented in the form of time-series data. Scope of the innovative technological product. Nonlinear nonstationary processes presented in the form of time-series data.
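As a minimal sketch of the general idea (not the authors' architecture or data), a one-dimensional convolutional network for one-step-ahead forecasting might look as follows; the synthetic series, window length and layer sizes are assumptions made for illustration, using the Keras API.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=24, horizon=1):
    """Slice a univariate series into (lookback -> horizon) training pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X)[..., None], np.array(y)

# Synthetic nonlinear, nonstationary toy series: trend + growing seasonality + noise.
t = np.arange(2000, dtype="float64")
series = 0.01 * t + (1 + 0.001 * t) * np.sin(0.05 * t) + 0.1 * np.random.randn(2000)

X, y = make_windows(series)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("one-step-ahead forecast:", model.predict(X[-1:], verbose=0).ravel())
```

A network of this size has on the order of a few thousand weights; `model.summary()` makes the parameter-count comparison against a recurrent baseline straightforward.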


2018 ◽  
Vol 206 (2) ◽  
pp. 414-446 ◽  
Author(s):  
Yuying Sun ◽  
Ai Han ◽  
Yongmiao Hong ◽  
Shouyang Wang

Water ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 1098 ◽  
Author(s):  
Benjamin D. Bowes ◽  
Jeffrey M. Sadler ◽  
Mohamed M. Morsy ◽  
Madhur Behl ◽  
Jonathan L. Goodall

Many coastal cities are facing frequent flooding from storm events that are made worse by sea level rise and climate change. The groundwater table level in these low-relief coastal cities is an important, but often overlooked, factor in the recurrent flooding these locations face. Infiltration of stormwater and water intrusion due to tidal forcing can cause already shallow groundwater tables to quickly rise toward the land surface. This decreases available storage, which increases runoff, stormwater system loads, and flooding. Groundwater table forecasts, which could help inform the modeling and management of coastal flooding, are generally unavailable. This study explores two machine learning models, Long Short-term Memory (LSTM) networks and Recurrent Neural Networks (RNN), to model and forecast groundwater table response to storm events in the flood-prone coastal city of Norfolk, Virginia. To determine the effect of training data type on model accuracy, two types of datasets, (i) the continuous time series and (ii) a dataset of only storm events, both created from observed groundwater table, rainfall, and sea level data from 2010–2018, were used to train and test the models. Additionally, a real-time groundwater table forecasting scenario was carried out to compare the models’ abilities to predict groundwater table levels given forecast rainfall and sea level as input data. When modeling the groundwater table with observed data, LSTM networks were found to have more predictive skill than RNNs (root mean squared error (RMSE) of 0.09 m versus 0.14 m, respectively). The real-time forecast scenario showed that models trained only on storm event data outperformed models trained on the continuous time series data (RMSE of 0.07 m versus 0.66 m, respectively) and that LSTM outperformed RNN models. Because models trained with the continuous time series data had much higher RMSE values, they were not suitable for predicting the groundwater table in the real-time scenario when using forecast input data. These results demonstrate the first use of LSTM networks to create hourly forecasts of the groundwater table in a coastal city and show they are well suited for creating operational forecasts in real time. As groundwater table levels increase due to sea level rise, groundwater table forecasts will become an increasingly valuable part of coastal flood modeling and management.
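A minimal sketch of the setup described (multivariate input of rainfall, sea level and groundwater table history, one-hour-ahead groundwater target, LSTM regressor); the placeholder data, window length and layer width are assumptions for illustration, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in data: hourly rainfall, sea level and groundwater table.
# In the study these come from Norfolk, VA observations (2010-2018); here they
# are synthetic placeholders that only demonstrate the input/output shapes.
hours, lookback = 5000, 48
rain = np.random.gamma(0.2, 1.0, hours).astype("float32")
sea = np.sin(np.arange(hours) * 2 * np.pi / 12.42).astype("float32")   # tidal proxy
gwt = np.cumsum(0.001 * rain - 0.0005).astype("float32")               # fake water table

features = np.stack([rain, sea, gwt], axis=-1)
X = np.stack([features[i:i + lookback] for i in range(hours - lookback)])
y = gwt[lookback:]          # groundwater table one hour ahead of each window

model = tf.keras.Sequential([
    tf.keras.Input(shape=(lookback, 3)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=128, verbose=0)
rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))
print("in-sample RMSE (placeholder data):", rmse)
```

For the real-time scenario described in the paper, the rainfall and sea level columns of each input window would be replaced with forecast values while the groundwater column remains the observed history.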


Bernoulli ◽  
2008 ◽  
Vol 14 (2) ◽  
pp. 519-542 ◽  
Author(s):  
Ross A. Maller ◽  
Gernot Müller ◽  
Alex Szimayer

2014 ◽  
Vol 24 (05) ◽  
pp. 1450063 ◽  
Author(s):  
J. S. Armand Eyebe Fouda ◽  
Bertrand Bodo ◽  
Samrat L. Sabat ◽  
J. Yves Effa

The standard binary 0-1 test for chaos detection performs poorly on oversampled time-series observations. In this paper we propose a modified 0-1 test in which the binary 0-1 test is applied to the discrete map of local maxima and minima of the observable rather than to the observable itself. The proposed approach successfully detects chaos in oversampled time-series data, which is verified through numerical simulations of the Lorenz and Duffing systems. The simulation results show the efficiency and computational gain of the proposed test for chaos detection in continuous-time dynamical systems.
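A rough sketch of the idea, not the authors' implementation: the Gottwald-Melbourne 0-1 test (in its regularized mean-square-displacement form) is applied both to an oversampled Lorenz trajectory and to the reduced series of its local maxima and minima; the integration grid, frequency range and number of random frequencies are assumed values.

```python
import numpy as np
from scipy.integrate import odeint

def local_extrema(x):
    """Reduce an oversampled series to its local maxima and minima."""
    interior = x[1:-1]
    is_max = (interior > x[:-2]) & (interior > x[2:])
    is_min = (interior < x[:-2]) & (interior < x[2:])
    return interior[is_max | is_min]

def zero_one_test(phi, n_c=20, seed=0):
    """Regularized 0-1 test: median K near 1 suggests chaos, near 0 regular dynamics."""
    rng = np.random.default_rng(seed)
    N = len(phi)
    ncut = max(N // 10, 2)
    j = np.arange(1, N + 1)
    n = np.arange(1, ncut + 1)
    K = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))          # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                      for k in n])                  # mean square displacement
        D = M - np.mean(phi) ** 2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        K.append(np.corrcoef(n, D)[0, 1])           # growth-rate correlation
    return np.median(K)

def lorenz(state, t, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return [s * (y - x), x * (r - z) - y, x * y - b * z]

t = np.linspace(0, 100, 100_000)                    # deliberately fine sampling
x = odeint(lorenz, [1.0, 1.0, 1.0], t)[:, 0]

print("K on the oversampled series:", zero_one_test(x[::10]))
print("K on the extrema map:       ", zero_one_test(local_extrema(x)))
```

The extrema map is also much shorter than the raw series, which is where the computational gain reported by the authors comes from.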


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
David Moriña ◽  
Amanda Fernández-Fontelo ◽  
Alejandra Cabaña ◽  
Pedro Puig

The main goal of this work is to present a new model able to deal with potentially misreported continuous time series. The proposed model handles the autocorrelation structure in continuous time-series data that might be partially or totally underreported or overreported. Its performance is illustrated through a comprehensive simulation study considering several autocorrelation structures and three real-data applications: human papillomavirus incidence in Girona (Catalonia, Spain) and Covid-19 incidence in two regions with very different circumstances, the early days of the epidemic in the Chinese region of Heilongjiang and more recent data from Catalonia.
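As a toy illustration of the kind of misreporting mechanism considered (not the paper's actual model), the sketch below simulates an autocorrelated latent series observed through intermittent under-reporting; the AR(1) latent structure and all parameter values are assumptions for demonstration.

```python
import numpy as np

def simulate_underreported_ar1(n=500, phi=0.7, mu=10.0, sigma=1.0,
                               omega=0.3, q=0.5, seed=1):
    """Latent AR(1) series x_t; each observation is under-reported
    (scaled by q < 1) with probability omega, otherwise seen in full."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0, sigma)
    underreported = rng.random(n) < omega
    y = np.where(underreported, q * x, x)
    return x, y, underreported

latent, observed, flags = simulate_underreported_ar1()
print("fraction of under-reported observations:", flags.mean())
print("mean latent vs observed:", latent.mean(), observed.mean())
```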


2021 ◽  
Vol 3 (1) ◽  
pp. 11
Author(s):  
Christopher G. Albert ◽  
Ulrich Callies ◽  
Udo von Toussaint

We present an approach to enhance the performance and flexibility of the Bayesian inference of model parameters from measured data. Going beyond the usual surrogate-enhanced Monte Carlo or optimization methods that focus on a scalar loss, we place emphasis on a function-valued output of formally infinite dimension. For this purpose, the surrogate models are built on a combination of linear dimensionality reduction in an adaptive basis of principal components and Gaussian process regression for the map between reduced feature spaces. Since the decoded surrogate provides the full model output rather than only the loss, it is reusable for multiple calibration measurements as well as different loss metrics and, consequently, allows for flexible marginalization over such quantities and applications to Bayesian hierarchical models. We evaluate the method’s performance based on a case study of a toy model and a simple riverine diatom model for the Elbe river. As input data, this model uses six tunable scalar parameters as well as silica concentrations in the upper reach of the river together with the continuous time series of temperature, radiation, and river discharge over a specific year. The output consists of continuous time-series data that are calibrated against corresponding measurements from the Geesthacht Weir station on the Elbe river. For this study, only two scalar inputs were considered together with a function-valued output and compared to an existing model calibration using direct simulation runs without a surrogate.
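A compact sketch of the surrogate construction described (linear reduction of the function-valued output followed by Gaussian process regression in the reduced feature space), using scikit-learn and a toy forward model in place of the diatom simulation; all names, kernel choices and parameter ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy forward model standing in for the riverine simulation: two scalar
# parameters map to a full output time series (365 daily values).
t = np.linspace(0, 1, 365)
def forward_model(theta):
    a, b = theta
    return a * np.sin(2 * np.pi * (t + b)) + 0.5 * a * t

# Build the surrogate from a small design of simulation runs.
rng = np.random.default_rng(0)
thetas = rng.uniform([0.5, 0.0], [2.0, 0.5], size=(40, 2))
runs = np.array([forward_model(th) for th in thetas])

pca = PCA(n_components=3).fit(runs)                   # reduce the output space
coeffs = pca.transform(runs)                          # reduced feature space
gps = [GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
       .fit(thetas, coeffs[:, k]) for k in range(coeffs.shape[1])]

def surrogate(theta):
    """Predict the full time series for new parameters via GP + PCA decoding."""
    z = np.array([gp.predict(np.atleast_2d(theta))[0] for gp in gps])
    return pca.inverse_transform(z[None, :])[0]

theta_new = np.array([1.2, 0.25])
err = np.max(np.abs(surrogate(theta_new) - forward_model(theta_new)))
print("max surrogate error at a held-out parameter:", err)
```

Because the decoded surrogate returns the full time series, any loss metric against the calibration measurements can be evaluated after the fact without rebuilding the surrogate.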


2014 ◽  
Vol 644-650 ◽  
pp. 2164-2168
Author(s):  
Yong Zhi Liu ◽  
Xue Ping Jia

Association rules play a significant role in mining and classifying well-delineated transactional data, but their performance is poor for continuous time-series data. Firstly, this paper characterises the trend of a time series, distinguishing rising, declining and steady trends, and proposes a time-series trend extraction method. Secondly, it defines trend association rules, including the support degree and confidence of a trend association rule. Finally, it gives an application example that shows the effectiveness of the method in the classification and association analysis of time series.
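A minimal sketch of the procedure as described (not the authors' code): a numeric series is mapped to rise/decline/steady trend symbols, and the support and confidence of a simple trend association rule are computed over consecutive trend pairs; the threshold and the rule form are assumptions.

```python
import numpy as np

def trend_symbols(series, eps=0.01):
    """Map consecutive differences to 'rise', 'decline' or 'steady' trends."""
    diffs = np.diff(series)
    return np.where(diffs > eps, "rise",
                    np.where(diffs < -eps, "decline", "steady"))

def rule_support_confidence(symbols, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over
    consecutive trend pairs (a minimal stand-in for trend association rules)."""
    pairs = list(zip(symbols[:-1], symbols[1:]))
    n_both = sum(1 for a, b in pairs if a == antecedent and b == consequent)
    n_ante = sum(1 for a, _ in pairs if a == antecedent)
    support = n_both / len(pairs)
    confidence = n_both / n_ante if n_ante else 0.0
    return support, confidence

series = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
symbols = trend_symbols(series)
print(rule_support_confidence(symbols, "rise", "rise"))
```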

