Monte Carlo profile confidence intervals for dynamic systems

2017 ◽  
Vol 14 (132) ◽  
pp. 20170126 ◽  
Author(s):  
E. L. Ionides ◽  
C. Breto ◽  
J. Park ◽  
R. A. Smith ◽  
A. A. King

Monte Carlo methods to evaluate and maximize the likelihood function enable the construction of confidence intervals and hypothesis tests, facilitating scientific investigation using models for which the likelihood function is intractable. When Monte Carlo error can be made small by sufficiently exhaustive computation, the standard theory and practice of likelihood-based inference applies. As datasets become larger and models more complex, situations arise where no reasonable amount of computation can render Monte Carlo error negligible. We develop profile likelihood methodology to provide frequentist inferences that take into account Monte Carlo uncertainty. We investigate the role of this methodology in facilitating inference for computationally challenging dynamic latent variable models. We present examples arising in the study of infectious disease transmission, demonstrating our methodology for inference on nonlinear dynamic models using genetic sequence data and panel time-series data. We also discuss applicability to nonlinear time-series and spatio-temporal data.
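The core construction can be sketched in a few lines: evaluate the profile log-likelihood at grid points with Monte Carlo error, smooth the evaluations with a least-squares quadratic, and invert the likelihood-ratio cutoff. The sketch below (function name and interface hypothetical) omits the paper's further step of inflating the cutoff to account for Monte Carlo error remaining in the smoothed curve.

```python
import numpy as np
from scipy.stats import chi2

def mc_profile_ci(theta_grid, profile_loglik, confidence=0.95):
    """Fit a quadratic to noisy Monte Carlo evaluations of a profile
    log-likelihood and read off a likelihood-ratio confidence interval.
    A simplified sketch, not the authors' full procedure."""
    # The least-squares quadratic smooths over Monte Carlo noise.
    a, b, c = np.polyfit(theta_grid, profile_loglik, deg=2)
    theta_hat = -b / (2.0 * a)                      # smoothed maximizer
    loglik_hat = np.polyval([a, b, c], theta_hat)
    cutoff = chi2.ppf(confidence, df=1) / 2.0       # 1.92 at 95%
    # Interval ends solve a*x^2 + b*x + c = loglik_hat - cutoff.
    lo, hi = np.sort(np.roots([a, b, c - (loglik_hat - cutoff)]).real)
    return theta_hat, (lo, hi)
```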

2021 ◽  
Author(s):  
Klaus B. Beckmann ◽  
Lennart Reimer

This monograph generalises and extends the classic dynamic models in conflict analysis (Lanchester 1916, Richardson 1919, Boulding 1962). Restrictions on parameters are relaxed to account for alliances and for peacekeeping. Incrementalist as well as stochastic versions of the model are reviewed. These extensions allow for a rich variety of patterns of dynamic conflict. Using Monte Carlo techniques as well as time series analyses based on GDELT data (for the Ethiopian–Eritrean war, 1998–2000), we also assess the empirical usefulness of the model. It turns out that linear dynamic models capture selected phases of the conflict quite well, offering a potential taxonomy for conflict dynamics. We also discuss a method for introducing a modicum of (bounded) rationality into models from this tradition.
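As a point of reference for the models being generalised, the sketch below simulates a discrete-time Richardson arms race with optional Gaussian shocks. Parameter names follow Richardson's classic formulation (reaction coefficients a, b; fatigue terms m, n; grievances g, h), not the monograph's own specification.

```python
import numpy as np

def simulate_richardson(a, m, g, b, n, h, x0, y0, steps=200, sigma=0.0, rng=None):
    """Discrete-time Richardson arms-race dynamics (illustrative only).
    x, y are the two sides' armament/hostility levels; sigma > 0 gives
    the stochastic variant of the model."""
    rng = np.random.default_rng(rng)
    x, y = np.empty(steps), np.empty(steps)
    x[0], y[0] = x0, y0
    for t in range(1, steps):
        x[t] = x[t-1] + a * y[t-1] - m * x[t-1] + g + sigma * rng.standard_normal()
        y[t] = y[t-1] + b * x[t-1] - n * y[t-1] + h + sigma * rng.standard_normal()
    return x, y

# Example: mutual escalation damped by fatigue, with small shocks.
x, y = simulate_richardson(0.3, 0.4, 0.1, 0.25, 0.35, 0.05,
                           x0=1.0, y0=0.5, sigma=0.05)
```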


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1078
Author(s):  
Ruxandra Stoean ◽  
Catalin Stoean ◽  
Miguel Atencia ◽  
Roberto Rodríguez-Labrada ◽  
Gonzalo Joya

Uncertainty quantification in deep learning models is especially important for medical applications of this complex and successful type of neural architecture. One popular technique is Monte Carlo dropout, which yields a sample of outputs for each record that can be summarized statistically in terms of the average probability and variance for each diagnostic class of the problem. The current paper puts forward a convolutional–long short-term memory network model with a Monte Carlo dropout layer for obtaining information regarding the model uncertainty for saccadic records of all patients. These are next used in assessing the uncertainty of the learning model at the higher level of sets of multiple records (i.e., registers) that are gathered for one patient case by the examining physician towards an accurate diagnosis. Means and standard deviations are additionally calculated for the Monte Carlo uncertainty estimates of groups of predictions. These serve as a new collection of features on which a random forest model can perform both classification and ranking of variable importance. The approach is validated on a real-world problem of classifying electrooculography time series for an early detection of spinocerebellar ataxia 2 and reaches an accuracy of 88.59% in distinguishing between the three classes of patients.
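A minimal sketch of the two-stage pipeline, assuming a Keras model with dropout layers and a hypothetical feature layout for the register-level aggregation:

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

def mc_dropout_predict(model, x, n_samples=100):
    """Run a Keras model repeatedly with dropout kept active
    (training=True) and summarize the per-class predictive samples."""
    preds = np.stack([model(x, training=True).numpy()
                      for _ in range(n_samples)])
    # Per-record mean probability and spread, shape (records, classes).
    return preds.mean(axis=0), preds.std(axis=0)

def register_features(mean_probs, std_probs):
    """Aggregate record-level uncertainty summaries over one patient's
    register into a fixed-length feature vector (hypothetical layout)."""
    return np.concatenate([mean_probs.mean(axis=0), mean_probs.std(axis=0),
                           std_probs.mean(axis=0), std_probs.std(axis=0)])

# Second stage, mirroring the register-level classification:
# features, labels = ...  (one row per patient register)
# rf = RandomForestClassifier(n_estimators=500).fit(features, labels)
# rf.feature_importances_ then ranks the uncertainty summaries.
```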


2011 ◽  
Vol 19 (2) ◽  
pp. 188-204 ◽  
Author(s):  
Jong Hee Park

In this paper, I introduce changepoint models for binary and ordered time series data based on Chib's hidden Markov model. The extension of the changepoint model to a binary probit model is straightforward in a Bayesian setting. However, detecting parameter breaks from ordered regression models is difficult because ordered time series data often have clustering along the break points. To address this issue, I propose an estimation method that uses the linear regression likelihood function for the sampling of hidden states of the ordinal probit changepoint model. The marginal likelihood method is used to detect the number of hidden regimes. I evaluate the performance of the introduced methods using simulated data and apply the ordinal probit changepoint model to the study of Eichengreen, Watson, and Grossman on violations of the “rules of the game” of the gold standard by the Bank of England during the interwar period.
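The role of the marginal likelihood in choosing between regime counts can be illustrated with a toy Beta-Bernoulli stand-in for a single break in a binary series; this is far simpler than the paper's Chib-style hidden Markov sampler, and all names are hypothetical.

```python
import numpy as np
from scipy.special import betaln, logsumexp

def seg_logml(z):
    """Beta(1,1)-Bernoulli marginal likelihood of one constant regime
    (z is a 0/1 numpy array)."""
    k, n = z.sum(), len(z)
    return betaln(1 + k, 1 + n - k) - betaln(1, 1)

def log_marginal(y, one_break=True):
    """Marginal likelihood of a binary series under zero or one break,
    with a uniform prior over break locations."""
    y = np.asarray(y)
    if not one_break:
        return seg_logml(y)
    terms = [seg_logml(y[:t]) + seg_logml(y[t:]) for t in range(1, len(y))]
    return logsumexp(terms) - np.log(len(y) - 1)

# A log Bayes factor above zero favours the single-break model:
# log_bf = log_marginal(y) - log_marginal(y, one_break=False)
```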


2020 ◽  
Vol 23 (4) ◽  
pp. 607-619 ◽  
Author(s):  
Matthew P. Adams ◽  
Scott A. Sisson ◽  
Kate J. Helmstedt ◽  
Christopher M. Baker ◽  
Matthew H. Holden ◽  
...  

1999 ◽  
Vol 87 (2) ◽  
pp. 530-537 ◽  
Author(s):  
Lynn J. Groome ◽  
Donna M. Mooney ◽  
Scherri B. Holland ◽  
Lisa A. Smith ◽  
Jana L. Atterbury ◽  
...  

Approximate entropy (ApEn) is a statistic that quantifies regularity in time series data, and this parameter has several features that make it attractive for analyzing physiological systems. In this study, ApEn was used to detect nonlinearities in the heart rate (HR) patterns of 12 low-risk human fetuses between 38 and 40 wk of gestation. The fetal cardiac electrical signal was sampled at a rate of 1,024 Hz by using Ag-AgCl electrodes positioned across the mother’s abdomen, and fetal R waves were extracted by using adaptive signal processing techniques. To test for nonlinearity, ApEn for the original HR time series was compared with ApEn for three dynamic models: temporally uncorrelated noise, linearly correlated noise, and linearly correlated noise with nonlinear distortion. Each model had the same mean and SD in HR as the original time series, and one model also preserved the Fourier power spectrum. We estimated that noise accounted for 17.2–44.5% of the total between-fetus variance in ApEn. Nevertheless, ApEn for the original time series data still differed significantly from ApEn for the three dynamic models for both group comparisons and individual fetuses. We concluded that the HR time series, in low-risk human fetuses, could not be modeled as temporally uncorrelated noise, linearly correlated noise, or static filtering of linearly correlated noise.
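For reference, a direct implementation of ApEn(m, r) with the common convention r = 0.2 × SD of the series (function name hypothetical):

```python
import numpy as np

def approx_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series, with the
    tolerance r set to r_factor * SD(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(m):
        # Embed the series into overlapping length-m templates.
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between all template pairs.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)       # match frequency per template
        return np.log(c).mean()
    return phi(m) - phi(m + 1)
```

The three comparison models in the study correspond to standard surrogate-data constructions: temporally uncorrelated noise to shuffled values, linearly correlated noise to phase-randomized series preserving the power spectrum, and nonlinearly distorted linear noise to amplitude-adjusted surrogates.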


2003 ◽  
Vol 06 (02) ◽  
pp. 119-134 ◽  
Author(s):  
LUIS A. GIL-ALANA

In this article we propose the use of a version of the tests of Robinson [32] for testing unit and fractional roots in financial time series data. The tests have a standard null limit distribution and are the most efficient ones in the context of Gaussian disturbances. We compute finite-sample critical values based on non-Gaussian disturbances, and the power properties of the tests are compared using both the asymptotic and the finite-sample (Gaussian and non-Gaussian) critical values. The tests are applied to the monthly structure of several stock market indexes, and the results show that if the underlying I(0) disturbances are white noise, the confidence intervals include the unit root; however, if they are autocorrelated, the unit root is rejected in favour of smaller degrees of integration. Using t-distributed critical values, the confidence intervals for the non-rejection values are generally narrower than with the asymptotic or the Gaussian finite-sample ones, suggesting that they may better describe the time series behaviour of the data examined.
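The flavour of semiparametric memory estimation can be conveyed by the Geweke–Porter-Hudak log-periodogram regression, a related but distinct procedure from the Robinson tests used in the article (names hypothetical):

```python
import numpy as np

def gph_estimate(x, bandwidth=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory
    parameter d; an illustrative fractional-integration check, not
    the efficient tests of Robinson applied in the article."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = bandwidth or int(np.sqrt(n))             # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n    # first m Fourier freqs
    fft = np.fft.fft(x - x.mean())
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)
    # Regress log I(lambda_j) on -2*log(2*sin(lambda_j/2)); slope is d.
    regressor = -2 * np.log(2 * np.sin(lam / 2))
    return np.polyfit(regressor, np.log(periodogram), 1)[0]

# For series suspected of a unit root, estimate on first differences
# and add one: d_hat = 1 + gph_estimate(np.diff(series))
```

An estimate near d = 1 is consistent with a unit root, while 0 < d < 1 indicates fractional integration, the pattern the article reports under autocorrelated disturbances.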

