Time series characteristics and the widths of judgemental confidence intervals

1992 ◽  
Vol 7 (4) ◽  
pp. 413-420 ◽  
Author(s):  
Marcus O'Connor ◽  
Michael Lawrence


2020 ◽  
Vol 34 (10) ◽  
pp. 1487-1505
Author(s):  
Katja Polotzek ◽  
Holger Kantz

Abstract Correlations in models for daily precipitation are often generated by elaborate numerics that employ a high number of hidden parameters. We propose a parsimonious, parametric stochastic model for European mid-latitude daily precipitation amounts, with a focus on the influence of correlations on the statistics. Our method is meta-Gaussian: we apply a truncated-Gaussian-power (tGp) transformation to a Gaussian ARFIMA model. The speciality of this approach is that ARFIMA(1, d, 0) processes provide synthetic time series with long-range (LRC) correlations, meaning the sum of all autocorrelations is infinite, and short-range (SRC) correlations, using only one parameter each. Our model requires the fit of only five parameters overall, each with a clear interpretation. For model time series of finite length we deduce an effective sample size for the sample mean, whose variance is increased due to correlations. For example, the statistical uncertainty of the mean daily amount of 103 years of daily records at the Fichtelberg mountain in Germany equals that of about 14 years of independent daily data. Our effective sample size approach also yields theoretical confidence intervals for annual total amounts and allows for proper model validation in terms of the empirical mean and fluctuations of annual totals. We evaluate probability plots for the daily amounts, confidence intervals based on the effective sample size for the daily mean and annual totals, and the Mahalanobis distance for the annual maxima distribution. For reproducing annual maxima, the way the marginal distribution is fitted is more crucial than the presence of correlations, whereas the reverse holds for annual totals. Our alternative to rainfall simulation proves capable of modeling daily precipitation amounts, as the statistics of a random selection of 20 data sets are well reproduced.
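The effective-sample-size idea in this abstract reduces, for a stationary series, to n_eff = n / (1 + 2 Σ_k ρ(k)). A minimal sketch under that standard formula; the function name and the AR(1) test series are my own illustration, not the paper's code:

```python
import numpy as np

def effective_sample_size(x, max_lag):
    """n_eff = n / (1 + 2 * sum of sample autocorrelations up to max_lag).
    The sample mean of x then has roughly the variance of the mean of
    n_eff independent observations."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n  # lags 0..n-1
    rho = acov / acov[0]
    return n / (1.0 + 2.0 * rho[1 : max_lag + 1].sum())

# AR(1) with phi = 0.8: strong positive correlation shrinks n_eff
rng = np.random.default_rng(0)
n, phi = 10_000, 0.8
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
# theory for AR(1): n_eff / n -> (1 - phi) / (1 + phi) = 1/9
```

This is the same kind of reduction the abstract reports for the Fichtelberg record, where 103 years of correlated daily data carry the information of about 14 independent years.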


2012 ◽  
Vol 19 (5) ◽  
pp. 473-477
Author(s):  
A. Gluhovsky ◽  
T. Nielsen

Abstract. In atmospheric time series analysis, where only one record is typically available, subsampling, which works under the weakest assumptions among resampling methods, is especially useful. In particular, it yields large-sample confidence intervals of asymptotically correct coverage probability. Atmospheric records, however, are often not long enough, causing substandard coverage of subsampling confidence intervals. In this paper, the subsampling methodology is extended to become more applicable in such practically important cases.
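As background, the basic (unextended) subsampling interval for the mean of a stationary series can be sketched as follows; the block length b, the helper name, and the AR(1) test record are my choices, not the paper's:

```python
import numpy as np

def subsampling_ci_mean(x, b, alpha=0.05):
    """Large-sample subsampling CI for the mean of a stationary series.
    The empirical distribution of sqrt(b) * (block mean - full mean)
    over all overlapping length-b blocks approximates that of
    sqrt(n) * (full mean - true mean)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta = x.mean()
    csum = np.concatenate(([0.0], np.cumsum(x)))
    block_means = (csum[b:] - csum[:-b]) / b  # all overlapping blocks
    root = np.sqrt(b) * (block_means - theta)
    lo_q, hi_q = np.quantile(root, [alpha / 2, 1 - alpha / 2])
    return theta - hi_q / np.sqrt(n), theta - lo_q / np.sqrt(n)

# single correlated AR(1) record with true mean 0
rng = np.random.default_rng(1)
n, phi = 5000, 0.5
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
lo, hi = subsampling_ci_mean(x, b=50)
```

The coverage of this interval degrades on short records, which is precisely the regime the paper's extension targets.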


Author(s):  
Yoshiyuki Yabuuchi ◽  
Junzo Watada

Economic analyses are typically based on time-series or cross-section data. Economic systems are complex because they involve human behavior and are affected by many factors. When a system includes such uncertainty, as with human behavior, a fuzzy-system approach plays a pivotal role in the analysis. In this paper, we propose a fuzzy autocorrelation model with confidence intervals for fuzzy random time-series data. These confidence intervals play an essential role in dealing with fuzzy random data in the fuzzy autocorrelation model we have presented. We analyze tick-by-tick stock-transaction data and compare two time-series models: the fuzzy autocorrelation model we proposed previously and a new fuzzy time-series model proposed in this paper.
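The abstract does not spell out the fuzzy model's mechanics, so as a crisp baseline only, here is the ordinary sample autocorrelation with Bartlett's white-noise confidence band; the authors' fuzzy confidence intervals generalize this kind of band, and everything below is my own illustration rather than their method:

```python
import numpy as np

def acf_with_bands(x, max_lag=20, z=1.96):
    """Sample autocorrelation plus the +/- z/sqrt(n) white-noise band
    (Bartlett's approximation). A crisp baseline, not the fuzzy
    intervals of the paper."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
    rho = acov[: max_lag + 1] / acov[0]
    return rho, z / np.sqrt(n)

# white noise: all lags > 0 should stay (mostly) inside the band
rng = np.random.default_rng(2)
rho, band = acf_with_bands(rng.standard_normal(2000))
```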


2001 ◽  
Vol 17 (4) ◽  
pp. 623-633 ◽  
Author(s):  
Marcus O’Connor ◽  
William Remus ◽  
Kenneth Griggs

2021 ◽  
Author(s):  
Miriam Sieg ◽  
Lina Katrin Sciesielski ◽  
Karin Kirschner ◽  
Jochen Kruppa

Abstract
Background: In longitudinal studies, observations are made over time. Hence, the single observations at each time point are dependent, making them a repeated measurement. In this work, we explore a different, counterintuitive setting: at each developmental time point, a lethal observation is performed on the pregnant or nursing mother. Therefore, the single time points are independent. Furthermore, the observations in the offspring at each time point are correlated with each other because each litter consists of several (genetically linked) littermates. In addition, the observed time series is short from a statistical perspective, as animal ethics prevent killing more mother mice than absolutely necessary, and murine development is short anyway. We solve these challenges by using multiple contrast tests and visualizing the change point by the use of confidence intervals.

Results: We used linear mixed models to model the variability of the mother. The estimates from the linear mixed model are then used in multiple contrast tests. There is a variety of contrasts, and intuitively we would use the Changepoint method. However, it does not deliver satisfying results. Interestingly, we found two other contrasts, both capable of answering different research questions in change point detection: (i) should a single point with change direction be found, or (ii) should the overall progression be determined? The Sequen contrast answers the first, McDermott's the second. Confidence intervals deliver effect estimates for the strength of the potential change point. Therefore, the scientist can define a biologically relevant limit of change depending on the research question.

Conclusion: We present a solution with effect estimates for short independent time series with observations nested at a given time point. Multiple contrast tests produce confidence intervals, which allow determining the position of change points or visualizing the expression course over time. We suggest using McDermott's method to determine whether there is an overall significant change within the time frame, while Sequen is better at determining specific change points. In addition, we offer a short formula for estimating the maximal length of the time series.
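To make the contrast idea concrete, here is a sketch of the sequential ("Sequen") contrast matrix with unadjusted normal-approximation intervals. A real multiple contrast test would use multiplicity-adjusted multivariate-t quantiles and mixed-model standard errors; the names and the toy data below are mine, not the authors' code:

```python
import numpy as np

def sequen_contrasts(k):
    """Sequen contrasts for k time points: row i compares
    time point i+1 against time point i."""
    C = np.zeros((k - 1, k))
    for i in range(k - 1):
        C[i, i], C[i, i + 1] = -1.0, 1.0
    return C

def contrast_cis(means, ses, z=1.96):
    """Unadjusted normal CIs for the sequential differences, assuming
    independent time points (as in the study design above)."""
    means = np.asarray(means, dtype=float)
    ses = np.asarray(ses, dtype=float)
    C = sequen_contrasts(len(means))
    est = C @ means
    se = np.sqrt((C ** 2) @ (ses ** 2))
    return est - z * se, est, est + z * se

# toy expression course with one jump between time points 2 and 3
lo, est, hi = contrast_cis([0.0, 0.1, 2.0, 2.1], [0.1, 0.1, 0.1, 0.1])
```

An interval that excludes zero flags the corresponding adjacent pair as a candidate change point, and its width is the effect estimate the abstract refers to.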


2003 ◽  
Vol 06 (02) ◽  
pp. 119-134 ◽  
Author(s):  
LUIS A. GIL-ALANA

In this article we propose the use of a version of the tests of Robinson [32] for testing unit and fractional roots in financial time series data. The tests have a standard null limit distribution and are the most efficient ones in the context of Gaussian disturbances. We compute finite-sample critical values based on non-Gaussian disturbances, and the power properties of the tests are compared using both the asymptotic and the finite-sample (Gaussian and non-Gaussian) critical values. The tests are applied to the monthly structure of several stock market indexes, and the results show that if the underlying I(0) disturbances are white noise, the confidence intervals include the unit root; however, if they are autocorrelated, the unit root is rejected in favour of smaller degrees of integration. Using t-distributed critical values, the confidence intervals for the non-rejection values are generally narrower than with the asymptotic or the Gaussian finite-sample ones, suggesting that they may better describe the time series behaviour of the data examined.
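The fractional-root idea rests on the fractional difference operator (1 − B)^d, whose series expansion has coefficients π_0 = 1, π_k = π_{k−1}(k − 1 − d)/k. A minimal sketch of that expansion (my own helper names; this is the operator underlying such tests, not Robinson's test statistic itself):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - B)^d as a power series in the lag operator B:
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the fractional difference (1 - B)^d to a series."""
    x = np.asarray(x, dtype=float)
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])

# d = 1 recovers the ordinary first difference: weights 1, -1, 0, 0, ...
# d = 0 leaves the series unchanged; 0 < d < 1 gives a fractional root
```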

