Finite-sample properties of estimators for first and second order autoregressive processes

Author(s):  
Sigrunn H. Sørbye ◽  
Pedro G. Nicolau ◽  
Håvard Rue

Abstract The class of autoregressive (AR) processes is extensively used to model temporal dependence in observed time series. Such models are easily available and routinely fitted using freely available statistical software like R. A potential problem is that commonly applied estimators for the coefficients of AR processes are severely biased when the time series are short. This paper studies the finite-sample properties of well-known estimators for the coefficients of stationary AR(1) and AR(2) processes and provides bias-corrected versions of these estimators which are quick and easy to apply. The new estimators are constructed by modeling the relationship between the true and originally estimated AR coefficients using weighted orthogonal polynomial regression, taking the sampling distribution of the original estimators into account. The finite-sample distributions of the new bias-corrected estimators are approximated using transformations of skew-normal densities, combined with a Gaussian copula approximation in the AR(2) case. The properties of the new estimators are demonstrated by simulations and in the analysis of a real ecological data set. The estimators are easily available in our accompanying R package for AR(1) and AR(2) processes of length 10–50, both giving bias-corrected coefficient estimates and corresponding confidence intervals.
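As a minimal illustration of the bias problem described above (not the authors' regression-based correction), the following R sketch simulates short AR(1) series, shows the downward bias of the Yule–Walker estimate, and applies a classical first-order Kendall-type correction; the series length, coefficient, and number of replications are arbitrary choices.

```r
## Minimal sketch: finite-sample bias of the Yule-Walker AR(1) estimator for
## short series, and a classical first-order (Kendall-type) correction.
## Illustrative only; this is not the bias correction proposed in the paper.
set.seed(1)
n    <- 20      # short series length
phi  <- 0.7     # true AR(1) coefficient
nrep <- 5000

est <- replicate(nrep, {
  x <- arima.sim(model = list(ar = phi), n = n)
  ar(x, order.max = 1, aic = FALSE, method = "yule-walker")$ar
})

## Kendall-type approximation E[phi_hat] ~ phi - (1 + 3*phi)/n, inverted:
corrected <- (n * est + 1) / (n - 3)

c(mean_raw = mean(est), mean_corrected = mean(corrected), true = phi)
```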

2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 218-219
Author(s):  
Andres Fernando T Russi ◽  
Mike D Tokach ◽  
Jason C Woodworth ◽  
Joel M DeRouchey ◽  
Robert D Goodband ◽  
...  

Abstract The swine industry has been constantly evolving to select animals with improved performance traits and to minimize variation in body weight (BW) in order to meet packer specifications. Therefore, understanding variation presents an opportunity for producers to find strategies that could help reduce, manage, or deal with variation of pigs in a barn. A systematic review and meta-analysis was conducted by collecting data from multiple studies and available data sets in order to develop prediction equations for the coefficient of variation (CV) and standard deviation (SD) as a function of BW. Information regarding BW variation from 16 papers was recorded, providing approximately 204 data points. Together, these data included 117,268 individually weighed pigs, with sample sizes ranging from 104 to 4,108 pigs. A random-effects model with study as a random effect was developed. Observations were weighted by sample size as a measure of precision, so that larger data sets contributed more to the model. Regression equations were developed using the nlme package of R to determine the relationship between BW and its variation. Polynomial regression analysis was conducted separately for each variation measurement. When CV was reported in a data set, SD was calculated, and vice versa. The resulting prediction equations were: CV (%) = 20.04 - 0.135 × BW + 0.00043 × BW², R² = 0.79; SD = 0.41 + 0.150 × BW - 0.00041 × BW², R² = 0.95. These equations suggest a decreasing quadratic relationship between the mean CV of a population and pig BW, whereby the rate of decrease becomes smaller as mean pig BW increases from birth to market. Conversely, the rate of increase of the SD of a population of pigs becomes smaller as mean pig BW increases from birth to market.
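The fitted polynomials can be applied directly; the short R sketch below transcribes the reported equations (BW units assumed to be kg, which the abstract does not state) and evaluates them at a few example body weights.

```r
## Direct transcription of the reported prediction equations (not a re-estimation).
## BW is assumed to be in kg; the abstract does not state the unit.
cv_pct <- function(bw) 20.04 - 0.135 * bw + 0.00043 * bw^2
sd_bw  <- function(bw)  0.41 + 0.150 * bw - 0.00041 * bw^2

bw <- c(5, 25, 50, 100, 130)   # example mean BWs from weaning to market
data.frame(BW = bw, CV_pct = cv_pct(bw), SD = sd_bw(bw))
```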


2018 ◽  
Vol 2 (3) ◽  
pp. 224-228
Author(s):  
Batol Shiwa Hashimi ◽  
Aissa Boudjella ◽  
Wagma Saboor

The purpose of this investigation is to examine the variation of temperature in Japan over the past 114 years. The historical dataset of monthly average temperatures from 1901 to 2015 was analyzed. The relationship between temperature and time during the four time intervals 1901–1930, 1931–1960, 1961–1990, and 1991–2015 is described using a new analytical model based on the least-squares method of estimation. We fit a polynomial regression trend of degree 4 to the time series to describe the temperature variation. The results show that the average temperature difference between 2015 and 1901 is an increase of about 0.97 °C. The average monthly difference between the maximum and minimum temperature was approximately 2.11 °C. This approach of modeling temperature using regression significantly simplifies the data analysis. The information in the data, namely the variation of the temperature, may be obtained from extracted parameters such as the slope, y-intercept, and the coefficients of the polynomial function, which are functions of time. More importantly, the parameters that describe the temperature trends over 115 years, obtained with a high R-squared, do not vary significantly. This is in agreement with the Earth's average temperature having climbed by more than 1 °C.
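A degree-4 polynomial trend of the kind described can be fitted by least squares in a few lines; the R sketch below uses a simulated placeholder series (not the Japanese temperature records) and orthogonal polynomials for numerical stability.

```r
## Illustrative sketch of the described approach: a degree-4 polynomial trend
## fitted by least squares. The data are simulated placeholders, not the
## 1901-2015 Japan records.
set.seed(1)
year <- 1901:2015
temp <- 13 + 0.008 * (year - 1901) + rnorm(length(year), sd = 0.4)  # hypothetical series

fit <- lm(temp ~ poly(year, 4))   # orthogonal polynomials for numerical stability
summary(fit)$r.squared            # fit of the degree-4 trend
predict(fit, newdata = data.frame(year = c(1901, 2015)))  # trend at the endpoints
```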


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Xiaoping Chen

This paper proposes a new and important class of mean residual life regression models, called the mean residual life transformation model. The link function is assumed to be unknown and increasing in its second argument, but it is not required to be differentiable. The mean residual life transformation model encompasses the proportional mean residual life model, the additive mean residual life model, and others. Using maximum rank correlation estimation, we present estimation procedures and establish their asymptotic and finite-sample properties. A consistent variance estimator is obtained by a resampling method that repeatedly perturbs the U-statistic objective function, which avoids the usual sandwich estimator. Monte Carlo simulations reveal good finite-sample performance, and the estimators are illustrated with the Oscar data set.
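The rank-based estimation idea can be illustrated generically; the R sketch below applies maximum rank correlation to a simple monotone single-index model with the first coefficient normalized to one. It is a simplified illustration, not the paper's mean residual life estimator or its perturbed U-statistic variance procedure.

```r
## Generic maximum rank correlation (MRC) sketch for a monotone single-index
## model; simplified illustration only.
set.seed(1)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- exp(x1 + 0.5 * x2 + rnorm(n))   # unknown increasing link

## Normalize the coefficient of x1 to 1 and estimate the coefficient of x2 by
## grid search over the pairwise rank-concordance objective.
mrc_obj <- function(b2) {
  idx <- x1 + b2 * x2
  mean(outer(y, y, ">") & outer(idx, idx, ">"))
}
grid   <- seq(-2, 2, by = 0.02)
b2_hat <- grid[which.max(sapply(grid, mrc_obj))]
b2_hat   # should land roughly near the true value 0.5
```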


2021 ◽  
Vol 2021 (026) ◽  
pp. 1-52
Author(s):  
Dong Hwan Oh ◽  
Andrew J. Patton

This paper proposes a dynamic multi-factor copula for use in high dimensional time series applications. A novel feature of our model is that the assignment of individual variables to groups is estimated from the data, rather than being pre-assigned using SIC industry codes, market capitalization ranks, or other ad hoc methods. We adapt the k-means clustering algorithm for use in our application and show that it has excellent finite-sample properties. Applying the new model to returns on 110 US equities, we find around 20 clusters to be optimal. In out-of-sample forecasts, we find that a model with as few as five estimated clusters significantly outperforms an otherwise identical model with 21 clusters formed using two-digit SIC codes.
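The grouping step can be sketched with base R's k-means; the example below clusters simulated factor-driven return series using each asset's row of the sample correlation matrix as its feature vector. The simulated dimensions and the correlation-row feature are illustrative assumptions, not the paper's specification.

```r
## Stylized sketch of estimating group assignments with k-means on simulated
## factor-driven returns; not the paper's dynamic multi-factor copula model.
set.seed(1)
Tn <- 500; N <- 30; K <- 3
group   <- sample(1:K, N, replace = TRUE)           # true (unobserved) groups
factors <- matrix(rnorm(Tn * K), Tn, K)
returns <- 0.8 * factors[, group] + matrix(rnorm(Tn * N, sd = 0.6), Tn, N)

features <- cor(returns)                            # each asset's correlation profile
km <- kmeans(features, centers = K, nstart = 25)
table(estimated = km$cluster, true = group)         # labels are arbitrary permutations
```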


2019 ◽  
Vol 36 (4) ◽  
pp. 751-772 ◽  
Author(s):  
Javier Hualde ◽  
Morten Ørregaard Nielsen

We consider truncated (or conditional) sum of squares estimation of a parametric model composed of a fractional time series and an additive generalized polynomial trend. Both the memory parameter, which characterizes the behavior of the stochastic component of the model, and the exponent parameter, which drives the shape of the deterministic component, are not only treated as unknown real numbers but are also allowed to lie in arbitrarily large (but finite) intervals. Thus, our model captures different forms of nonstationarity and noninvertibility. As in related settings, the proof of consistency (which is a prerequisite for proving asymptotic normality) is challenging due to nonuniform convergence of the objective function over a large admissible parameter space; in addition, our framework is substantially more involved due to the competition between the stochastic and deterministic components. We establish consistency and asymptotic normality under quite general circumstances, finding that the results differ crucially depending on the relative strength of the deterministic and stochastic components. Finite-sample properties are illustrated by means of a Monte Carlo experiment.
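A stripped-down version of the stochastic part of the problem is conditional sum-of-squares estimation of the memory parameter in an ARFIMA(0, d, 0) model; the R sketch below omits the deterministic trend component studied in the paper.

```r
## Minimal conditional sum-of-squares (CSS) sketch for the memory parameter d
## of an ARFIMA(0, d, 0) process; no deterministic trend component.
set.seed(1)
n <- 500; d_true <- 0.3

## Simulate x_t = (1 - L)^(-d) eps_t via the truncated MA(infinity) expansion
psi <- cumprod(c(1, (seq_len(n - 1) - 1 + d_true) / seq_len(n - 1)))
eps <- rnorm(n)
x   <- numeric(n)
for (t in seq_len(n)) x[t] <- sum(psi[1:t] * eps[t:1])

## CSS objective: apply the truncated (1 - L)^d filter and sum squared residuals
css <- function(d) {
  pi_d <- cumprod(c(1, (seq_len(n - 1) - 1 - d) / seq_len(n - 1)))
  e <- numeric(n)
  for (t in seq_len(n)) e[t] <- sum(pi_d[1:t] * x[t:1])
  sum(e^2)
}
optimize(css, interval = c(-0.49, 0.99))$minimum   # roughly recovers d_true = 0.3
```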


2013 ◽  
Vol 29 (5) ◽  
pp. 1009-1056 ◽  
Author(s):  
Frédéric Lavancier ◽  
Remigijus Leipus ◽  
Anne Philippe ◽  
Donatas Surgailis

This article deals with detection of a nonconstant long memory parameter in time series. The null hypothesis presumes a stationary or nonstationary time series with a constant long memory parameter, typically an I(d) series with d > −0.5. The alternative corresponds to an increase in persistence and includes in particular an abrupt or gradual change from I(d1) to I(d2), −0.5 < d1 < d2. We discuss several test statistics based on the ratio of forward and backward sample variances of the partial sums. The consistency of the tests is proved under a very general setting. We also study the behavior of these test statistics for some models with a changing memory parameter. A simulation study shows that our testing procedures have good finite-sample properties and turn out to be more powerful than the KPSS-based tests (see Kwiatkowski, Phillips, Schmidt and Shin, 1992) considered in some previous works.
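A stylized version of the forward/backward idea (not necessarily the exact statistic studied in the paper) can be computed as follows: a ratio of backward to forward sums of squared demeaned partial sums, which tends to be large when persistence increases late in the sample.

```r
## Stylized forward/backward partial-sum variance ratio; illustrative only,
## not necessarily the exact test statistic analyzed in the paper.
ratio_stat <- function(x) {
  fw <- cumsum(x - mean(x))          # partial sums from the start
  bw <- cumsum(rev(x) - mean(x))     # partial sums from the end
  sum(bw^2) / sum(fw^2)
}

set.seed(1)
x_const  <- arima.sim(list(ar = 0.2), n = 400)       # constant persistence
x_change <- c(arima.sim(list(ar = 0.2), n = 200),    # persistence increases mid-sample
              arima.sim(list(ar = 0.9), n = 200))
c(constant = ratio_stat(as.numeric(x_const)), increase = ratio_stat(x_change))
```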


2018 ◽  
Vol 33 (1) ◽  
pp. 31-43
Author(s):  
Bol A. M. Atem ◽  
Suleman Nasiru ◽  
Kwara Nantomah

Abstract This article studies the properties of the Topp–Leone linear exponential distribution. The parameters of the new model are estimated using maximum likelihood estimation, and simulation studies are performed to examine the finite-sample properties of the estimators. An application of the model is demonstrated using a real data set. Finally, a bivariate extension of the model is proposed.
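A simulation study of the kind mentioned can be set up along the following lines; the R sketch assumes the common Topp-Leone-G construction with a linear exponential (linear failure rate) baseline, i.e. F(x) = (1 - exp(-2(a*x + b*x^2/2)))^alpha, which may differ from the exact parameterization used in the article.

```r
## Hedged sketch of a finite-sample MLE simulation, assuming the Topp-Leone-G
## construction with a linear exponential baseline:
##   F(x) = (1 - exp(-2*(a*x + b*x^2/2)))^alpha,  x > 0.
## The article's parameterization may differ.
rtlle <- function(n, alpha, a, b) {
  h <- -log(1 - runif(n)^(1 / alpha)) / 2          # solves F(x) = u for H(x) = a*x + b*x^2/2
  (-a + sqrt(a^2 + 2 * b * h)) / b
}
negloglik <- function(par, x) {
  alpha <- par[1]; a <- par[2]; b <- par[3]
  if (any(par <= 0)) return(1e10)
  H <- a * x + b * x^2 / 2
  -sum(log(2 * alpha * (a + b * x)) - 2 * H + (alpha - 1) * log(1 - exp(-2 * H)))
}

set.seed(1)
x   <- rtlle(200, alpha = 1.5, a = 0.8, b = 0.3)
fit <- optim(c(1, 1, 1), negloglik, x = x)
fit$par   # compare with the true values (1.5, 0.8, 0.3)
```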


2001 ◽  
Vol 17 (1) ◽  
pp. 156-187 ◽  
Author(s):  
Atsushi Inoue

This paper proposes nonparametric tests of change in the distribution function of a time series. The limiting null distributions of the test statistics depend on a nuisance parameter, and critical values cannot be tabulated a priori. To circumvent this problem, a new simulation-based statistical method is developed. The validity of our simulation procedure is established in terms of size, local power, and test consistency. The finite-sample properties of the proposed tests are evaluated in a set of Monte Carlo experiments, and the distributional stability in financial markets is examined.
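The flavor of such a test can be conveyed with a simple scan statistic; the R sketch below computes a Kolmogorov-Smirnov-type comparison of the empirical distribution functions before and after each candidate break point. It is a stylized illustration, not the paper's test statistic or its simulation-based critical values.

```r
## Stylized scan for a change in the distribution function: a weighted
## two-sample Kolmogorov-Smirnov statistic maximized over candidate break points.
## Illustrative only; not the paper's statistic or critical values.
change_stat <- function(x) {
  n <- length(x)
  ks <- sapply(20:(n - 20), function(k) {          # trim the sample edges
    d <- suppressWarnings(ks.test(x[1:k], x[(k + 1):n])$statistic)
    sqrt(k * (n - k) / n) * d
  })
  max(ks)
}

set.seed(1)
x_null <- rnorm(300)
x_alt  <- c(rnorm(150), rnorm(150, sd = 2))        # variance change at mid-sample
c(no_change = change_stat(x_null), change = change_stat(x_alt))
```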


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 666
Author(s):  
Manuel Stapper

A new software package for the Julia language, CountTimeSeries.jl, is under review; it provides likelihood-based methods for integer-valued time series. The package's functionalities are showcased in a simulation study on the finite-sample properties of maximum likelihood (ML) estimation and in three real-life data applications. First, the number of newly infected COVID-19 patients is predicted. Then, previous findings on the need for overdispersion and zero inflation are reviewed in an application on animal submissions in New Zealand. Further, information criteria are used for model selection to investigate patterns in corporate insolvencies in Rhineland-Palatinate. The theoretical background and implementation details are described, and complete code for all applications is provided online. The CountTimeSeries package is available in the general Julia package registry.
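For readers unfamiliar with this model class, the R sketch below (deliberately not the CountTimeSeries.jl implementation) writes down the likelihood of a Poisson INGARCH(1, 1) model with conditional mean lambda_t = w + a*y_{t-1} + b*lambda_{t-1}, simulates from it, and recovers the parameters by ML; all parameter values are arbitrary.

```r
## Illustrative R sketch (not CountTimeSeries.jl code): ML estimation of a
## Poisson INGARCH(1, 1) model with lambda_t = w + a*y_{t-1} + b*lambda_{t-1}.
ingarch_negll <- function(par, y) {
  w <- par[1]; a <- par[2]; b <- par[3]
  if (w <= 0 || a < 0 || b < 0 || a + b >= 1) return(1e10)
  n <- length(y); lambda <- numeric(n); lambda[1] <- mean(y)  # crude initialization
  for (t in 2:n) lambda[t] <- w + a * y[t - 1] + b * lambda[t - 1]
  -sum(dpois(y, lambda, log = TRUE))
}

set.seed(1)
n <- 500; w <- 1; a <- 0.3; b <- 0.4
y <- lambda <- numeric(n)
lambda[1] <- w / (1 - a - b); y[1] <- rpois(1, lambda[1])
for (t in 2:n) {
  lambda[t] <- w + a * y[t - 1] + b * lambda[t - 1]
  y[t] <- rpois(1, lambda[t])
}
optim(c(0.5, 0.2, 0.2), ingarch_negll, y = y)$par   # compare with (1, 0.3, 0.4)
```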


2009 ◽  
Vol 26 (4) ◽  
pp. 965-993 ◽  
Author(s):  
Christian Francq ◽  
Lajos Horvath ◽  
Jean-Michel Zakoïan

We consider linearity testing in a general class of nonlinear time series models of order one, involving a nonnegative nuisance parameter that (a) is not identified under the null hypothesis and (b) gives the linear model when equal to zero. This paper studies the asymptotic distribution of the likelihood ratio test and asymptotically equivalent supremum tests. The asymptotic distribution is described as a functional of chi-square processes and is obtained without imposing a positive lower bound for the nuisance parameter. The finite-sample properties of the sup-tests are studied by simulations.
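The general mechanics of a sup-type test with an unidentified nuisance parameter can be illustrated with a Davies-type construction; the R sketch below tests linearity of an AR(1) against an added smooth-transition term whose transition parameter is not identified under the null, taking the supremum of an LR-type statistic over a grid. The specific nonlinear alternative is a placeholder, not the model class studied in the paper.

```r
## Davies-type sup-LR sketch: linear AR(1) null against an added term
## psi * y_{t-1} * exp(-gamma * y_{t-1}^2), with gamma unidentified under psi = 0.
## Placeholder alternative; not the paper's model or asymptotic theory.
sup_lr <- function(y, gamma_grid = seq(0.1, 3, by = 0.1)) {
  y1 <- head(y, -1); y0 <- tail(y, -1)
  rss0 <- sum(resid(lm(y0 ~ y1 - 1))^2)            # restricted (linear) fit
  lr <- sapply(gamma_grid, function(g) {
    z <- y1 * exp(-g * y1^2)
    rss1 <- sum(resid(lm(y0 ~ y1 + z - 1))^2)      # unrestricted fit at this gamma
    length(y0) * log(rss0 / rss1)
  })
  max(lr)                                          # supremum over the gamma grid
}

set.seed(1)
y_lin <- as.numeric(arima.sim(list(ar = 0.5), n = 400))
sup_lr(y_lin)   # under linearity: a moderate value, to be calibrated by simulation
```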

