Autoregressive processes with infinite variance

1977 ◽  
Vol 14 (2) ◽  
pp. 411-415 ◽  
Author(s):  
E. J. Hannan ◽  
Marek Kanter

The least squares estimators β_j(N), j = 1, …, p, from N data points, of the autoregressive constants for a stationary autoregressive model are considered when the disturbances have a distribution attracted to a stable law of index α < 2. It is shown that N^{1/δ}(β_j(N) − β_j) converges almost surely to zero for any δ > α. Some comments are made on alternative definitions of the β_j(N).
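A minimal simulation sketch of this setting (not taken from the paper): an AR(1) driven by symmetric α-stable innovations, its least squares estimate, and the scaled error N^{1/δ}(β_hat − β) for one δ > α. All numerical choices (α = 1.5, β = 0.6, δ = 1.8, the sample sizes) are illustrative assumptions.

```python
# Sketch: least squares for an AR(1) with infinite-variance (alpha-stable) noise.
# The scaled error N^(1/delta) * (beta_hat - beta) is printed for one delta > alpha.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, beta_true, delta = 1.5, 0.6, 1.8   # illustrative values, delta > alpha

for N in (10**3, 10**4, 10**5):
    u = levy_stable.rvs(alpha, 0.0, size=N, random_state=rng)  # symmetric stable noise
    x = np.zeros(N)
    for t in range(1, N):
        x[t] = beta_true * x[t - 1] + u[t]
    # Ordinary least squares estimate of the AR(1) coefficient.
    beta_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    print(N, beta_hat, N ** (1.0 / delta) * (beta_hat - beta_true))
```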


2008 ◽  
Vol 24 (3) ◽  
pp. 677-695 ◽  
Author(s):  
M. Zarepour ◽  
S.M. Roknossadati

We consider the limiting behavior of a vector autoregressive model of order one (VAR(1)) with independent and identically distributed (i.i.d.) innovation vectors whose dependent components lie in the domain of attraction of a multivariate stable law with possibly different indices of stability. It is shown that in some cases the ordinary least squares (OLS) estimates are inconsistent. This inconsistency originates from the fact that each coordinate of the partial-sum process of the i.i.d. innovation vectors needs a different normalizer to converge to a limiting process. It is also shown that certain M-estimates, under some regularity conditions, provide an appropriate alternative: they not only resolve the inconsistency of the OLS estimates but also attain higher consistency rates in all cases.
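A sketch of the setup described above (not the authors' code): a bivariate VAR(1) whose i.i.d. innovation vectors have dependent components with different tail indices, together with the OLS estimate of the coefficient matrix. The indices (1.2 and 1.8), the coefficient matrix, and the dependence construction are illustrative assumptions.

```python
# Sketch: VAR(1) with dependent innovation components of different stability indices.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
N = 20_000
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])          # true VAR(1) coefficient matrix (illustrative)

s1 = levy_stable.rvs(1.2, 0.0, size=N, random_state=rng)
s2 = levy_stable.rvs(1.8, 0.0, size=N, random_state=rng)
U = np.column_stack([s1, s1 + s2])  # dependent components, different tail indices

X = np.zeros((N, 2))
for t in range(1, N):
    X[t] = A @ X[t - 1] + U[t]

# OLS: regress X_t on X_{t-1}.  Each coordinate of the partial sums of U needs a
# different normalizer, which is the source of the inconsistency discussed above.
Y, Z = X[1:], X[:-1]
A_ols = np.linalg.lstsq(Z, Y, rcond=None)[0].T
print(A_ols)
```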


1983 ◽  
Vol 20 (4) ◽  
pp. 737-753 ◽  
Author(s):  
C. R. Heathcote ◽  
A. H. Welsh

The stationary autoregressive model, but with a long-tailed error distribution, is analysed using the method of functional least squares. A family of estimators indexed by a real parameter is obtained, and uniform consistency and weak convergence are established. The optimum member of the family is chosen to have minimum variance with respect to the parameter, and the chosen parameter value detects and adjusts for long-tailed error distributions. Results of a simulation are given.
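As a rough sketch of the idea (an assumed form of the criterion, not taken from the paper): functional least squares can be written, for a tuning value t, as minimising −log|φ_n(t)|²/t², where φ_n is the empirical characteristic function of the residuals; as t → 0 the criterion approaches the residual variance, i.e. ordinary least squares. The function names, the AR(1) setting, and the choices t = 0.5 and t₂ errors below are ours.

```python
# Sketch: a functional-least-squares-style criterion for an AR(1) coefficient.
import numpy as np
from scipy.optimize import minimize_scalar

def fls_criterion(b, x, t):
    """Characteristic-function-based objective at tuning parameter t (assumed form)."""
    resid = x[1:] - b * x[:-1]
    phi = np.mean(np.exp(1j * t * resid))       # empirical characteristic function
    return -np.log(np.abs(phi) ** 2) / t ** 2

def fls_ar1(x, t):
    """Minimise the criterion over the AR(1) coefficient."""
    res = minimize_scalar(fls_criterion, bounds=(-0.999, 0.999),
                          args=(x, t), method="bounded")
    return res.x

# Example on simulated data with heavy-tailed (Student t_2) errors; values illustrative.
rng = np.random.default_rng(2)
n, beta = 5000, 0.6
u = rng.standard_t(df=2, size=n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = beta * x[i - 1] + u[i]
print(fls_ar1(x, t=0.5))
```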


1999 ◽  
Vol 15 (2) ◽  
pp. 165-176 ◽  
Author(s):  
Beong Soo So ◽  
Dong Wan Shin

For autoregressive processes, we propose new estimators whose pivotal statistics have the standard normal limiting distribution for all ranges of the autoregressive parameters. The proposed estimators are approximately median unbiased. For seasonal time series, the new estimators yield unit root tests that have a limiting normal distribution regardless of the period of the seasonality. Using the estimators, confidence intervals for the autoregressive parameters are constructed. A Monte Carlo simulation for first-order autoregressions shows that the proposed tests for unit roots are locally more powerful than the tests based on the ordinary least squares estimators. It also shows that the proposed confidence intervals have shorter average lengths than those of Andrews (1993, Econometrica 61, 139–165) based on the ordinary least squares estimators when the autoregressive coefficient is close to one.
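The abstract does not spell out the estimator, so the following is our own illustrative construction of how a pivotal statistic with a standard normal limit can arise for every value of the AR coefficient: instrumenting y_{t−1} with its sign makes the score a sum of bounded martingale differences, so the studentised statistic is asymptotically N(0, 1) even at a unit root. All names and numbers are assumptions.

```python
# Sketch (illustrative, not necessarily the authors' estimator): sign-instrument AR(1).
import numpy as np

def sign_iv_estimate(y):
    """Estimate rho in y_t = rho*y_{t-1} + e_t using sign(y_{t-1}) as instrument."""
    y0, y1 = y[:-1], y[1:]
    return np.sum(np.sign(y0) * y1) / np.sum(np.abs(y0))

def sign_iv_t(y, rho0):
    """Pivotal statistic for H0: rho = rho0; approximately N(0, 1) under the null."""
    y0, y1 = y[:-1], y[1:]
    e0 = y1 - rho0 * y0
    sigma_hat = np.sqrt(np.mean(e0 ** 2))       # simple scale estimate under the null
    return np.sum(np.sign(y0) * e0) / (sigma_hat * np.sqrt(len(e0)))

# Unit-root example: a simulated random walk (rho = 1).
rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(500))
print(sign_iv_estimate(y), sign_iv_t(y, rho0=1.0))
```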


1974 ◽  
Vol 6 (4) ◽  
pp. 768-783 ◽  
Author(s):  
Marek Kanter ◽  
W. L. Steiger

The theory of the linear model is incomplete in that it fails to deal with variables possessing infinite variance. To fill an important part of this gap, we give an unbiased estimate, the "screened ratio estimate", for λ in the regression E(X|Z) = λZ; X and Z are linear combinations of independent, identically distributed symmetric random variables that are either stable or asymptotically Pareto distributed of index α ≤ 2. By way of comparison, the usual least squares estimate of λ is shown not to converge in general to any constant when α < 2. However, in the autoregression X_n = a_1X_{n−1} + … + a_kX_{n−k} + U_n, the least squares estimates are shown to be consistent as long as the roots of 1 − a_1x − a_2x^2 − … − a_kx^k = 0 lie outside the complex unit circle, X_n is independent of U_{n+j}, j ≥ 1, and the U_n are independent and identically distributed and in the domain of attraction of a stable law of index α ≤ 2. Finally, the consistency of least squares estimates for finite moving averages is established.
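A small simulation sketch of the autoregression part of this result (not from the paper): least squares for an AR(2) driven by symmetric, asymptotically Pareto innovations with infinite variance. The coefficients (a_1, a_2) = (0.5, −0.3) satisfy the root condition, and all numerical choices are illustrative.

```python
# Sketch: LS consistency for an AR(2) with symmetric Pareto-tailed noise (alpha = 1.3).
import numpy as np

rng = np.random.default_rng(4)
a1, a2, alpha = 0.5, -0.3, 1.3   # roots of 1 - a1*x - a2*x^2 lie outside the unit circle

def pareto_symmetric(size):
    """Symmetric heavy-tailed noise: Pareto(alpha) magnitudes with random signs."""
    return rng.choice([-1.0, 1.0], size) * (rng.pareto(alpha, size) + 1.0)

for N in (10**3, 10**4, 10**5):
    u = pareto_symmetric(N)
    x = np.zeros(N)
    for t in range(2, N):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + u[t]
    # Least squares regression of x_t on (x_{t-1}, x_{t-2}).
    Z = np.column_stack([x[1:-1], x[:-2]])
    coef = np.linalg.lstsq(Z, x[2:], rcond=None)[0]
    print(N, coef)
```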


2013 ◽  
Vol 29 (4) ◽  
pp. 699-734 ◽  
Author(s):  
Ryan Greenaway-McGrevy

This paper considers the conventional recursive (otherwise known as plug-in) and direct multistep forecasts in a panel vector autoregressive framework. We derive asymptotic expressions for the mean square prediction error (MSPE) of both forecasts as N (cross sections) and T (time periods) grow large. Both the bias and variance of the least squares fit are manifest in the MSPE. Using these expressions, we consider the effect of model specification on predictor accuracy. When the fitted lag order (q) is equal to or exceeds the true lag order (p), the direct MSPE is larger than the recursive MSPE. On the other hand, when the fitted lag order is underspecified, the direct MSPE is smaller than the recursive MSPE. The recursive MSPE is increasing in q for all q ≥ p. In contrast, the direct MSPE is not monotonic in q within the permissible parameter space. Extensions to bias-corrected least squares estimators are considered.
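A minimal sketch of the recursive-versus-direct distinction, in a single univariate AR(1) rather than the paper's panel VAR setting: the recursive (plug-in) h-step forecast iterates a fitted one-step model, while the direct forecast regresses y_{t+h} on y_t and applies that fit once. Horizon, sample size, and coefficient below are arbitrary illustrative choices.

```python
# Sketch: recursive (plug-in) vs direct h-step-ahead forecasts for an AR(1).
import numpy as np

rng = np.random.default_rng(5)
T, rho, h = 400, 0.7, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()

# Recursive: fit the one-step model, then iterate it h times.
rho_hat = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
recursive_forecast = (rho_hat ** h) * y[-1]

# Direct: fit a separate regression of y_{t+h} on y_t, then apply it once.
rho_h_hat = np.dot(y[h:], y[:-h]) / np.dot(y[:-h], y[:-h])
direct_forecast = rho_h_hat * y[-1]

print(recursive_forecast, direct_forecast)
```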


2018 ◽  
Vol 8 (1) ◽  
pp. 136-148 ◽  
Author(s):  
Daniel K. Baissa ◽  
Carlisle Rainey

Researchers in political science often estimate linear models of continuous outcomes using least squares. While it is well known that least-squares estimates are sensitive to single, unusual data points, this knowledge has not led to careful practices when using least-squares estimators. Using statistical theory and Monte Carlo simulations, we highlight the importance of using more robust estimators along with variable transformations. We also discuss several approaches to detect, summarize, and communicate the influence of particular data points.
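An illustrative sketch of the sensitivity being described (not the authors' code): a single gross outlier moves the least-squares slope substantially, while a resistant alternative (Theil-Sen is used here purely as an example of a robust estimator) barely changes. The data-generating values are arbitrary.

```python
# Sketch: OLS vs a robust slope estimate, with and without one outlier.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 50)
y = 2.0 + 1.5 * x + rng.standard_normal(50)
y_out = y.copy()
y_out[0] = 100.0                      # one gross outlier

for label, yy in (("clean", y), ("with outlier", y_out)):
    ols_slope = np.polyfit(x, yy, 1)[0]       # least-squares slope
    ts_slope = theilslopes(yy, x)[0]          # robust (Theil-Sen) slope
    print(label, round(ols_slope, 2), round(ts_slope, 2))
```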

