NULL RECURRENT UNIT ROOT PROCESSES

2011 ◽  
Vol 28 (1) ◽  
pp. 1-41 ◽  
Author(s):  
Terje Myklebust ◽  
Hans Arnfinn Karlsen ◽  
Dag Tjøstheim

The classical nonstationary autoregressive models are both linear and Markov. They include unit root and cointegration models. A possible nonlinear extension is to relax the linearity while keeping general properties such as nonstationarity and the Markov property. A null recurrent Markov chain is nonstationary, and β-null recurrence is of vital importance for statistical inference in nonstationary Markov models, such as nonparametric estimation in nonlinear cointegration within the Markov framework. The standard random walk is an example of a null recurrent Markov chain. In this paper we suggest that the concept of null recurrence is an appropriate nonlinear generalization of the linear unit root concept, and as such it may be a starting point for a nonlinear cointegration concept within the Markov framework. In fact, we establish the link between null recurrent processes and autoregressive unit root models. It turns out that null recurrence is closely related to the location of the roots of the characteristic polynomial of the state space matrix and the associated eigenvectors. Roughly speaking, the process is β-null recurrent if one root is on the unit circle and null recurrent if two distinct roots are on the unit circle, while the remaining roots lie inside the unit circle; it is transient if there are more than two roots on the unit circle. These results are closely connected to the random walk being null recurrent in one and two dimensions but transient in three dimensions. We also give an example of a process that, by appropriate adjustments, can be made β-null recurrent for any β ∈ (0, 1) and can also be made null recurrent without being β-null recurrent.
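The dimension dependence invoked at the end of this abstract (the simple random walk is null recurrent in one and two dimensions, transient in three) can be illustrated with a short Monte Carlo sketch; the sample sizes and horizon below are illustrative choices, not taken from the paper.

```python
import random

def return_frequency(dim, walks=2000, steps=500, seed=0):
    """Estimate the probability that a simple random walk on Z^dim
    returns to the origin within `steps` steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)          # pick a coordinate axis
            pos[axis] += rng.choice((-1, 1))   # step +/-1 along it
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / walks

p1 = return_frequency(1)
p2 = return_frequency(2)
p3 = return_frequency(3)
# Recurrence (dim = 1, 2) shows up as a return frequency approaching 1
# as the horizon grows; transience (dim = 3) as a frequency bounded
# well below 1 (the limiting return probability is about 0.34).
```

Within this fixed horizon the one-dimensional walk returns in the vast majority of runs, the two-dimensional walk somewhat less often (its convergence to certain return is logarithmically slow), and the three-dimensional walk in roughly a third of runs.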

2006 ◽  
Vol 13 (3) ◽  
pp. 339-352 ◽  
Author(s):  
Ath. Kehagias ◽  
V. Fortin

Abstract. We present a new family of hidden Markov models and apply these to the segmentation of hydrological and environmental time series. The proposed hidden Markov models have a discrete state space, and their structure is inspired by the shifting means models introduced by Chernoff and Zacks and by Salas and Boes. An estimation method inspired by the EM algorithm is proposed, and we show that it can accurately identify multiple change-points in a time series. We also show that the solution obtained using this algorithm can serve as a starting point for a Markov chain Monte Carlo Bayesian estimation method, thus reducing the computing time needed for the Markov chain to converge to a stationary distribution.
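The shifting-means HMM and its EM-style estimator are beyond a short sketch, but the single change-point special case of segmentation can be illustrated with a two-segment least-squares split. The criterion and data below are illustrative assumptions, not the authors' algorithm.

```python
import random

def split_cost(xs):
    """Sum of squared deviations from the segment mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_changepoint(series):
    """Fit a two-segment piecewise-constant mean: return the split
    index k minimizing the total within-segment sum of squares."""
    return min(range(1, len(series)),
               key=lambda k: split_cost(series[:k]) + split_cost(series[k:]))

# Synthetic series with a mean shift from 0 to 5 at index 60.
rng = random.Random(42)
series = [rng.gauss(0.0, 1.0) for _ in range(60)] + \
         [rng.gauss(5.0, 1.0) for _ in range(60)]
k = best_changepoint(series)   # detected split, expected near 60
```

Extending this to multiple change-points is what the HMM formulation buys: the hidden state tracks the current segment, and EM-style updates re-estimate segment means and boundaries jointly rather than by exhaustive splitting.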


Author(s):  
C. Domb

Consider a random-walk problem on a simple lattice, the probabilities of the walker taking any direction in the lattice at each lattice point being equal. Then Polya (6) has shown that if a walker starts at the origin and continues to walk indefinitely, the probability of his passing through his starting point is unity in one and two dimensions, but less than unity in three or more dimensions. Recently, a generalization of this problem has been considered (1), (3) in which the walker is allowed to jump several lattice points with assigned probabilities. F. G. Foster and I. J. Good (3) have shown that if the assigned probabilities satisfy certain conditions, Polya's result still holds, and K. L. Chung and W. H. J. Fuchs (1) have shown that the result is valid under far less restrictive conditions. The above authors were primarily concerned with the question of whether return is almost certain or not, and did not consider a detailed calculation of the probability at any stage. It is the purpose of the present paper to show that the use of contour integrals allied with the method of steepest descents (4) enables one to perform this calculation very simply.
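The paper's "probability at any stage" can be checked in the simplest instance, the one-dimensional simple walk, where the probability of being at the origin after 2n steps is C(2n, n)/2^(2n) exactly, and the leading steepest-descent (Stirling) term is 1/√(πn). The general-lattice case treated in the paper requires the contour-integral machinery; this sketch only compares the two expressions in that one easy case.

```python
import math

def p_origin_exact(n):
    """Exact probability that a 1-D simple random walk is back at the
    origin after 2n steps: C(2n, n) / 2^(2n)."""
    return math.comb(2 * n, n) / 4 ** n

def p_origin_steepest_descent(n):
    """Leading-order steepest-descent approximation: 1 / sqrt(pi * n)."""
    return 1.0 / math.sqrt(math.pi * n)

for n in (10, 50, 200):
    exact = p_origin_exact(n)
    approx = p_origin_steepest_descent(n)
    print(n, exact, approx, abs(approx - exact) / exact)
```

The relative error of the leading term decays like 1/(8n), so even at n = 10 the approximation is already within about one percent of the exact value.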


2021 ◽  
pp. 1-59
Author(s):  
Sébastien Laurent ◽  
Shuping Shi

Deviations of asset prices from the random walk dynamic imply the predictability of asset returns and thus have important implications for portfolio construction and risk management. This paper proposes a real-time monitoring device for such deviations using intraday high-frequency data. The proposed procedures are based on unit root tests with in-fill asymptotics but extended to take the empirical features of high-frequency financial data (particularly jumps) into consideration. We derive the limiting distributions of the tests under both the null hypothesis of a random walk with jumps and the alternative of mean reversion/explosiveness with jumps. The limiting results show that ignoring the presence of jumps could potentially lead to severe size distortions of both the standard left-sided (against mean reversion) and right-sided (against explosiveness) unit root tests. The simulation results reveal satisfactory performance of the proposed tests even with data from a relatively short time span. As an illustration, we apply the procedure to the Nasdaq composite index at the 10-minute frequency over two periods: around the peak of the dot-com bubble and during the 2015–2016 stock market sell-off. We find strong evidence of explosiveness in asset prices in late 1999 and mean reversion in late 2015. We also show that accounting for jumps when testing the random walk hypothesis on intraday data is empirically relevant and that ignoring jumps can lead to different conclusions.
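The left/right-sided unit root tests mentioned here are refinements of the basic Dickey–Fuller t-ratio. The sketch below shows only that standard statistic (without the in-fill asymptotics or the jump corrections the paper develops) on simulated data, as a minimal illustration of how mean reversion pulls the statistic strongly negative relative to a random walk.

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic without drift: regress dy_t on y_{t-1}
    and return the t-ratio of the slope.  Large negative values point
    toward mean reversion; large positive values toward explosiveness."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)          # OLS slope
    resid = dy - rho * ylag
    sigma2 = resid @ resid / (len(dy) - 1)   # residual variance
    se = np.sqrt(sigma2 / (ylag @ ylag))     # slope standard error
    return rho / se

rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)
random_walk = np.cumsum(eps)       # unit root: statistic near zero
mean_rev = np.empty(1000)          # AR(1) with coefficient 0.5
mean_rev[0] = eps[0]
for t in range(1, 1000):
    mean_rev[t] = 0.5 * mean_rev[t - 1] + eps[t]

stat_rw = df_tstat(random_walk)
stat_mr = df_tstat(mean_rev)
```

Under the random walk null this statistic follows the nonstandard Dickey–Fuller distribution rather than a normal one; the paper's contribution is, in part, showing how jumps in intraday data distort exactly this kind of comparison.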


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract In this note we study the probability and the mean time for absorption for discrete time Markov chains. In particular, we are interested in estimating the mean time for absorption when absorption is not certain, and we connect this with some other known results. Computing a suitable probability generating function, we are able to estimate the mean time for absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability for a Markov chain to reach a set A before reaching a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
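For the textbook case where absorption is certain (not the generating-function setting of the note, which handles uncertain absorption), mean absorption times and hitting probabilities follow from the fundamental matrix N = (I − Q)^(-1) of the transient block Q. A minimal sketch on the fair gambler's ruin chain, which is also a "reach A before B" computation (ruin at 0 versus success at 4):

```python
import numpy as np

# Gambler's ruin on {0, 1, 2, 3, 4} with a fair coin; 0 and 4 absorb.
# Q: transitions among the transient states 1, 2, 3.
# R: transitions from transient states into the absorbing states (cols: 0, 4).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
mean_time = N @ np.ones(3)         # expected steps until absorption, per start
absorb_prob = N @ R                # P(absorb at 0), P(absorb at 4), per start

# mean_time is approximately [3, 4, 3], matching the classical i * (4 - i);
# from start 2 the chain reaches 4 before 0 with probability 1/2.
```

Starting from state i, the absorption probabilities come out as (4 − i)/4 at state 0 and i/4 at state 4, which is the standard fair-game ruin formula.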


1978 ◽  
Vol 15 (1) ◽  
pp. 65-77 ◽  
Author(s):  
Anthony G. Pakes

This paper develops the notion of the limiting age of an absorbing Markov chain, conditional on the present state. Chains with a single absorbing state {0} are considered, and with such a chain can be associated a return chain, obtained by restarting the original chain at a fixed state after each absorption. The limiting age, A(j), is the weak limit, as n → ∞, of the elapsed time since the last restart, given Xn = j. A criterion for the existence of this limit is given, and this is shown to be fulfilled in the case of the return chains constructed from the Galton–Watson process and the left-continuous random walk. Limit theorems for A(j) as j → ∞ are given for these examples.


2011 ◽  
Vol 43 (3) ◽  
pp. 782-813 ◽  
Author(s):  
M. Jara ◽  
T. Komorowski

In this paper we consider the scaled limit of a continuous-time random walk (CTRW) based on a Markov chain {Xn, n ≥ 0} and two observables, τ(·) and V(·), corresponding to the renewal times and jump sizes. Assuming that these observables belong to the domains of attraction of some stable laws, we give sufficient conditions on the chain that guarantee the existence of the scaled limits for CTRWs. An application of the results to a process that arises in quantum transport theory is provided. The results obtained in this paper generalize earlier results contained in Becker-Kern, Meerschaert and Scheffler (2004) and Meerschaert and Scheffler (2008), and the recent results of Henry and Straka (2011) and Jurlewicz, Kern, Meerschaert and Scheffler (2010), where {Xn, n ≥ 0} is a sequence of independent and identically distributed random variables.
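A CTRW in the i.i.d. special case can be simulated directly: jumps V are ±1 and renewal times τ are heavy-tailed, so that τ lies in the domain of attraction of a stable law with index α < 1 and the walk is subdiffusive. The Pareto parametrization and the choice α = 1/2 below are illustrative assumptions, not the paper's setting.

```python
import random

def simulate_ctrw(t_max, alpha=0.5, seed=1):
    """Simulate a continuous-time random walk with i.i.d. +/-1 jumps (V)
    and Pareto(alpha) waiting times (tau) on [1, inf); for alpha < 1 the
    mean waiting time is infinite.  Returns the position at time t_max."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        tau = rng.random() ** (-1.0 / alpha)   # Pareto(alpha) renewal time
        if t + tau > t_max:
            return x                           # no further jump before t_max
        t += tau
        x += rng.choice((-1, 1))               # jump size V
```

Averaging x(t)^2 over many independent runs, the mean squared displacement grows roughly like t^alpha rather than linearly in t, which is the subdiffusive signature of the stable scaling limits studied in the paper.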


2010 ◽  
Vol 10 (5&6) ◽  
pp. 509-524
Author(s):  
M. Mc Gettrick

We investigate the quantum versions of a one-dimensional random walk whose corresponding Markov chain is of order 2. This corresponds to the walk having a memory of one previous step. We derive the amplitudes and probabilities for these walks, and point out how they differ from both classical random walks and quantum walks without memory.
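The memory-one walks of the paper need an enlarged coin space; as a grounded baseline, here is the memoryless Hadamard walk that such walks are compared against (a standard construction, not the paper's model). Amplitudes are tracked exactly, and the characteristic quantum asymmetry appears from the third step on.

```python
import math

def hadamard_walk(steps):
    """Discrete-time Hadamard walk on the line, walker starting at 0
    with coin state 'R' (R shifts +1, L shifts -1).  Amplitudes are
    kept in a dict keyed by (position, coin)."""
    h = 1.0 / math.sqrt(2.0)
    amp = {(0, 'R'): 1.0 + 0.0j}
    for _ in range(steps):
        nxt = {}
        for (x, c), a in amp.items():
            # Hadamard coin: R -> (R + L)/sqrt(2),  L -> (R - L)/sqrt(2)
            for c2, phase in (('R', h), ('L', h if c == 'R' else -h)):
                x2 = x + 1 if c2 == 'R' else x - 1   # conditional shift
                nxt[(x2, c2)] = nxt.get((x2, c2), 0.0) + phase * a
        amp = nxt
    # collapse the coin: probability distribution over positions
    prob = {}
    for (x, c), a in amp.items():
        prob[x] = prob.get(x, 0.0) + abs(a) ** 2
    return prob

p = hadamard_walk(2)
# After two steps: P(-2) = 1/4, P(0) = 1/2, P(2) = 1/4 -- still symmetric,
# and identical to the classical walk; the quantum interference shows up
# as asymmetry (a rightward bias for this initial coin) from step 3 on.
```

A memory-one version would replace the two-dimensional coin with a register recording the previous step direction, so the state is keyed by (position, previous direction, coin); how that register is updated is exactly what distinguishes the models studied in the paper.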

