Markov Tail Chains

2014, Vol. 51 (4), pp. 1133-1153
Author(s): A. Janssen, J. Segers

The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in R^d. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
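As a pointer to the structure involved, here is a minimal univariate illustration of a tail chain for a stochastic difference equation, written under standard Kesten-type assumptions of our own (nonnegative coefficients, regularly varying stationary law); it is a sketch, not a statement quoted from the paper.

```latex
% Sketch only: the recursion and assumptions below are illustrative,
% not reproduced from the paper.
\[
  X_n = A_n X_{n-1} + B_n, \qquad (A_n, B_n)\ \text{i.i.d.}, \quad A_n \ge 0, \qquad
  \mathbb{P}(X_0 > x) \sim c\, x^{-\alpha} \quad (x \to \infty).
\]
% Conditionally on the extreme event X_0 > u, as u -> infinity,
\[
  \Bigl( \tfrac{X_0}{u}, \tfrac{X_1}{u}, \dots, \tfrac{X_m}{u} \Bigr)
  \xrightarrow{\; d \;}
  \bigl( Y_0,\; Y_0 M_1, \dots, Y_0 M_m \bigr),
  \qquad M_n = A_1 A_2 \cdots A_n,
\]
% where Y_0 is Pareto(alpha) and independent of the multiplicative random
% walk (M_n): the additive terms B_n are negligible at extreme levels, and
% (M_n) is the forward tail chain.
```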

2011, Vol. 48 (3), pp. 766-777
Author(s): Kouji Yano, Kenji Yasutomi

An ergodic Markov chain is proved to be the realization of a random walk in a directed graph subject to a synchronizing road coloring. The result ensures the existence of appropriate random mappings in Propp-Wilson's coupling from the past. The proof is based on the road coloring theorem. A necessary and sufficient condition for approximate preservation of entropies is also given.
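To make the coupling-from-the-past connection concrete, below is a minimal sketch of Propp-Wilson sampling driven by i.i.d. random mappings of the state space, for a hypothetical 3-state ergodic chain of our own; it does not construct the synchronizing road coloring from the paper.

```python
# Illustrative sketch (not from the paper): Propp-Wilson coupling from the past
# for a small ergodic chain, driven by i.i.d. random mappings of the state space.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])   # hypothetical ergodic transition matrix
states = np.arange(P.shape[0])

def random_map(u):
    """Map every state forward using one shared uniform variate u."""
    cum = np.cumsum(P, axis=1)
    return np.array([np.searchsorted(cum[s], u) for s in states])

def cftp(max_doubling=20):
    """Run maps from time -T to 0, doubling T until all start states coalesce."""
    T = 1
    us = []                      # reused uniforms: us[k] drives the step at time -(k+1)
    while T <= 2 ** max_doubling:
        while len(us) < T:
            us.append(rng.uniform())
        x = states.copy()        # start every state at time -T
        for k in range(T - 1, -1, -1):
            x = random_map(us[k])[x]
        if np.all(x == x[0]):    # coalescence => exact draw from the stationary law
            return x[0]
        T *= 2
    raise RuntimeError("no coalescence")

samples = [cftp() for _ in range(2000)]
print(np.bincount(samples) / len(samples))   # empirical vs. stationary distribution
```

Reusing the same uniforms whenever the time window is extended backwards is what makes the coalesced value an exact sample from the stationary distribution.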


2021, Vol. 0 (0)
Author(s): Nikolaos Halidias

In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A_1, A_2, ..., A_k.
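For comparison, here is a minimal sketch of the standard fundamental-matrix computation of absorption probabilities and mean absorption times for a finite absorbing chain (a gambler's-ruin-style example of our own choosing), plus a Doob h-transform for the mean time conditional on absorption at a given state; this is not the note's generating-function approach.

```python
# Illustrative sketch (not the paper's method): absorption probabilities and
# mean absorption times via the fundamental matrix, with an h-transform for
# the conditional case. The chain below is a hypothetical example.
import numpy as np

# Transient states 1..4 of a random walk on {0,...,5}; 0 and 5 are absorbing.
p = 0.4                                   # probability of stepping up
Q = np.zeros((4, 4))                      # transient -> transient
for i in range(4):
    if i > 0:
        Q[i, i - 1] = 1 - p
    if i < 3:
        Q[i, i + 1] = p
R = np.zeros((4, 2))                      # transient -> absorbing {0, 5}
R[0, 0] = 1 - p
R[3, 1] = p

N = np.linalg.inv(np.eye(4) - Q)          # fundamental matrix
B = N @ R                                 # B[i, a] = P(absorbed at a | start i+1)
t = N @ np.ones(4)                        # E[time to absorption | start i+1]

# Mean time conditional on being absorbed at state 0 (Doob h-transform of Q).
h = B[:, 0]
Q0 = Q * h[np.newaxis, :] / h[:, np.newaxis]
t0 = np.linalg.inv(np.eye(4) - Q0) @ np.ones(4)

print("absorption probabilities:\n", B)
print("mean absorption time:", t)
print("mean time given absorption at 0:", t0)
```

The h-transform step is what handles the conditioning when absorption at the chosen state is not certain from every starting point.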


1978, Vol. 15 (1), pp. 65-77
Author(s): Anthony G. Pakes

This paper develops the notion of the limiting age of an absorbing Markov chain, conditional on the present state. Chains with a single absorbing state {0} are considered, and with such a chain can be associated a return chain, obtained by restarting the original chain at a fixed state after each absorption. The limiting age, A(j), is the weak limit of the time given X_n = j (n → ∞). A criterion for the existence of this limit is given and this is shown to be fulfilled in the case of the return chains constructed from the Galton-Watson process and the left-continuous random walk. Limit theorems for A(j) (j → ∞) are given for these examples.
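As one possible reading of the construction (the "age" below is taken to be the time elapsed since the most recent restart, which may not match the paper's exact definition), here is a small simulation of a return chain built from a subcritical Galton-Watson process.

```python
# Exploratory sketch under our own reading of the construction, not from the
# paper: a return chain that restarts a subcritical Galton-Watson process at
# state 1 after each absorption at 0, recording the empirical "age" (time
# since the most recent restart) conditional on the present state.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
mean_offspring = 0.8                      # subcritical, so absorption is certain

def gw_step(z):
    """One Galton-Watson generation with Poisson(0.8) offspring per individual."""
    return rng.poisson(mean_offspring, size=z).sum() if z > 0 else 0

ages = defaultdict(list)
x, age = 1, 0
for n in range(500_000):
    x = gw_step(x)
    if x == 0:                            # absorbed: restart the chain at state 1
        x, age = 1, 0
    else:
        age += 1
    if n > 1_000:                         # discard burn-in, then record age given state
        ages[x].append(age)

for j in sorted(ages)[:4]:
    a = np.array(ages[j])
    print(f"state {j}: mean age {a.mean():.2f}, P(age <= 3) = {(a <= 3).mean():.2f}")
```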


2018, Vol. 10 (10), pp. 3421
Author(s): Rahel Hamad, Heiko Balzter, Kamal Kolo

Multi-temporal Landsat images from the Landsat 5 Thematic Mapper (TM), acquired in 1993, 1998, 2003 and 2008, and from the Landsat 8 Operational Land Imager (OLI), acquired in 2017, are used for analysing and predicting the spatio-temporal distributions of land use/land cover (LULC) categories in the Halgurd-Sakran Core Zone (HSCZ) of the National Park in the Kurdistan region of Iraq. The aim of this article was to explore the LULC dynamics in the HSCZ and to assess where LULC changes are expected to occur under two different business-as-usual (BAU) assumptions. Two scenarios are considered in the present study. The first scenario addresses the BAU assumption of what would happen if the 1993–1998–2003 trend had continued until 2023 under the United Nations (UN) sanctions against Iraq, and the Kurdistan region in particular, which extended from 1990 to 2003. The second scenario addresses the BAU assumption of what would happen if the 2003–2008–2017 trend continues until 2023, i.e. after the end of the UN sanctions. Future land use changes are simulated to the year 2023 using a Cellular Automata (CA)-Markov chain model under the two scenarios (Iraq under siege and Iraq after siege). Four LULC classes were classified from Landsat using Random Forest (RF), and their accuracy was evaluated using κ and overall accuracy. The CA-Markov chain method in TerrSet is applied based on the past trends of land use change from 1993 to 1998 for the first scenario and from 2003 to 2008 for the second scenario. Based on this model, predicted land use maps for 2023 are generated. Changes between the two BAU scenarios under the two conditions are analysed quantitatively as well as spatially. Overall, the results suggest a trend towards stable and homogeneous areas over the next 6 years, as shown in the second scenario. This situation will have a positive implication for the park.
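The Markov-chain component of such a projection can be sketched as follows; the class maps are hypothetical stand-ins for classified Landsat rasters, and the spatial-allocation (CA) step performed by TerrSet is not reproduced.

```python
# Illustrative sketch of the Markov-chain component of a CA-Markov projection.
# The class maps below are hypothetical, not the study's classified rasters.
import numpy as np

rng = np.random.default_rng(42)
n_classes = 4
lc_1993 = rng.integers(0, n_classes, size=(200, 200))      # hypothetical class map
# Build a correlated later map by randomly re-labelling 10% of pixels.
lc_1998 = lc_1993.copy()
change = rng.random(lc_1993.shape) < 0.10
lc_1998[change] = rng.integers(0, n_classes, size=change.sum())

# Cross-tabulate the two dates and row-normalise to get transition probabilities.
T = np.zeros((n_classes, n_classes))
np.add.at(T, (lc_1993.ravel(), lc_1998.ravel()), 1)
P = T / T.sum(axis=1, keepdims=True)                        # 5-year transition matrix

# Project class proportions from 1998 to 2023 (five 5-year steps).
v = np.bincount(lc_1998.ravel(), minlength=n_classes) / lc_1998.size
v_2023 = v @ np.linalg.matrix_power(P, 5)
print("projected 2023 class proportions:", np.round(v_2023, 3))
```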


2011, Vol. 43 (3), pp. 782-813
Author(s): M. Jara, T. Komorowski

In this paper we consider the scaled limit of a continuous-time random walk (CTRW) based on a Markov chain {X_n, n ≥ 0} and two observables, τ(·) and V(·), corresponding to the renewal times and jump sizes. Assuming that these observables belong to the domains of attraction of some stable laws, we give sufficient conditions on the chain that guarantee the existence of the scaled limits for CTRWs. An application of the results to a process that arises in quantum transport theory is provided. The results obtained in this paper generalize earlier results contained in Becker-Kern, Meerschaert and Scheffler (2004) and Meerschaert and Scheffler (2008), and the recent results of Henry and Straka (2011) and Jurlewicz, Kern, Meerschaert and Scheffler (2010), where {X_n, n ≥ 0} is a sequence of independent and identically distributed random variables.
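A simulation sketch of the CTRW construction follows; the modulating chain, the renewal-time and jump observables, and all numerical choices are illustrative assumptions, and no convergence claim is verified.

```python
# Simulation sketch of a CTRW built on a two-state Markov chain {X_n}, with
# heavy-tailed renewal times tau(X_n) and jumps V(X_n). Construction only;
# all numerical choices are illustrative.
import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                      # hypothetical ergodic chain
beta = 0.7                                      # waiting-time tail index in (0, 1)

def simulate_ctrw(n_steps, t_grid):
    x = 0
    pos, clock, path, ti = 0.0, 0.0, [], 0
    for _ in range(n_steps):
        x = rng.choice(2, p=P[x])               # advance the modulating chain
        tau = rng.pareto(beta) + 1.0            # heavy-tailed renewal time
        v = rng.normal(loc=(1.0 if x == 0 else -1.0))   # state-dependent jump
        clock += tau
        while ti < len(t_grid) and t_grid[ti] < clock:
            path.append(pos)                    # CTRW sits at `pos` until the renewal
            ti += 1
        pos += v
    while ti < len(t_grid):
        path.append(pos)
        ti += 1
    return np.array(path)

t_grid = np.linspace(0.0, 10_000.0, 200)
print(simulate_ctrw(100_000, t_grid)[:5])
```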


Author(s): Afdelia Novianti, Dina Tri Utari

Java is one of the most fertile and densely populated islands in Indonesia, but it is also among the areas most frequently hit by natural disasters; Klaten Regency is one such area. A natural disaster is an event caused by nature that threatens and disrupts human life. Some of the natural disasters that often occur together in Klaten Regency are floods, landslides, and hurricanes, which usually occur during the rainy season. The government therefore needs to act on the probability of a disaster occurring in order to optimize disaster management. This study aims to determine the probability of natural disasters occurring in the next few years. Forecasting is carried out using the Markov chain method, in which the probability for a future period is estimated from the current period's probability based on the characteristics of past periods. The steady-state probabilities are reached for floods and landslides in period 36 (December 2023) and for hurricanes in period 15 (March 2022), with disaster probabilities of 34.21%, 15.38%, and 73.53%, respectively.

Received August 31, 2021; Revised October 27, 2021; Accepted November 11, 2021
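As a minimal illustration of the steady-state computation, here is a sketch with a hypothetical two-state transition matrix (not the study's estimates).

```python
# Minimal sketch: steady-state probabilities of a two-state Markov chain for
# "disaster" vs. "no disaster" periods, via the normalised left eigenvector of P.
# The matrix P is hypothetical, not estimated from the study's data.
import numpy as np

P = np.array([[0.70, 0.30],      # no disaster -> {no disaster, disaster}
              [0.55, 0.45]])     # disaster    -> {no disaster, disaster}

eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print("steady-state P(disaster) =", round(pi[1], 4))

# Equivalent check: iterate the chain until the distribution stabilises.
v = np.array([1.0, 0.0])
for _ in range(200):
    v = v @ P
print("after 200 steps:", np.round(v, 4))
```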


2010, Vol. 10 (5&6), pp. 509-524
Author(s): M. Mc Gettrick

We investigate the quantum versions of a one-dimensional random walk whose corresponding Markov chain is of order 2. This corresponds to the walk having a memory of one previous step. We derive the amplitudes and probabilities for these walks and point out how they differ from both classical random walks and quantum walks without memory.
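For reference, here is a sketch of the memoryless baseline, a standard one-dimensional coined Hadamard walk; the order-2 (one-step memory) construction studied in the paper would enlarge the state space to track the previous step and is not implemented here.

```python
# Baseline sketch: a standard 1D coined (Hadamard) quantum walk *without*
# memory, i.e. the reference case the memoryful walks are compared against.
import numpy as np

steps = 100
n_pos = 2 * steps + 1                        # positions -steps..steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# psi[x, c]: amplitude at position x with coin state c (0 = left, 1 = right)
psi = np.zeros((n_pos, 2), dtype=complex)
psi[steps] = np.array([1, 1j]) / np.sqrt(2)  # symmetric initial coin state at the origin

for _ in range(steps):
    psi = psi @ H.T                          # coin toss on every site
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]             # coin 0 amplitudes move one site left
    shifted[1:, 1] = psi[:-1, 1]             # coin 1 amplitudes move one site right
    psi = shifted

positions = np.arange(-steps, steps + 1)
prob = (np.abs(psi) ** 2).sum(axis=1)
print("total probability:", round(float(prob.sum()), 6))
print("std of position:", float(np.sqrt((prob * positions ** 2).sum())))
```

The printed spread grows linearly in the number of steps (ballistic), in contrast with the √n spread of the corresponding classical walk.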

