Predicting the probability of transforming different classes of monthly droughts in Iran

Author(s): Peyman Mahmoudi, Allahbakhsh Rigi

The main objective of this study was to predict the transition probabilities between different drought classes by applying homogeneous and non-homogeneous Markov chain models. Daily precipitation data from 40 synoptic stations in Iran over a 35-year period (1983–2018) were used to meet the study objectives. The Effective Drought Index (EDI) was applied to categorize Iran's droughts. Cluster analysis of the daily EDI values showed that Iran can be divided into five separate regions based on the behavior of the time series of the studied stations, and the spatial mean EDI of each region was calculated. After forming the transition frequency matrix, the dependence of the data was tested with a chi-square test, which confirmed that successive drought classes are correlated in all five studied regions. Finally, after constructing the transition probability matrices for the studied regions, homogeneous and non-homogeneous Markov chains were fitted and the Markov characteristics of the droughts were extracted, including the probabilities of the various drought-severity classes, the expected residence time in each drought class, the expected first passage time from the various drought classes to the wet classes, and short-term predictions of the drought classes. Across these climatic regions, the results showed that the probability of each class decreases as drought severity increases from the weak class to the severe and very severe classes. In the non-homogeneous Markov chain, the class probabilities for winter, spring, and fall indicated that the weak drought class is more likely than the other classes. Since the non-homogeneous predictions are conditioned on the preceding months, they were more accurate than those of the homogeneous Markov chain. In general, both Markov chains produced favorable results that can be very useful for water resource planners.
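The quantities named in the abstract (transition frequency matrix, transition probability matrix, expected residence time, short-term class prediction) can be sketched for a homogeneous chain as follows. This is a minimal illustration on synthetic data; the class labels, the EDI computation, and the paper's station data are not reproduced here.

```python
import numpy as np

# Hypothetical monthly drought-class sequence (0 = wet ... 3 = very severe);
# synthetic stand-in for the EDI-derived classes used in the paper.
rng = np.random.default_rng(0)
seq = rng.integers(0, 4, size=500)

k = 4
counts = np.zeros((k, k))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1  # transition frequency matrix

# Row-normalise frequencies into a transition probability matrix.
P = counts / counts.sum(axis=1, keepdims=True)

# Expected residence time in class i for a homogeneous chain: 1 / (1 - p_ii).
residence = 1.0 / (1.0 - np.diag(P))

# Short-term prediction: class distribution m steps ahead, starting in class 2.
m = 3
dist = np.linalg.matrix_power(P, m)[2]

print(P.round(3))
print(residence.round(2))
print(dist.round(3))
```

For the non-homogeneous case, one matrix P would be estimated per season (or month) and the prediction would chain the season-specific matrices in order.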

1997, Vol. 34 (4), pp. 847–858
Author(s): James Ledoux

We consider weak lumpability of finite homogeneous Markov chains, which is when a lumped Markov chain with respect to a partition of the initial state space is also a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant by the transition probability matrix of the original chain. It allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.


Author(s): Peter L. Chesson

Random transition probability matrices with stationary independent factors define “white noise” environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.


1981, Vol. 18 (3), pp. 747–751
Author(s): Stig I. Rosenlund

For a time-homogeneous continuous-parameter Markov chain we show that as t → 0 the transition probability p_{n,j}(t) is at least of order t^{r(n,j)}, where r(n, j) is the minimum number of jumps needed for the chain to pass from n to j. If the intensities of passage are bounded over the set of states which can be reached from n via fewer than r(n, j) jumps, this is the exact order.
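The exponent r(n, j) here is a shortest-path length in the chain's jump graph (an edge i → k wherever the jump intensity q_{ik} > 0), so a breadth-first search computes it directly. The generator below is a hypothetical four-state example, not taken from the paper.

```python
from collections import deque

# Illustrative jump intensities: state i maps to {reachable state: rate}.
Q = {
    0: {1: 1.0},
    1: {0: 2.0, 2: 1.0},
    2: {1: 2.0, 3: 1.0},
    3: {2: 2.0},
}

def min_jumps(Q, start, goal):
    """Breadth-first search for the fewest jumps from start to goal."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        i = queue.popleft()
        if i == goal:
            return seen[i]
        for k in Q.get(i, {}):
            if k not in seen:
                seen[k] = seen[i] + 1
                queue.append(k)
    return None  # goal unreachable

print(min_jumps(Q, 0, 3))  # 3 jumps along 0 -> 1 -> 2 -> 3
```

With r(0, 3) = 3, the result says p_{0,3}(t) is of order t^3 as t → 0 (exactly so here, since all intensities are bounded).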


1982, Vol. 19 (3), pp. 692–694
Author(s): Mark Scott, Barry C. Arnold, Dean L. Isaacson

Characterizations of strong ergodicity for Markov chains using mean visit times have been found by several authors (Huang and Isaacson (1977), Isaacson and Arnold (1978)). In this paper a characterization of uniform strong ergodicity for a continuous-time non-homogeneous Markov chain is given. This extends the characterization, using mean visit times, that was given by Isaacson and Arnold.


1991, Vol. 4 (4), pp. 293–303
Author(s): P. Todorovic

Let {ξn} be a non-decreasing stochastically monotone Markov chain whose transition probability Q(·,·) has Q(x,{x}) = β(x) > 0 for some function β(·) that is non-decreasing with β(x) ↑ 1 as x → +∞, and each Q(x,·) is non-atomic otherwise. A typical realization of {ξn} is a Markov renewal process {(Xn, Tn)}, where ξj = Xn for Tn consecutive values of j, with Tn geometric on {1, 2, …} with parameter β(Xn). Conditions are given for Xn to be relatively stable and for Tn to be weakly convergent.


2000, Vol. 37 (3), pp. 795–806
Author(s): Laurent Truffet

We propose in this paper two methods to compute Markovian bounds for monotone functions of a discrete-time homogeneous Markov chain evolving in a totally ordered state space. The main interest of such methods is to provide algorithms that simplify the analysis of transient characteristics such as the output process of a queue, or the sojourn time in a subset of states. Construction of the bounds is based on two kinds of results: well-known results on stochastic comparison between Markov chains with the same state space, and the fact that in some cases a function of a Markov chain is again a homogeneous Markov chain but with a smaller state space. Computation of the bounds uses knowledge of the whole initial model, but only part of this data is necessary at each step of the algorithms.


1998, Vol. 35 (3), pp. 545–556
Author(s): Masaaki Kijima

A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.
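The skip-free property characterized above is a purely structural condition on the generator: no positive rate more than one step to the right (or left). A minimal sketch, with illustrative birth and death rates (not from the paper):

```python
import numpy as np

# Generator of a continuous-time birth-death chain on {0, ..., 4}:
# skip-free in both directions by construction.
lam = [1.0, 1.0, 1.0, 1.0]  # birth rates (illustrative)
mu = [2.0, 2.0, 2.0, 2.0]   # death rates (illustrative)
n = 5
Q = np.zeros((n, n))
for i in range(n):
    if i < n - 1:
        Q[i, i + 1] = lam[i]
    if i > 0:
        Q[i, i - 1] = mu[i - 1]
    Q[i, i] = -Q[i].sum()  # rows of a generator sum to zero

def skip_free(Q, right=True):
    """Check that no jump moves more than one step right (or left)."""
    m = len(Q)
    for i in range(m):
        for j in range(m):
            if (j - i > 1 if right else i - j > 1) and Q[i, j] != 0:
                return False
    return True

print(skip_free(Q, right=True), skip_free(Q, right=False))  # True True
```

In the paper's terms, a chain that passes both checks is a birth–death process; the monotonicity results tie each check to hazard rate and reversed hazard rate ordering respectively.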


1983, Vol. 20 (3), pp. 482–504
Author(s): C. Cocozza-Thivent, C. Kipnis, M. Roussignol

We investigate how the property of null-recurrence is preserved for Markov chains under a perturbation of the transition probability. After recalling some useful criteria in terms of the one-step transition nucleus we present two methods to determine barrier functions, one in terms of taboo potentials for the unperturbed Markov chain, and the other based on Taylor's formula.


2019, Vol. 44 (3), pp. 282–308
Author(s): Brian G. Vegetabile, Stephanie A. Stout-Oswald, Elysia Poggi Davis, Tallie Z. Baram, Hal S. Stern

Predictability of behavior is an important characteristic in many fields including biology, medicine, marketing, and education. When a sequence of actions performed by an individual can be modeled as a stationary time-homogeneous Markov chain, the predictability of the individual’s behavior can be quantified by the entropy rate of the process. This article compares three estimators of the entropy rate of finite Markov processes. The first two methods directly estimate the entropy rate through estimates of the transition matrix and stationary distribution of the process. The third method is related to the sliding-window Lempel–Ziv compression algorithm. The methods are compared via a simulation study and in the context of a study of interactions between mothers and their children.
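The first, plug-in, style of estimator can be sketched as follows: estimate the transition matrix from the observed sequence, recover the stationary distribution as its leading left eigenvector, and combine them into the entropy rate H = −Σ_i π_i Σ_j p_ij log p_ij. The two-state chain and all names here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
# True two-state chain used only to simulate an observed sequence.
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])
n = 5000
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = rng.choice(2, p=P_true[x[t - 1]])

# Estimate the transition matrix from observed transition counts.
C = np.zeros((2, 2))
for a, b in zip(x[:-1], x[1:]):
    C[a, b] += 1
P_hat = C / C.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P_hat for eigenvalue 1.
w, v = np.linalg.eig(P_hat.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Plug-in entropy rate in nats: H = -sum_i pi_i sum_j p_ij log p_ij.
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(P_hat > 0, P_hat * np.log(P_hat), 0.0)
H = -(pi @ terms.sum(axis=1))
print(round(float(H), 3))
```

For this chain the stationary distribution is (0.75, 0.25) and the true entropy rate is about 0.40 nats, well below the log 2 ≈ 0.69 of a memoryless coin flip, so the estimate quantifies how predictable the process is.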

