The Study on Convergence and Convergence Rate of Genetic Algorithm Based on an Absorbing Markov Chain

2012
Vol 239-240
pp. 1511-1515
Author(s):
Jing Jiang
Li Dong Meng
Xiu Mei Xu

Convergence is one of the most important theoretical issues in the study of genetic algorithms (GAs). This paper analyses a sufficient condition that guarantees the convergence of a GA. By analyzing the convergence rate of a GA, its average computational complexity can be inferred and its optimization efficiency judged. The paper proposes an approach to calculating the expected first hitting time and uses it to derive bounds on the first hitting time of a concrete GA.
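The expected first hitting time of an absorbing Markov chain, which underlies the convergence-rate analysis above, can be computed from the fundamental matrix N = (I - Q)^(-1), where Q is the transition block among transient states. A minimal sketch with a hypothetical 3-state chain (the matrix below is illustrative, not taken from the paper):

```python
import numpy as np

# Toy absorbing Markov chain: states 0 and 1 are transient, state 2 is absorbing.
# P is a hypothetical transition matrix chosen for illustration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])

Q = P[:2, :2]                       # transitions among transient states
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix N = (I - Q)^(-1)
expected_hitting = N @ np.ones(2)   # expected steps to absorption from each transient state
print(expected_hitting)
```

Row i of `expected_hitting` is the mean number of steps until absorption when the chain starts in transient state i; for a GA modelled as an absorbing chain this is exactly the expected first hitting time of the optimum.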

1980
Vol 17 (3)
pp. 716-725
Author(s):
Manish C. Bhattacharjee
Sujit K. Basu

For a Markov chain in which transitions are optional, except for those to an arbitrary fixed state accessible from all others, Kesten and Spitzer proved the existence of a control policy that minimizes the expected time to reach the fixed state and, for constructing an optimal policy, proposed an algorithm that works in certain cases. They gave a sufficient condition for the algorithm to work, but it breaks down when there are countably many states and the minimal hitting time is bounded. We propose a modified algorithm which is shown to work under a weaker sufficient condition. In the bounded case with countably many states, the proposed sufficient condition is not necessary, but a similar condition is. In the unbounded case, as well as when the state space is finite, the proposed condition is shown to be both necessary and sufficient for the original Kesten–Spitzer algorithm to work. A new iterative algorithm which can be used in all cases is given.
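The kind of problem described above, choosing among optional transitions to minimize the expected time to reach a fixed state, can be illustrated on a finite chain with standard value iteration. This is a generic sketch, not the Kesten–Spitzer construction or the authors' modified algorithm; the transition rows are hypothetical:

```python
import numpy as np

# States 0..3; state 0 is the fixed target state. In states 1..3 we may choose
# among two hypothetical transition rows (the "optional transitions").
actions = {
    1: [np.array([0.5, 0.5, 0.0, 0.0]), np.array([0.1, 0.0, 0.9, 0.0])],
    2: [np.array([0.3, 0.0, 0.7, 0.0]), np.array([0.0, 0.5, 0.0, 0.5])],
    3: [np.array([0.0, 0.0, 1.0, 0.0]), np.array([0.8, 0.0, 0.0, 0.2])],
}

v = np.zeros(4)  # v[s] approximates the minimal expected steps from s to state 0
for _ in range(2000):
    new_v = np.zeros(4)
    for s, rows in actions.items():
        new_v[s] = 1 + min(row @ v for row in rows)  # Bellman update
    v = new_v

# The optimal policy picks, in each state, the row attaining the minimum.
policy = {s: int(np.argmin([1 + row @ v for row in rows]))
          for s, rows in actions.items()}
```

Value iteration converges here because every stationary policy eventually reaches state 0; the subtleties the abstract addresses arise precisely when the state space is countably infinite and such finite-dimensional iteration is unavailable.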


2008
Vol 45 (3)
pp. 640-649
Author(s):
Victor de la Peña
Henryk Gzyl
Patrick McDonald

Let Wn be a simple Markov chain on the integers. Suppose that Xn is a simple Markov chain on the integers whose transition probabilities coincide with those of Wn off a finite set. We prove that there is an M > 0 such that the Markov chain Wn and the joint distributions of the first hitting time and first hitting place of Xn started at the origin for the sets {-M, M} and {-(M + 1), (M + 1)} algorithmically determine the transition probabilities of Xn.
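The joint distribution of the first hitting time and first hitting place of a set {-M, M}, the data the theorem above works from, can be computed for a concrete chain by propagating the law of the walk on the interior states. A minimal sketch for a nearest-neighbour walk started at the origin (the up-probability map `p` is a hypothetical parameter, not a chain from the paper):

```python
import numpy as np

def hitting_time_place(p, M, tmax):
    """Joint law of (first hitting time, hitting place) of {-M, M} for a
    nearest-neighbour walk on the integers started at 0, where p[x] is the
    probability of stepping +1 from x. Illustrative sketch only."""
    states = list(range(-M + 1, M))            # interior states
    idx = {x: i for i, x in enumerate(states)}
    dist = np.zeros(len(states))
    dist[idx[0]] = 1.0                          # walk starts at the origin
    joint = {}                                  # (time, place) -> probability
    for n in range(1, tmax + 1):
        new = np.zeros_like(dist)
        hit_right = hit_left = 0.0
        for x in states:
            mass = dist[idx[x]]
            if x + 1 == M:
                hit_right += mass * p[x]        # exits through the right boundary
            else:
                new[idx[x + 1]] += mass * p[x]
            if x - 1 == -M:
                hit_left += mass * (1 - p[x])   # exits through the left boundary
            else:
                new[idx[x - 1]] += mass * (1 - p[x])
        joint[(n, M)] = hit_right
        joint[(n, -M)] = hit_left
        dist = new
    return joint

# Symmetric walk, M = 2: the first possible exit is at time 2, probability 1/4 per side.
p = {x: 0.5 for x in range(-1, 2)}
joint = hitting_time_place(p, 2, 10)
```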


2017
Vol 2017
pp. 1-7
Author(s):
Moussa Kounta

We investigate the probability of the first hitting time of a discrete Markov chain that converges weakly to the Bessel process. Both the probability that the chain hits a given boundary before the other and the average number of transitions are computed explicitly. Furthermore, we show that these quantities tend (in the Euclidean metric) to the corresponding ones for the Bessel process.
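The two quantities computed in the abstract, the probability of hitting one boundary before the other and the mean number of transitions, satisfy standard linear systems for a birth–death chain. A hedged sketch on {0, ..., N} (a generic chain with hypothetical up-probabilities, not the Bessel-approximating chain of the paper):

```python
import numpy as np

def boundary_quantities(p, N):
    """For a birth-death chain on {0,...,N} with up-probability p[i] at interior
    state i, solve h(i) = P(hit N before 0 | start i) and k(i) = expected number
    of transitions until hitting {0, N}. Illustrative only."""
    interior = range(1, N)
    A = np.zeros((N - 1, N - 1))
    bh = np.zeros(N - 1)       # right-hand side for hitting probabilities
    bk = np.ones(N - 1)        # right-hand side for expected hitting times
    for row, i in enumerate(interior):
        A[row, row] = 1.0
        if i + 1 < N:
            A[row, row + 1] = -p[i]
        else:
            bh[row] += p[i]    # hitting N contributes h(N) = 1
        if i - 1 > 0:
            A[row, row - 1] = -(1 - p[i])
        # the step toward 0 or N contributes nothing: h(0) = 0, k(0) = k(N) = 0
    h = np.linalg.solve(A, bh)
    k = np.linalg.solve(A, bk)
    return h, k

# Symmetric case p = 1/2 (gambler's ruin): h(i) = i/N and k(i) = i*(N - i).
h, k = boundary_quantities({i: 0.5 for i in range(1, 4)}, 4)
```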


Author(s):
Wenming Hong
Huaming Wang

We work out the intrinsic branching structure within the (L-1) random walk in a random environment. As applications, the branching structure enables us to calculate the expectation of the first hitting time directly, and to specify explicitly the density of the invariant measure for the Markov chain of "the environment viewed from the particle".


2009
Vol 79 (23)
pp. 2422-2428
Author(s):
Ken Jackson
Alexander Kreinin
Wanhe Zhang

1978
Vol 15 (1)
pp. 65-77
Author(s):
Anthony G. Pakes

This paper develops the notion of the limiting age of an absorbing Markov chain, conditional on the present state. Chains with a single absorbing state {0} are considered; with such a chain one can associate a return chain, obtained by restarting the original chain at a fixed state after each absorption. The limiting age, A(j), is the weak limit of the time given Xn = j (n → ∞). A criterion for the existence of this limit is given and is shown to be fulfilled for the return chains constructed from the Galton–Watson process and the left-continuous random walk. Limit theorems for A(j) (j → ∞) are given for these examples.

