First Hitting Problems for Markov Chains That Converge to a Geometric Brownian Motion

2011, Vol 2011, pp. 1-15
Author(s): Mario Lefebvre, Moussa Kounta

We consider a discrete-time Markov chain with state space {1, 1+Δx, …, 1+kΔx = N}. We compute explicitly the probability p_j that the chain, starting from 1+jΔx, will hit N before 1, as well as the expected number d_j of transitions needed to end the game (that is, to hit either boundary). In the limit when Δx and the time Δt between transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that p_j and d_jΔt tend to the corresponding quantities for the geometric Brownian motion.
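As an illustration of the discrete side of this setup, hitting probabilities and expected durations for any finite chain with two absorbing boundaries can be obtained by solving the standard first-step linear systems (I − Q)p = b and (I − Q)d = 1, where Q is the transition matrix restricted to the interior states. The sketch below is not the authors' explicit formulae: it indexes states 0, …, N instead of 1, 1+Δx, …, N and uses a symmetric random walk as a placeholder transition law.

```python
import numpy as np

def hitting_quantities(P):
    """For a chain on states 0..N with absorbing boundaries 0 and N, return for
    each interior state j the probability p_j of hitting N before 0 and the
    expected number d_j of transitions until absorption, via the first-step
    systems (I - Q) p = b and (I - Q) d = 1."""
    N = P.shape[0] - 1
    interior = list(range(1, N))
    Q = P[np.ix_(interior, interior)]   # interior-to-interior transition block
    b = P[interior, N]                  # one-step probabilities of jumping straight to N
    I = np.eye(len(interior))
    p = np.linalg.solve(I - Q, b)
    d = np.linalg.solve(I - Q, np.ones(len(interior)))
    return p, d

# Placeholder dynamics: a symmetric simple random walk on {0, ..., 10}.
N = 10
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0                 # absorbing boundaries
for j in range(1, N):
    P[j, j - 1] = P[j, j + 1] = 0.5
p, d = hitting_quantities(P)
print(p)  # equals j/N for the symmetric walk
print(d)  # equals j*(N - j) for the symmetric walk
```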

1984, Vol 21 (3), pp. 567-574
Author(s): Atef M. Abdel-Moneim, Frederick W. Leysieffer

Conditions are given under which a function of a finite, discrete-time Markov chain X(t) is again Markov when X(t) is not irreducible. These conditions are stated in terms of the interrelationship between two partitions of the state space of X(t): the partition induced by the minimal essential classes of X(t), and the partition with respect to which lumping is to be considered.
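For intuition on the lumping question, the classical sufficient condition (Kemeny–Snell strong lumpability) is easy to check numerically: within every block of the candidate partition, all states must have the same total one-step probability of entering each block. The sketch below tests only that textbook condition, not the paper's refined conditions for chains that are not irreducible; the 4-state matrix is a made-up example.

```python
import numpy as np

def is_strongly_lumpable(P, partition, tol=1e-10):
    """Kemeny-Snell strong-lumpability test: for every pair of blocks (B, C) of the
    partition, the block row sums P[i, C] must agree for all states i in B."""
    for block in partition:
        rows = sorted(block)
        for target in partition:
            cols = sorted(target)
            row_sums = P[np.ix_(rows, cols)].sum(axis=1)
            if not np.allclose(row_sums, row_sums[0], atol=tol):
                return False
    return True

# Example: a 4-state chain lumped into the pairs {0, 1} and {2, 3}.
P = np.array([[0.2, 0.3, 0.3, 0.2],
              [0.3, 0.2, 0.1, 0.4],
              [0.5, 0.0, 0.25, 0.25],
              [0.1, 0.4, 0.25, 0.25]])
print(is_strongly_lumpable(P, [{0, 1}, {2, 3}]))  # True: the lumped process is again Markov
```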


2000, Vol 37 (3), pp. 795-806
Author(s): Laurent Truffet

We propose in this paper two methods to compute Markovian bounds for monotone functions of a discrete-time homogeneous Markov chain evolving in a totally ordered state space. The main interest of such methods is to provide algorithms that simplify the analysis of transient characteristics such as the output process of a queue or the sojourn time in a subset of states. The construction of the bounds is based on two kinds of results: well-known results on stochastic comparison between Markov chains with the same state space, and the fact that in some cases a function of a Markov chain is again a homogeneous Markov chain but with a smaller state space. Computation of the bounds in principle uses knowledge of the whole initial model; however, only part of this data is necessary at each step of the algorithms.
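The stochastic-comparison ingredient such bounds rely on is easy to state numerically: on a totally ordered state space, one transition matrix dominates another in the ≤st sense when every row's tail sums dominate, and a bounding chain is usually also required to be ≤st-monotone so that the row-wise comparison propagates in time. The two helper functions below are a generic sketch of those checks, not the paper's algorithms.

```python
import numpy as np

def st_dominates(P_low, P_up, tol=1e-12):
    """Row-wise <=_st comparison: every tail sum sum_{j >= k} P_up[i, j] must be at
    least the corresponding tail sum of P_low."""
    tails_low = np.cumsum(P_low[:, ::-1], axis=1)[:, ::-1]
    tails_up = np.cumsum(P_up[:, ::-1], axis=1)[:, ::-1]
    return bool(np.all(tails_up >= tails_low - tol))

def is_st_monotone(P):
    """A transition matrix is <=_st-monotone when each row stochastically dominates
    the row of the next-lower state."""
    return all(st_dominates(P[i:i + 1], P[i + 1:i + 2]) for i in range(P.shape[0] - 1))

# Example: a small birth-death-like chain and a crude upper-bounding chain.
P = np.array([[0.7, 0.3, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])
P_up = np.tile(np.array([0.0, 0.0, 1.0]), (3, 1))  # always jump to the highest state
print(is_st_monotone(P))       # True
print(st_dominates(P, P_up))   # True: P_up is a (very loose) upper bound for P
```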


1980, Vol 17 (1), pp. 33-46
Author(s): S. Tavaré

The connection between the age distribution of a discrete-time Markov chain and a certain time-reversed Markov chain is exhibited. A method for finding properties of age distributions follows simply from this approach. The results, which have application in several areas in applied probability, are illustrated by examples from population genetics.
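The time reversal exploited here is straightforward to compute for a finite irreducible chain: if π is the stationary distribution of P, the reversed chain has transition probabilities P̃(i, j) = π(j)P(j, i)/π(i). The sketch below merely builds P̃ numerically; the age-distribution identities themselves are in the paper.

```python
import numpy as np

def time_reversal(P):
    """Stationary distribution pi of an irreducible chain P, together with the
    transition matrix of its time reversal, P_rev[i, j] = pi[j] * P[j, i] / pi[i]."""
    n = P.shape[0]
    # Solve pi P = pi together with the normalisation sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    P_rev = (pi[None, :] * P.T) / pi[:, None]
    return pi, P_rev

# Example: a small 3-state chain and its reversal.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.0, 0.6]])
pi, P_rev = time_reversal(P)
print(pi)                  # stationary distribution
print(P_rev.sum(axis=1))   # rows of the reversed chain still sum to 1
```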


2005, Vol 2005 (3), pp. 345-351
Author(s): Lakhdar Aggoun

We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation maximization (EM) algorithm.
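One plausible reading of "a Markov chain observed through another Markov chain" is that the observation process Y is itself a chain whose transition law depends on the current hidden state X. The sketch below implements only a forward (filtering) recursion for that illustrative model; it is not the paper's smoothing or EM recursions, and the model details are assumptions made for the example.

```python
import numpy as np

def forward_filter(A, C, y, prior=None):
    """Forward filter for a hidden chain X (transition matrix A, shape (nx, nx))
    observed through a chain Y whose transitions depend on the current hidden
    state: C[x, i, j] = P(Y_t = j | Y_{t-1} = i, X_t = x).  Returns the filtered
    distributions P(X_t = . | Y_0, ..., Y_t).  Illustrative model only."""
    nx = A.shape[0]
    alpha = np.full(nx, 1.0 / nx) if prior is None else np.asarray(prior, float)
    filtered = [alpha]
    for t in range(1, len(y)):
        likelihood = C[:, y[t - 1], y[t]]    # P(Y_t | Y_{t-1}, X_t = x) for each x
        alpha = likelihood * (A.T @ alpha)   # predict through A, then reweight by the observation
        alpha = alpha / alpha.sum()
        filtered.append(alpha)
    return np.array(filtered)

# Two hidden states, binary observations.
A = np.array([[0.9, 0.1], [0.2, 0.8]])
C = np.array([[[0.8, 0.2], [0.6, 0.4]],   # Y-transition matrix when X_t = 0
              [[0.3, 0.7], [0.1, 0.9]]])  # Y-transition matrix when X_t = 1
print(forward_filter(A, C, y=[0, 1, 1, 0, 1]))
```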


Author(s): Marcel F. Neuts

We consider a stationary discrete-time Markov chain with a finite number m of possible states, which we designate by 1, …, m. We assume that at time t = 0 the process is in an initial state i with probability p_i (i = 1, …, m), where p_i ≥ 0 for each i and p_1 + ⋯ + p_m = 1.


Author(s): Antonio Fernandez-Morales

This paper describes the application of an online interactive simulator of discrete-time Markov chains to an automobile insurance model. Based on the D3.js library, an interactive visual animation depicts the dynamics of individual policyholders in a bonus-malus system of automobile insurance. A survey was conducted among MSc students who used the simulator to obtain a preliminary assessment of the perceived usefulness in several dimensions of their learning process. The main findings indicate that flexible access via different devices was the most valued feature of this resource. In addition, the possibility of experimenting and simulating by means of controlling the main parameter of the model was also found to be particularly useful.
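For readers without access to the online simulator, the underlying dynamics can be mimicked in a few lines: a policyholder moves between bonus-malus classes according to the number of claims each year, with claims driven by a Poisson rate (the controllable parameter mentioned above). The transition rules and the rate used below are generic placeholders, not the specific system studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_policyholder(n_years, n_classes=6, lam=0.1, start=5):
    """Toy bonus-malus trajectory: one class down (towards the best discount) after a
    claim-free year, two classes up per claim, capped at the worst class.  The rules
    and the claim rate `lam` are placeholders, not the paper's model."""
    cls, path = start, [start]
    for _ in range(n_years):
        claims = rng.poisson(lam)   # number of claims filed this year
        cls = max(cls - 1, 0) if claims == 0 else min(cls + 2 * claims, n_classes - 1)
        path.append(cls)
    return path

print(simulate_policyholder(20))
```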

