On the uniform ergodicity of Markov processes of order 2

2003 ◽  
Vol 40 (2) ◽  
pp. 455-472 ◽  
Author(s):  
Ulrich Herkenrath

We study the uniform ergodicity of Markov processes (Zn, n ≥ 1) of order 2 with a general state space (Z, 𝒵). Markov processes of order higher than 1 were defined in the literature long ago, but scarcely treated in detail. We take as the basis for our considerations the natural transition probability Q of such a process. A Markov process of order 2 is transformed into one of order 1 by combining two consecutive variables Z2n–1 and Z2n into one variable Yn with values in the Cartesian product space (Z × Z, 𝒵 ⊗ 𝒵). Thus, a Markov process (Yn, n ≥ 1) of order 1 with transition probability R is generated. Uniform ergodicity for the process (Zn, n ≥ 1) is defined in terms of the same property for (Yn, n ≥ 1). We give some conditions on the transition probability Q which transfer to R and thus ensure the uniform ergodicity of (Zn, n ≥ 1). We apply the general results to study the uniform ergodicity of Markov processes of order 2 which arise in some nonlinear time series models and as sequences of smoothed values in sequential smoothing procedures of Markovian observations. As for the time series models, Markovian noise sequences are covered.
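The pairing construction can be sketched numerically. Below is a minimal simulation, assuming a hypothetical two-point state space and an illustrative order-2 kernel Q (the states, the numbers in Q, and the helper names are invented for illustration, not taken from the paper): one step of the induced order-1 chain Yn = (Z2n–1, Z2n) draws two consecutive Z-values from Q.

```python
import random

random.seed(0)

# Hypothetical finite state space and order-2 transition kernel Q:
# Q[(z_prev, z_curr)] gives the distribution of the next state.
STATES = [0, 1]
Q = {
    (0, 0): [0.9, 0.1],
    (0, 1): [0.5, 0.5],
    (1, 0): [0.5, 0.5],
    (1, 1): [0.1, 0.9],
}

def step_Q(z1, z2):
    """Draw the next Z-value from Q(z1, z2; .)."""
    return random.choices(STATES, weights=Q[(z1, z2)])[0]

def step_R(y):
    """One step of the order-1 chain Y_n = (Z_{2n-1}, Z_{2n}):
    given y = (z1, z2), draw z3 ~ Q(z1, z2; .) then z4 ~ Q(z2, z3; .)."""
    z1, z2 = y
    z3 = step_Q(z1, z2)
    z4 = step_Q(z2, z3)
    return (z3, z4)

y = (0, 0)
for _ in range(5):
    y = step_R(y)
print(y)  # a pair of states, e.g. (1, 1)
```

The point of the construction is visible in `step_R`: the pair-valued chain is genuinely order 1, since its next value depends only on the current pair.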


1966 ◽  
Vol 3 (1) ◽  
pp. 48-54 ◽  
Author(s):  
William F. Massy

Most empirical work on Markov processes for brand choice has been based on aggregative data. This article explores the validity of the crucial assumption that underlies such analyses, i.e., that all the families in the sample follow a Markov process with the same or similar transition probability matrices. The results show that there is a great deal of diversity among families’ switching processes, and that many of them are of zero rather than first order.
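The zero-order versus first-order distinction can be made concrete: a family's switching is zero order when the next purchase does not depend on the previous brand, i.e. when all rows of its transition matrix coincide. A small sketch of such a per-family check (brand names, tolerance, and function names are illustrative, not Massy's procedure):

```python
import random
from collections import Counter

random.seed(0)
BRANDS = ['A', 'B']

def transition_matrix(seq, brands):
    """Estimate one family's brand-switching matrix from its purchase sequence."""
    counts = {b: Counter() for b in brands}
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    return {b: {c: counts[b][c] / max(sum(counts[b].values()), 1) for c in brands}
            for b in brands}

def looks_zero_order(P, brands, tol=0.05):
    """Crude check: under a zero-order process the rows of P coincide."""
    rows = [[P[b][c] for c in brands] for b in brands]
    return all(abs(r[j] - rows[0][j]) <= tol for r in rows for j in range(len(brands)))

# A family choosing brands independently each purchase (zero order):
iid_family = [random.choice(BRANDS) for _ in range(20000)]
print(looks_zero_order(transition_matrix(iid_family, BRANDS), BRANDS))  # True
```

A "sticky" family that mostly repeats its last brand would fail the same check, since its two rows differ sharply.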


2010 ◽  
Vol 42 (4) ◽  
pp. 986-993 ◽  
Author(s):  
Muhamad Azfar Ramli ◽  
Gerard Leng

In this paper we generalize a bounded Markov process, described by Stoyanov and Pacheco-González, to a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived, and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given, and an application to designing a robotic coverage algorithm with specific emphasis on particular regions is discussed.
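The recursive-equation approach can be illustrated numerically (with an invented kernel, not the paper's specific model): discretize a transition density k(x, y) on the bounded state space [0, 1] and iterate f_{n+1}(y) = ∫ f_n(x) k(x, y) dx until the density stops changing.

```python
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n              # midpoint grid on [0, 1]
h = 1.0 / n

# Hypothetical transition density k(x, y) = (1 + x*y) / (1 + x/2),
# which integrates to 1 in y for every fixed x.
K = (1.0 + x[:, None] * x[None, :]) / (1.0 + x[:, None] / 2.0)

f = np.ones(n)                            # start from the uniform density
for _ in range(1000):
    f_new = h * (f @ K)                   # midpoint-rule quadrature of the integral
    if np.max(np.abs(f_new - f)) < 1e-13:
        break
    f = f_new

print(abs(h * f.sum() - 1.0) < 1e-9)      # still a probability density: True
```

The fixed point of this iteration is a discrete approximation to the stationary density; the paper instead solves an equivalent differential equation in closed form.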


1970 ◽  
Vol 7 (2) ◽  
pp. 388-399 ◽  
Author(s):  
C. K. Cheong

Our main concern in this paper is the convergence, as t → ∞, of the quantities i, j ∈ E; where Pij(t) is the transition probability of a semi-Markov process whose state space E is irreducible but not closed (i.e., escape from E is possible), and rj is the probability of eventual escape from E conditional on the initial state being i. The theorems proved here generalize some results of Seneta and Vere-Jones ([8] and [11]) for Markov processes.


2014 ◽  
Vol 91 (1) ◽  
pp. 134-144 ◽  
Author(s):  
F. ABTAHI ◽  
A. GHAFARPANAH ◽  
A. REJALI

Let ${\it\varphi}$ be a homomorphism from a Banach algebra ${\mathcal{B}}$ to a Banach algebra ${\mathcal{A}}$. We define a multiplication on the Cartesian product space ${\mathcal{A}}\times {\mathcal{B}}$ and obtain a new Banach algebra ${\mathcal{A}}\times _{{\it\varphi}}{\mathcal{B}}$. We show that biprojectivity as well as biflatness of ${\mathcal{A}}\times _{{\it\varphi}}{\mathcal{B}}$ are stable with respect to ${\it\varphi}$.
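For orientation, the multiplication on ${\mathcal{A}}\times {\mathcal{B}}$ in such $\varphi$-product constructions is commonly the one below (a standard form from the literature on $\varphi$-Lau products; the paper's own definition should be checked against it):

$$(a_{1},b_{1})\cdot (a_{2},b_{2})=\bigl(a_{1}a_{2}+a_{1}{\it\varphi}(b_{2})+{\it\varphi}(b_{1})a_{2},\ b_{1}b_{2}\bigr),\qquad a_{i}\in {\mathcal{A}},\ b_{i}\in {\mathcal{B}}.$$

With this product the second coordinate multiplies as in ${\mathcal{B}}$, while the first coordinate is corrected by the image of ${\mathcal{B}}$ under ${\it\varphi}$.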


1964 ◽  
Vol 24 ◽  
pp. 177-204 ◽  
Author(s):  
Masao Nagasawa

A time reversion of a Markov process was discussed by Kolmogoroff for Markov chains in 1936 [6] and for a diffusion in 1937 [7]. He described it as a process having an adjoint transition probability. Although his treatment is purely analytical, in his case, if the process xt has an invariant distribution, the reversed process zt = x−t is the process with the adjoint transition probability. In this discussion, however, it is very restrictive that the initial distribution of the process must be an invariant measure.
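For a finite-state chain the adjoint transition probability is explicit: if P is the transition matrix and π its invariant distribution, the reversed chain has P̂[i, j] = π[j] P[j, i] / π[i]. A small sketch (the matrix P is an arbitrary illustrative example):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Invariant distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Adjoint (time-reversed) transition matrix: P_hat[i, j] = pi[j] * P[j, i] / pi[i].
P_hat = (pi[None, :] * P.T) / pi[:, None]

print(np.allclose(P_hat.sum(axis=1), 1.0))  # True: rows sum to 1
print(np.allclose(pi @ P_hat, pi))          # True: pi is invariant for the reversal
```

This makes Kolmogoroff's restriction visible: the formula needs π, and the construction describes the reversed process only when the chain is started from that invariant measure.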


2021 ◽  
Vol 5 (3) ◽  
pp. 69
Author(s):  
Pasupathi Rajan ◽  
María A. Navascués ◽  
Arya Kumar Bedabrata Chand

The theory of iterated function systems (IFSs) has been an active area of research on fractals and various types of self-similarity in nature. The basic theoretical work on IFSs was proposed by Hutchinson. In this paper, we introduce a new generalization of the Hutchinson IFS, namely the generalized θ-contraction IFS, which is a finite collection of generalized θ-contraction functions T1,…,TN from the finite Cartesian product space X×⋯×X into X, where (X,d) is a complete metric space. We prove the existence of an attractor for this generalized IFS. We show that the Hutchinson operators for countable and multivalued θ-contraction IFSs are Picard. Finally, when the map θ is continuous, we show the relation between the code space and the attractor of the θ-contraction IFS.
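The classical Hutchinson construction, of which the θ-contraction IFS is a generalization, can be sketched directly: iterate the Hutchinson operator W(A) = ∪ᵢ Tᵢ(A) on a finite point set. The maps below are the usual ordinary contractions generating the middle-thirds Cantor set (a standard special case, not the paper's θ-contractions):

```python
def hutchinson(points, maps):
    """One application of the Hutchinson operator W(A) = union of T_i(A)."""
    return {T(p) for p in points for T in maps}

# Cantor-set IFS on [0, 1]: two contractions with ratio 1/3.
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]

A = {0.5}                      # any nonempty starting set works
for _ in range(12):
    A = hutchinson(A, maps)

print(min(A), max(A))          # the iterates approach the attractor's endpoints 0 and 1
```

Because each map is a contraction, the iterates converge (in the Hausdorff metric) to the unique attractor regardless of the starting set; the paper establishes the analogous fixed-point property for generalized θ-contractions.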

