Markov operator
Recently Published Documents

Total documents: 49 (five years: 8)
H-index: 7 (five years: 0)

Nonlinearity, 2021, Vol. 35 (1), pp. 66-83
Author(s): Fumihiko Nakamura, Yushi Nakano, Hisayoshi Toyokawa

Abstract We consider generalized definitions of mixing and exactness for random dynamical systems in terms of Markov operator cocycles. We first give six fundamental definitions of mixing for Markov operator cocycles in view of observations of the randomness in environments, and reduce them to two different groups. Secondly, we give the definition of exactness for Markov operator cocycles and show that Lin’s criterion for exactness can be naturally extended to the case of Markov operator cocycles. Finally, in the class of asymptotically periodic Markov operator cocycles, we show the Lasota–Mackey type equivalence between mixing, exactness and asymptotic stability.
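
One commonly cited single-operator form of Lin's criterion, recalled here as background (standard assumptions, not quoted from the paper): for a Markov operator $P$ on $L^{1}(m)$ of a probability space with $P\mathbf{1} = \mathbf{1}$, exactness is equivalent to

$$
\lim_{n\to\infty} \|P^{n}f\|_{L^{1}(m)} = 0 \qquad \text{for every } f \in L^{1}(m) \text{ with } \int f \, dm = 0.
$$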


2021, Vol. 11 (1), pp. 225-242
Author(s): Peter Bugiel, Stanisław Wędrychowicz, Beata Rzepka

Abstract Asymptotic properties of the sequences (a) $\{P^{j}\}_{j=1}^{\infty}$ and (b) $\{j^{-1}\sum_{i=0}^{j-1}P^{i}\}_{j=1}^{\infty}$ are studied for $g \in G = \{f \in L^{1}(I) : f \geq 0 \text{ and } \|f\| = 1\}$, where $P : L^{1}(I) \to L^{1}(I)$ is a Markov operator defined by $Pf := \int P_{y}f\,dp(y)$ for $f \in L^{1}$; $\{P_{y}\}_{y\in Y}$ is the family of the Frobenius-Perron operators associated with a family $\{\varphi_{y}\}_{y\in Y}$ of nonsingular Markov maps defined on a subset $I \subseteq \mathbb{R}^{d}$; and the index $y$ runs over a probability space $(Y, \Sigma(Y), p)$. Asymptotic properties of the sequences (a) and (b) of the Markov operator $P$ are closely connected with the asymptotic properties of the sequence of random vectors $x_{j} = \varphi_{\xi_{j}}(x_{j-1})$ for $j = 1, 2, \ldots$, where $\{\xi_{j}\}_{j=1}^{\infty}$ is a sequence of $Y$-valued independent random elements with common probability distribution $p$. An operator-theoretic analogue of Rényi's Condition is introduced for the family $\{P_{y}\}_{y\in Y}$ of the Frobenius-Perron operators. It is proved that under some additional assumptions this condition implies the $L^{1}$-convergence of the sequences (a) and (b) to a unique $g_{0} \in G$. The general result is applied to some families $\{\varphi_{y}\}_{y\in Y}$ of smooth Markov maps in $\mathbb{R}^{d}$.
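
A minimal numerical sketch of the random orbit $x_{j} = \varphi_{\xi_{j}}(x_{j-1})$ described above, assuming a hypothetical two-map family on $I = [0,1]$ (the doubling map and the logistic map, chosen purely for illustration; the paper's hypotheses concern smooth Markov maps under a Rényi-type condition). The long-run histogram of the orbit is a crude Monte Carlo proxy for the limit density $g_{0}$ of the averages in (b), when they converge.

```python
import numpy as np

# Hypothetical family {phi_y} on I = [0, 1], Y = {0, 1}; illustrative only.
maps = [
    lambda x: (2.0 * x) % 1.0,          # phi_0: doubling map
    lambda x: 4.0 * x * (1.0 - x),      # phi_1: logistic map
]
p = np.array([0.6, 0.4])                # common distribution of the xi_j

rng = np.random.default_rng(0)
n_steps, burn_in = 200_000, 1_000
x = rng.uniform()                       # x_0
orbit = np.empty(n_steps)
for j in range(n_steps):
    xi = rng.choice(2, p=p)             # xi_j ~ p, independent of the past
    x = maps[xi](x)                     # x_j = phi_{xi_j}(x_{j-1})
    orbit[j] = x

# Histogram of the orbit after burn-in: a Monte Carlo stand-in for the
# density g_0 approached by the Cesaro averages in (b), when they converge.
g0_estimate, edges = np.histogram(orbit[burn_in:], bins=100,
                                  range=(0.0, 1.0), density=True)
print(g0_estimate[:5])
```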


2021, Vol. 10 (1), pp. 972-981
Author(s): Peter Bugiel, Stanisław Wędrychowicz, Beata Rzepka

Abstract Existence of a fixed point of a Frobenius-Perron type operator $P : L^{1} \to L^{1}$ generated by a family $\{\varphi_{y}\}_{y\in Y}$ of nonsingular Markov maps defined on a σ-finite measure space $(I, \Sigma, m)$ is studied. Two fairly general conditions are established, and it is proved that they imply, for any $g \in G = \{f \in L^{1} : f \geq 0 \text{ and } \|f\| = 1\}$, the convergence (in the norm of $L^{1}$) of the sequence $\{P^{j}g\}_{j=1}^{\infty}$ to a unique fixed point $g_{0}$. The general result is applied to a family of $C^{1+\alpha}$-smooth Markov maps in $\mathbb{R}^{d}$.
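
For readers less familiar with the notation, the standard definitions behind the statement can be recalled as follows (background only, assuming, as in the companion abstract above, that $Pf = \int_{Y} P_{y}f\,dp(y)$): each nonsingular map $\varphi_{y}$ determines its Frobenius-Perron operator $P_{y}$ via

$$
\int_{A} P_{y}f \, dm = \int_{\varphi_{y}^{-1}(A)} f \, dm \qquad (A \in \Sigma,\ f \in L^{1}),
$$

and a fixed point $g_{0}$ of $P$ is then an invariant density of the associated random map, in the sense that

$$
\int_{Y}\int_{\varphi_{y}^{-1}(A)} g_{0} \, dm \, dp(y) = \int_{A} g_{0} \, dm \qquad (A \in \Sigma).
$$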


2020, pp. 1-15
Author(s): Nazife Erkurşun-Özcan, Farrukh Mukhamedov

Abstract In the present paper, we deal with asymptotic stability of Markov operators acting on abstract state spaces (i.e. ordered Banach spaces in which the norm is additive on the cone of positive elements). Basically, we are interested in the rate of convergence when a Markov operator T satisfies uniform P-ergodicity, i.e. $\|T^n-P\|\to 0$, where P is a projection. We have shown that T is uniformly P-ergodic if and only if $\|T^n-P\|\leq C\beta^n$ for some $0<\beta<1$. In this paper, we prove that such a β is characterized by the spectral radius of T − P. Moreover, we give Doeblin-type conditions for the uniform P-ergodicity of Markov operators.
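
The spectral-radius characterization mentioned in the abstract can be previewed by a standard computation (a sketch under the stated setting, not a quotation of the paper's proof): the limit projection of a uniformly P-ergodic operator satisfies $TP = PT = P = P^{2}$, hence

$$
(T-P)^{n} = T^{n} - P \qquad (n \geq 1),
$$

and Gelfand's formula gives

$$
r(T-P) = \lim_{n\to\infty}\|(T-P)^{n}\|^{1/n} = \lim_{n\to\infty}\|T^{n}-P\|^{1/n},
$$

so the geometric bound $\|T^{n}-P\| \leq C\beta^{n}$ holds for every $\beta$ with $r(T-P) < \beta < 1$ and fails for every $\beta < r(T-P)$.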


Author(s): Igor B. Shubinskiy, Leonid A. Baranov, Aleksey M. Zamyshliaev

A scientific methodology has now been created: the theory and practice of the analysis and synthesis of functional safety of safety-critical electronic programmable devices and systems have been developed for all stages of their life cycle. The basics of the methodology are fixed by standards, and the methods of analysis and synthesis of functional safety are strictly formalized. They are based on calculations of functional safety indicators with respect to failures of constituent elements and, especially, dangerous and protective failures of the system. Known calculation methods are focused on determining the rate and probability of dangerous failures. The objective of the proposed method is to establish, in graph form and without solving the system of equations in operator transformations, the distribution function of the time until a dangerous or protective failure, or any other unhealthy condition of the system. These distribution functions determine all the necessary indicators of the mean time (and, if necessary, the variance of this time) to a dangerous or protective failure. The proposed semi-Markov (Markov) operator method allows a number of problems in the calculation and prediction of the functional safety of critical (safety-related) systems to be solved. The method is formalized and suitable for subsequent computer implementation. This testifies to the expediency of further developing graph methods that are convenient for studying the safety of complex critical systems and that avoid the shortcomings of the proposed method, namely the complexity of the preparatory work needed to determine the analytical expressions for the transition probabilities in the Laplace-Stieltjes transforms. The given example of using the method has independent value: it allows the advantages and disadvantages of ensuring functional safety by building a two-channel system without restarting the channels to be assessed.
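
As a toy illustration of the kind of indicator the method targets (mean time to a dangerous failure), the snippet below computes mean absorption times for a hypothetical three-state Markov model of a two-channel system without channel restart. The rates and the dangerous-failure fraction are invented, and this is a plain absorbing-chain calculation, not the authors' graph/operator method.

```python
import numpy as np

# Hypothetical model, illustrative values only.
# States: 0 = both channels working, 1 = one channel failed,
#         2 = dangerous failure (absorbing).
lam = 1e-4   # per-hour failure rate of a single channel
d = 0.1      # fraction of channel failures that are immediately dangerous

# Generator restricted to the transient states {0, 1}
# (transitions into the absorbing state are omitted from the block).
Q = np.array([[-2.0 * lam, 2.0 * lam * (1.0 - d)],
              [0.0,        -lam]])

# Mean time to absorption from each transient state: solve (-Q) t = 1.
t = np.linalg.solve(-Q, np.ones(2))
print(f"mean time to dangerous failure from the all-working state: {t[0]:.3e} hours")
```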


2020, Vol. 30 (03), pp. 2050046
Author(s): Congming Jin, Jiu Ding

We present a rigorous convergence analysis of a linear spline Markov finite approximation method for computing stationary densities of random maps with position dependent probabilities, which consist of several chaotic maps. The whole analysis is based on a new Lasota–Yorke-type inequality for the Markov operator associated with the random map, which is better than the previous one in the literature and much simpler to obtain. We also present numerical results to support our theoretical analysis.
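
A rough numerical sketch with the same objective (a finite stochastic-matrix approximation of the Markov operator of a random map with position-dependent probabilities), using the simpler piecewise-constant Ulam discretization rather than the paper's linear spline scheme; the maps, the probability function, and all parameters below are illustrative assumptions.

```python
import numpy as np

# Hypothetical random map on [0, 1]: two chaotic maps chosen with a
# position-dependent probability p1(x); all choices are illustrative.
def phi1(x):
    return (2.0 * x) % 1.0                          # doubling map

def phi2(x):
    return (3.0 * x) % 1.0                          # tripling map

def p1(x):
    return 0.5 + 0.25 * np.cos(2.0 * np.pi * x)     # P(choose phi1 at x)

n, samples_per_cell = 200, 500
edges = np.linspace(0.0, 1.0, n + 1)
rng = np.random.default_rng(0)

# Ulam-type matrix: M[i, j] estimates the probability that a point uniformly
# distributed in cell i lands in cell j after one step of the random map.
M = np.zeros((n, n))
for i in range(n):
    x = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
    w1 = p1(x)
    for y, w in ((phi1(x), w1), (phi2(x), 1.0 - w1)):
        j = np.minimum((y * n).astype(int), n - 1)
        np.add.at(M[i], j, w)
M /= samples_per_cell                               # each row now sums to 1

# Stationary density: left eigenvector of M for eigenvalue 1, read as a
# piecewise-constant density on the n cells, normalized to integrate to 1.
vals, vecs = np.linalg.eig(M.T)
v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
density = v * n / v.sum()
print(density[:5])
```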


Author(s): Carlo Pandiscia

In this work, we propose a method to investigate the factorization property of an adjointable Markov operator between two algebraic probability spaces without using dilation theory. Assuming the existence of an anti-unitary operator on the Hilbert space related to the Stinespring representations of our Markov operator, which satisfies some particular modular relations, we prove that it admits a factorization. The method is tested on the two types of maps that we know admit a factorization: Markov operators between commutative probability spaces and adjointable homomorphisms. Subsequently, we apply these methods to a particular adjointable Markov operator between matrix algebras that fixes the diagonal.


2019, Vol. 13 (1), pp. 1790-1822
Author(s): Qian Qin, James P. Hobert, Kshitij Khare

Nonlinearity, 2018, Vol. 31 (5), pp. 1782-1806
Author(s): Lorenzo J Díaz, Edgar Matias

Author(s): M. Saburov

A linear Markov chain is a discrete-time stochastic process whose transitions depend only on the current state of the process. A nonlinear Markov chain is a discrete-time stochastic process whose transitions may depend on both the current state and the current distribution of the process. These processes arise naturally in the study of the limit behavior of a large number of weakly interacting Markov processes. Nonlinear Markov processes were introduced by McKean and have been extensively studied in the context of nonlinear Chapman-Kolmogorov equations as well as nonlinear Fokker-Planck equations. A nonlinear Markov chain over a finite state space can be identified with a continuous mapping (a nonlinear Markov operator) defined on the set of all probability distributions over the finite state space (which is a simplex) and with a family of transition matrices depending on the occupation probability distributions of the states. In particular, a linear Markov operator is a linear operator associated with a square stochastic matrix. It is well known that a linear Markov operator is a surjection of the simplex if and only if it is a bijection. The analogous problem was open for a nonlinear Markov operator associated with a stochastic hyper-matrix. We solve it in this paper. Namely, we show that a nonlinear Markov operator associated with a stochastic hyper-matrix is a surjection of the simplex if and only if it is a permutation of the Lotka-Volterra operator.
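
To fix notation (recalled from the quadratic stochastic operator literature under the usual conventions, not quoted from the paper): a stochastic hyper-matrix $(p_{ij,k})$ with $p_{ij,k} \geq 0$, $p_{ij,k} = p_{ji,k}$ and $\sum_{k} p_{ij,k} = 1$ defines a nonlinear Markov operator $V$ on the simplex $S^{m-1}$ by

$$
(Vx)_{k} = \sum_{i,j=1}^{m} p_{ij,k}\, x_{i} x_{j}, \qquad x = (x_{1},\dots,x_{m}) \in S^{m-1},
$$

while a Lotka-Volterra operator is one with $p_{ij,k} = 0$ whenever $k \notin \{i,j\}$, equivalently of the form

$$
(Vx)_{k} = x_{k}\Bigl(1 + \sum_{i=1}^{m} a_{ki} x_{i}\Bigr), \qquad a_{ki} = -a_{ik},\ |a_{ki}| \leq 1.
$$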

