transition kernel
Recently Published Documents

TOTAL DOCUMENTS: 26 (five years: 4)
H-INDEX: 5 (five years: 0)

Mathematics, 2021, Vol. 9 (22), p. 2845
Author(s): Sandra Fortini, Sonia Petrone, Hristo Sariev

Measure-valued Pólya urn processes (MVPPs) are Markov chains with an additive structure that extend the generalized $k$-color Pólya urn model to a continuum of possible colors. We prove that, for any MVPP $(\mu_n)_{n\ge 0}$ on a Polish space $\mathbb{X}$, the normalized sequence $(\mu_n/\mu_n(\mathbb{X}))_{n\ge 0}$ agrees with the marginal predictive distributions of some random process $(X_n)_{n\ge 1}$. Moreover, $\mu_n = \mu_{n-1} + R_{X_n}$, $n\ge 1$, where $x\mapsto R_x$ is a random transition kernel on $\mathbb{X}$; thus, if $\mu_{n-1}$ represents the contents of an urn, then $X_n$ denotes the color of the ball drawn with distribution $\mu_{n-1}/\mu_{n-1}(\mathbb{X})$, and $R_{X_n}$ the subsequent reinforcement. In the case $R_{X_n} = W_n\,\delta_{X_n}$, for some non-negative random weights $W_1, W_2, \ldots$, the process $(X_n)_{n\ge 1}$ is better understood as a randomly reinforced extension of Blackwell and MacQueen's Pólya sequence. We study the asymptotic properties of the predictive distributions and the empirical frequencies of $(X_n)_{n\ge 1}$ under different assumptions on the weights. We also investigate a generalization of the above models via a randomization of the law of the reinforcement.
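The special case $R_{X_n} = W_n\,\delta_{X_n}$ can be simulated directly: the urn is a diffuse base measure plus one weighted atom per previous draw. Below is a minimal pure-Python sketch of that scheme; the names `base_sampler`, `base_mass`, and `weight_sampler` are illustrative choices, not notation from the paper.

```python
import random

def reinforced_polya_sequence(n_steps, base_sampler, base_mass=1.0,
                              weight_sampler=lambda: 1.0, seed=0):
    """Simulate a randomly reinforced Polya sequence (case R_{X_n} = W_n * delta_{X_n}).

    The urn holds a diffuse base measure of total mass `base_mass` plus one
    weighted atom per previous draw; each draw is proportional to these masses.
    """
    rng = random.Random(seed)
    atoms, weights, draws = [], [], []
    for _ in range(n_steps):
        u = rng.uniform(0.0, base_mass + sum(weights))
        if u < base_mass or not atoms:   # fresh color from the base measure
            x = base_sampler(rng)
        else:                            # existing atom, proportional to its weight
            u -= base_mass
            for x_i, w_i in zip(atoms, weights):
                u -= w_i
                if u <= 0.0:
                    x = x_i
                    break
            else:
                x = atoms[-1]            # guard against float rounding
        atoms.append(x)
        weights.append(weight_sampler()) # random reinforcement W_n
        draws.append(x)
    return draws
```

With constant weights equal to 1 and `base_mass` playing the role of the concentration parameter, this reduces to the classical Blackwell–MacQueen urn scheme for the Dirichlet process.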


2020, Vol. 29 (4), pp. 508-536
Author(s): Michael C. H. Choi, Pierre Patie

Abstract: In this paper we develop an in-depth analysis of non-reversible Markov chains on a denumerable state space from a similarity orbit perspective. In particular, we study the class of Markov chains whose transition kernel is in the similarity orbit of a normal transition kernel, such as that of birth–death chains or reversible Markov chains. We start by identifying a set of sufficient conditions for a Markov chain to belong to the similarity orbit of a birth–death chain. As by-products, we obtain a spectral representation in terms of non-self-adjoint resolutions of identity in the sense of Dunford [21] and offer a detailed analysis of the convergence rate, separation cutoff and $L^2$-cutoff of this class of non-reversible Markov chains. We also look into the problem of estimating integral functionals from discrete observations for this class. In the last part of this paper we investigate a particular similarity orbit of reversible Markov kernels, which we call the pure birth orbit, and analyse various possibly non-reversible variants of classical birth–death processes in this orbit.
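A concrete instance of the similarity-orbit idea: any reversible kernel $P$ with stationary distribution $\pi$ is similar to a symmetric (hence normal) matrix via conjugation by $D_\pi^{1/2}$. The sketch below illustrates only this classical reversible case, not the paper's construction for non-reversible chains.

```python
import math

def symmetrize(P, pi):
    """Conjugate a reversible kernel P by diag(pi)^(1/2):
    S[i][j] = sqrt(pi[i]) * P[i][j] / sqrt(pi[j]).
    Reversibility (pi[i]*P[i][j] == pi[j]*P[j][i]) makes S symmetric."""
    n = len(P)
    return [[math.sqrt(pi[i]) * P[i][j] / math.sqrt(pi[j]) for j in range(n)]
            for i in range(n)]
```

For example, P = [[0.7, 0.3], [0.2, 0.8]] is reversible with respect to pi = (0.4, 0.6), and `symmetrize(P, pi)` returns a symmetric matrix with the same spectrum as P.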


Author(s): Yang Zhao, Jianyi Zhang, Changyou Chen

Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models. While tremendous progress has been made with scalable Bayesian samplers such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the generated samples are typically highly correlated, and the sample-generation process is often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov (transition) kernel. High-quality samples can be generated efficiently by direct forward passes through the learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information about existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, our framework learns to use current samples, either from the generator or from pre-provided training data, to update the generator so that the generated samples progressively approach a target distribution; hence the name self-learning. Experiments on both synthetic and real datasets verify the advantages of our framework, which outperforms related methods in both sampling efficiency and sample quality.
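For reference, the SG-MCMC baseline named above can be as simple as stochastic gradient Langevin dynamics; its transition kernel is the kind of Markov kernel the proposed generator would learn to mimic. A minimal 1-D, full-gradient sketch follows (this is the standard Langevin update, not the paper's self-adversarial method).

```python
import math
import random

def sgld_sample(grad_log_p, x0, step, n_steps, seed=0):
    """Langevin-dynamics transition kernel, iterated n_steps times:
    x <- x + (step/2) * grad_log_p(x) + sqrt(step) * N(0, 1)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x += 0.5 * step * grad_log_p(x) + math.sqrt(step) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples
```

For a standard normal target, grad_log_p(x) = -x; successive samples are visibly autocorrelated, which is precisely the inefficiency the abstract targets.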


2016, Vol. 48 (2), pp. 369-391
Author(s): Jérôme Casse

Abstract: This paper is devoted to probabilistic cellular automata (PCAs) on $\mathbb{N}$, $\mathbb{Z}$ or $\mathbb{Z}/n\mathbb{Z}$ whose transitions depend on two neighbors, with a general alphabet $E$ (finite or infinite, discrete or not). We study the following question: under which conditions does a PCA possess a Markov chain as an invariant distribution? Previous results in the literature give some conditions on the transition matrix (for positive-rate PCAs) when the alphabet $E$ is finite. Here we obtain conditions on the transition kernel of a PCA with a general alphabet $E$. In particular, we show that the existence of an invariant Markov chain is equivalent to the existence of a solution to a cubic integral equation. One of the difficulties in passing from a finite alphabet to a general alphabet comes from problems of measurability, and a large part of this work is devoted to clarifying these issues.
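For a finite alphabet, one synchronous step of such a two-neighbor PCA on $\mathbb{Z}/n\mathbb{Z}$ is easy to write down. The dict-based `kernel` representation below is an illustrative encoding of the transition kernel, not notation from the paper.

```python
import random

def pca_step(config, kernel, rng):
    """One synchronous update of a two-neighbor PCA on Z/nZ.
    kernel[(a, b)] maps a neighborhood (config[i], config[i+1]) to a
    dict {new_state: probability}; all cells update independently."""
    n = len(config)
    new = []
    for i in range(n):
        dist = kernel[(config[i], config[(i + 1) % n])]
        states = list(dist)
        new.append(rng.choices(states, weights=[dist[s] for s in states])[0])
    return new
```

The invariance question of the abstract asks when the law of `config` (as a Markov chain in the spatial index) is preserved by repeated application of `pca_step`.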


2014, Vol. 46 (4), pp. 1036-1058
Author(s): Loïc Hervé, James Ledoux

Let $\{X_n\}_{n\in\mathbb{N}}$ be a Markov chain on a measurable space $\mathbb{X}$ with transition kernel $P$, and let $V:\mathbb{X}\to[1,+\infty)$. The Markov kernel $P$ is here considered as a linear bounded operator on the weighted-supremum space $\mathcal{B}_V$ associated with $V$. Then the combination of quasicompactness arguments with a precise analysis of the eigenelements of $P$ allows us to estimate the geometric rate of convergence $\rho_V(P)$ of $\{X_n\}_{n\in\mathbb{N}}$ to its invariant probability measure in operator norm on $\mathcal{B}_V$. A general procedure to compute $\rho_V(P)$ for discrete Markov random walks with identically distributed bounded increments is specified.
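In the simplest finite setting, the geometric rate of convergence is the second-largest eigenvalue modulus of $P$. A toy two-state check (in ordinary total-variation distance, not the paper's weighted-supremum norm):

```python
def two_state_rate(p, q):
    """For P = [[1-p, p], [q, 1-q]], the eigenvalues are 1 and 1-p-q,
    so the geometric convergence rate is |1 - p - q|."""
    return abs(1.0 - p - q)

def tv_distance_after(n, p, q, start=(1.0, 0.0)):
    """Total-variation distance to stationarity after n steps of the chain."""
    pi = (q / (p + q), p / (p + q))
    mu = start
    for _ in range(n):
        mu = (mu[0] * (1 - p) + mu[1] * q, mu[0] * p + mu[1] * (1 - q))
    return 0.5 * (abs(mu[0] - pi[0]) + abs(mu[1] - pi[1]))
```

For a two-state chain the decay is exactly geometric: the distance after n steps equals the initial distance times rate^n.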



2013, Vol. 45 (1), pp. 186-213
Author(s): Sidney I. Resnick, David Zeber

An asymptotic model for the extreme behavior of certain Markov chains is the ‘tail chain’. Generally taking the form of a multiplicative random walk, it is useful in deriving extremal characteristics, such as point process limits. We place this model in a more general context, formulated in terms of extreme value theory for transition kernels, and extend it by formalizing the distinction between extreme and nonextreme states. We make the link between the update function and transition kernel forms considered in previous work, and we show that the tail chain model leads to a multivariate regular variation property of the finite-dimensional distributions under assumptions on the marginal tails alone.
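The multiplicative-random-walk form of the tail chain is straightforward to simulate; `mult_sampler` (a hypothetical name) draws the i.i.d. multipliers.

```python
import random

def tail_chain(n_steps, mult_sampler, y0=1.0, seed=0):
    """Tail chain as a multiplicative random walk: Y_{k+1} = A_{k+1} * Y_k,
    modelling the extremal behavior of a Markov chain started at a high level."""
    rng = random.Random(seed)
    y = y0
    path = [y]
    for _ in range(n_steps):
        y *= mult_sampler(rng)
        path.append(y)
    return path
```

Whether the path drifts to 0 (escape from the extreme region) or stays large depends on the law of the multipliers, which mirrors the extreme/nonextreme state distinction formalized in the paper.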



