Random sampling of lattice paths with constraints, via transportation

2010
DMTCS Proceedings vol. AM,... (Proceedings)
Author(s):  
Lucas Gerin

In this paper we build and analyze Markov chains for the random sampling of one-dimensional lattice paths subject to various constraints. These chains are easy to implement and sample an "almost" uniform path of length $n$ in $n^{3+\epsilon}$ steps. This bound makes use of a certain $\textit{contraction property}$ of the Markov chain, and is proved with an approach inspired by optimal transport.
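As an illustration of the kind of local-move chain described here, the sketch below runs a Metropolis-style chain on nonnegative $\pm 1$ paths (Dyck-type paths): propose swapping two adjacent steps and reject if the nonnegativity constraint is violated. The function name and the choice of constraint are illustrative; this is not the specific chain analyzed in the paper.

```python
import random

def sample_dyck_path(n, iters):
    """Approximately uniform sampling of a nonnegative +1/-1 path of length 2n
    (Dyck-type: starts and ends at height 0, never dips below 0).

    Generic adjacent-swap Metropolis chain, shown only to illustrate the
    local-move MCMC idea; it is NOT the specific chain analyzed in the paper.
    """
    path = [1] * n + [-1] * n          # start from the "staircase" path

    def is_valid(p):
        height = 0
        for step in p:
            height += step
            if height < 0:
                return False
        return height == 0

    for _ in range(iters):
        i = random.randrange(2 * n - 1)
        path[i], path[i + 1] = path[i + 1], path[i]      # propose a local swap
        if not is_valid(path):                           # reject: undo the swap
            path[i], path[i + 1] = path[i + 1], path[i]
    return path

print(sample_dyck_path(5, 10_000))
```

Since the swap proposal is symmetric and invalid states are simply rejected, the stationary distribution is uniform over the valid paths; bounds such as the paper's $n^{3+\epsilon}$ quantify how many steps a local-move chain of this kind needs to get close to uniform.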

2006
Vol. 8
Author(s):  
R. Balasubramanian ◽  
C.R. Subramanian

We study the problem of efficiently sampling $k$-colorings of bipartite graphs. We show that a class of Markov chains cannot be used as efficient samplers. Precisely, we show that, for any $k$, $6 \leq k \leq n^{1/3-\epsilon}$, $\epsilon > 0$ fixed, $\textit{almost every}$ bipartite graph on $n+n$ vertices is such that the mixing time of any Markov chain asymptotically uniform on its $k$-colorings is exponential in $n/k^2$ (if it is allowed to change the colors of only $O(n/k)$ vertices in a single transition step). This kind of exponential-time mixing is called $\textit{torpid mixing}$. As a corollary, we show that there are (for every $n$) bipartite graphs on $2n$ vertices with $\Delta(G) = \Omega(\ln n)$ such that for every $k$, $6 \leq k \leq \Delta/(6 \ln \Delta)$, each member of a large class of chains mixes torpidly. While, for fixed $k$, such negative results are implied by the work of CDF, our results are more general in that they allow $k$ to grow with $n$. We also show that these negative results hold true for $H$-colorings of bipartite graphs provided $H$ contains a spanning complete bipartite subgraph. We also present explicit examples of colorings ($k$-colorings or $H$-colorings) which admit 1-cautious chains that are ergodic and are shown to have exponential mixing time. While, for fixed $k$ or fixed $H$, such negative results are implied by the work of CDF, our results are more general in that they allow $k$ or $H$ to vary with $n$.
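For context, the sketch below shows single-site (heat-bath) Glauber dynamics on proper $k$-colorings, the prototypical chain that changes the color of one vertex per step. The function name, graph representation, and toy example are illustrative and not taken from the paper; the abstract's negative results concern chains of roughly this kind.

```python
import random

def glauber_step(adj, coloring, k):
    """One heat-bath (Glauber) update on proper k-colorings: pick a uniformly
    random vertex and recolor it uniformly among the colors not used by its
    neighbours. `adj` maps each vertex to its set of neighbours."""
    v = random.choice(list(adj))
    forbidden = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(k) if c not in forbidden]
    if allowed:                      # non-empty whenever k exceeds the max degree
        coloring[v] = random.choice(allowed)
    return coloring

# Toy run on a 4-cycle (bipartite) with k = 3 colors.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = {0: 0, 1: 1, 2: 0, 3: 1}
for _ in range(1000):
    coloring = glauber_step(adj, coloring, 3)
print(coloring)
```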


2012
DMTCS Proceedings vol. AQ,... (Proceedings)
Author(s):  
Sarah Miracle ◽  
Dana Randall ◽  
Amanda Pascoe Streib ◽  
Prasad Tetali

Given a planar triangulation, a 3-orientation is an orientation of the internal edges so that all internal vertices have out-degree three. Each 3-orientation gives rise to a unique edge coloring known as a $\textit{Schnyder wood}$ that has proven useful for various computing and combinatorics applications. We consider natural Markov chains for sampling uniformly from the set of 3-orientations. First, we study a "triangle-reversing'' chain on the space of 3-orientations of a fixed triangulation that reverses the orientation of the edges around a triangle in each move. We show that (i) when restricted to planar triangulations of maximum degree six, the Markov chain is rapidly mixing, and (ii) there exists a triangulation with high degree on which this Markov chain mixes slowly. Next, we consider an "edge-flipping'' chain on the larger state space consisting of 3-orientations of all planar triangulations on a fixed number of vertices. This chain was previously shown to connect the state space, and we prove that the chain is always rapidly mixing.
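A minimal sketch of the triangle-reversing move, assuming a 3-orientation is stored as a set of directed edges (a representation chosen here for illustration, not taken from the paper): reversing a directed triangle leaves every out-degree unchanged, so the result is again a 3-orientation.

```python
def reverse_triangle(orientation, triangle):
    """Attempt one 'triangle-reversing' move.

    `orientation` is a set of directed internal edges (u, v) meaning u -> v,
    and `triangle` is a triple of vertices bounding an internal face. If the
    three edges around the face form a directed cycle, reverse them; since
    reversing a directed triangle preserves every out-degree, the result is
    again a 3-orientation. Otherwise the move is rejected."""
    a, b, c = triangle
    for cycle in (((a, b), (b, c), (c, a)), ((a, c), (c, b), (b, a))):
        if all(edge in orientation for edge in cycle):
            for (u, v) in cycle:
                orientation.discard((u, v))
                orientation.add((v, u))
            return True    # move accepted
    return False           # not a directed triangle: reject
```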


1990
Vol 27 (03)
pp. 545-556
Author(s):  
S. Kalpazidou

The asymptotic behaviour of the sequence $(\mathcal{C}_n(\omega), w_{c,n}(\omega)/n)$ is studied, where $\mathcal{C}_n(\omega)$ is the class of all cycles $c$ occurring along the trajectory $\omega$ of a recurrent strictly stationary Markov chain $(\xi_n)$ until time $n$ and $w_{c,n}(\omega)$ is the number of occurrences of the cycle $c$ until time $n$. The previous sequence of sample weighted classes converges almost surely to a class of directed weighted cycles $(\mathcal{C}_\infty, w_c)$ which represents uniquely the chain $(\xi_n)$ as a circuit chain, and $w_c$ is given a probabilistic interpretation.
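To make the sample quantities concrete, here is a small Python sketch that extracts the cycles completed along a finite trajectory by successive loop erasure and counts their occurrences; dividing the counts by $n$ gives weights analogous to $w_{c,n}(\omega)/n$. The construction and names are illustrative and not code from the paper.

```python
from collections import Counter

def cycle_counts(trajectory):
    """Count the cycles completed along a trajectory by successive loop
    erasure: whenever the chain revisits a state still held on the stack,
    the states visited since that earlier occurrence close a cycle, which
    is recorded (from its base point) and then erased."""
    stack, counts = [], Counter()
    for state in trajectory:
        if state in stack:
            i = stack.index(state)
            counts[tuple(stack[i:])] += 1   # record the closed cycle
            del stack[i + 1:]               # erase the loop, keep the base point
        else:
            stack.append(state)
    return counts

print(cycle_counts([0, 1, 2, 0, 1, 0, 2, 0]))
```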


2021
Vol 53 (2)
pp. 335-369
Author(s):  
Christian Meier ◽  
Lingfei Li ◽  
Gongqiu Zhang

We develop a continuous-time Markov chain (CTMC) approximation of one-dimensional diffusions with sticky boundary or interior points. Approximate solutions to the action of the Feynman–Kac operator associated with a sticky diffusion and first passage probabilities are obtained using matrix exponentials. We show how to compute matrix exponentials efficiently and prove that a carefully designed scheme achieves second-order convergence. We also propose a scheme based on CTMC approximation for the simulation of sticky diffusions, for which the Euler scheme may completely fail. The efficiency of our method and its advantages over alternative approaches are illustrated in the context of bond pricing in a sticky short-rate model for a low-interest environment and option pricing under a geometric Brownian motion price model with a sticky interior point.
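As a rough illustration of the CTMC idea, the sketch below approximates a one-dimensional diffusion by a birth-death chain on a uniform grid and evaluates a Feynman–Kac expectation with a matrix exponential. It uses only a basic first-order upwind scheme with absorbing boundaries; the function name and toy parameters are assumptions, and it does not implement the paper's second-order or sticky-point construction.

```python
import numpy as np
from scipy.linalg import expm

def ctmc_expectation(mu, sigma2, grid, payoff, T, start_index):
    """Approximate E[payoff(X_T) | X_0 = grid[start_index]] for a 1-d diffusion
    with drift mu(x) and variance sigma2(x), via a birth-death CTMC on `grid`.

    Basic first-order (upwind) rates; boundary states are left absorbing."""
    n, h = len(grid), grid[1] - grid[0]
    Q = np.zeros((n, n))
    for i in range(1, n - 1):
        x = grid[i]
        up = sigma2(x) / (2 * h**2) + max(mu(x), 0.0) / h
        down = sigma2(x) / (2 * h**2) + max(-mu(x), 0.0) / h
        Q[i, i + 1], Q[i, i - 1], Q[i, i] = up, down, -(up + down)
    values = expm(T * Q) @ np.array([payoff(x) for x in grid])
    return values[start_index]

# Toy example: drift 0.05, variance 0.04, call-style payoff, horizon T = 1.
grid = np.linspace(0.0, 2.0, 101)
print(ctmc_expectation(lambda x: 0.05, lambda x: 0.04, grid,
                       lambda x: max(x - 1.0, 0.0), 1.0, 50))
```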


2021
Vol 0 (0)
Author(s):  
Nikolaos Halidias

In this note we study the probability of absorption and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set $A$ before reaching a set $B$, generalizing this result to a sequence of sets $A_1, A_2, \dots, A_k$.
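For comparison with the generating-function approach, here is a standard first-step-analysis sketch that computes the probability of reaching a set $A$ before a set $B$ by solving a linear system; the function name and the gambler's-ruin example are illustrative, not taken from the note.

```python
import numpy as np

def prob_A_before_B(P, A, B):
    """Probability, from each state, that a discrete-time Markov chain with
    transition matrix P reaches the set A before the set B.

    Standard first-step analysis: solve h(x) = sum_y P[x, y] h(y) for states
    outside A and B, with h = 1 on A and h = 0 on B."""
    n = P.shape[0]
    inside = [i for i in range(n) if i not in A and i not in B]
    M = np.eye(len(inside)) - P[np.ix_(inside, inside)]
    rhs = P[np.ix_(inside, sorted(A))].sum(axis=1)
    h = np.zeros(n)
    h[sorted(A)] = 1.0
    h[inside] = np.linalg.solve(M, rhs)
    return h

# Simple random walk on {0,...,4}, absorbed at 0 and 4 (gambler's ruin).
P = np.zeros((5, 5)); P[0, 0] = P[4, 4] = 1.0
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5
print(prob_A_before_B(P, A={4}, B={0}))   # expect [0, 0.25, 0.5, 0.75, 1]
```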


2021
Author(s):  
Andrea Marin ◽  
Carla Piazza ◽  
Sabina Rossi

In this paper, we deal with the lumpability approach to cope with the state space explosion problem inherent to the computation of the stationary performance indices of large stochastic models. The lumpability method is based on a state aggregation technique and applies to Markov chains exhibiting some structural regularity. Moreover, it allows one to efficiently compute the exact values of the stationary performance indices when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates in such a way that the new resulting Markov chain is lumpable. In this case, only upper and lower bounds on the performance indices can be derived. Here, we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, differently from the general definition of quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for the terms of the performance process algebra PEPA. Proportional bisimilarity induces a proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
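As background, the sketch below checks the classical (ordinary) lumpability condition for a CTMC generator and builds the lumped generator when the condition holds. The function name and the 4-state example are assumptions made for illustration; proportional lumpability, the paper's relaxation of this condition, is not implemented here.

```python
import numpy as np

def lump_ctmc(Q, partition, tol=1e-9):
    """Check ordinary (strong) lumpability of a CTMC generator Q with respect
    to `partition` (a list of lists of state indices); return the lumped
    generator when the condition holds, else None.

    Ordinary lumpability requires every state in a block to have the same
    aggregate rate into each other block."""
    m = len(partition)
    L = np.zeros((m, m))
    for a, A in enumerate(partition):
        for b, B in enumerate(partition):
            if a == b:
                continue
            rates = [Q[i, B].sum() for i in A]
            if max(rates) - min(rates) > tol:
                return None                      # condition violated
            L[a, b] = rates[0]
        L[a, a] = -L[a].sum()                    # diagonal balances the row
    return L

# 4-state example where blocks {0,1} and {2,3} can be aggregated.
Q = np.array([[-2.0,  1.0,  0.5,  0.5],
              [ 1.0, -2.0,  0.5,  0.5],
              [ 0.3,  0.7, -1.0,  0.0],
              [ 0.7,  0.3,  0.0, -1.0]])
print(lump_ctmc(Q, [[0, 1], [2, 3]]))   # expect [[-1, 1], [1, -1]]
```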

