Limit profiles for reversible Markov chains

Author(s):  
Evita Nestoridi ◽  
Sam Olesker-Taylor

Abstract In a recent breakthrough, Teyssier (Ann Probab 48(5):2323–2343, 2020) introduced a new method for approximating the distance from equilibrium of a random walk on a group. He used it to study the limit profile for the random transpositions card shuffle. His techniques were restricted to conjugacy-invariant random walks on groups; we derive similar approximation lemmas for random walks on homogeneous spaces and for general reversible Markov chains. We illustrate applications of these lemmas to some famous problems: the k-cycle shuffle, sharpening results of Hough (Probab Theory Relat Fields 165(1–2):447–482, 2016) and Berestycki, Schramm and Zeitouni (Ann Probab 39(5):1815–1843, 2011); the Ehrenfest urn diffusion with many urns, sharpening results of Ceccherini-Silberstein, Scarabotti and Tolli (J Math Sci 141(2):1182–1229, 2007); and a Gibbs sampler, a fundamental tool in statistical physics, with binomial prior and hypergeometric posterior, sharpening results of Diaconis, Khare and Saloff-Coste (Stat Sci 23(2):151–178, 2008).
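As a concrete, hedged illustration of the quantity being profiled (my own toy computation, not the paper's method), the sketch below computes, for the random transpositions shuffle on a small symmetric group, the exact total-variation distance from the uniform distribution after t shuffles; the limit profile describes how this curve behaves around the cutoff time as n grows.

```python
# Exact total-variation distance from uniform for the random transpositions
# shuffle on S_n, for a toy value of n (assumption: n = 5, so |S_5| = 120).
import itertools
from math import factorial

n = 5
perms = list(itertools.permutations(range(n)))
uniform = 1.0 / factorial(n)

# One shuffle step: pick positions i, j uniformly and independently and swap
# them; with probability 1/n we pick i = j and do nothing.
step = {}
for i in range(n):
    for j in range(n):
        s = list(range(n))
        s[i], s[j] = s[j], s[i]
        step[tuple(s)] = step.get(tuple(s), 0.0) + 1.0 / n**2

def compose(p, q):
    """Group multiplication: apply q, then p (both tuples of length n)."""
    return tuple(p[q[k]] for k in range(n))

dist = {tuple(range(n)): 1.0}          # start from the identity (a sorted deck)
for t in range(1, 16):
    new = {}
    for perm, prob in dist.items():
        for s, ps in step.items():
            key = compose(perm, s)
            new[key] = new.get(key, 0.0) + prob * ps
    dist = new
    tv = 0.5 * sum(abs(dist.get(p, 0.0) - uniform) for p in perms)
    print(f"t = {t:2d}   TV distance from uniform = {tv:.4f}")
```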

2021 ◽  
Vol 9 ◽  
Author(s):  
Werner Krauth

This review treats the mathematical and algorithmic foundations of non-reversible Markov chains in the context of event-chain Monte Carlo (ECMC), a continuous-time lifted Markov chain that employs the factorized Metropolis algorithm. It analyzes a number of model applications and then reviews the formulation as well as the performance of ECMC in key models in statistical physics. Finally, the review reports on an ongoing initiative to apply ECMC to the sampling problem in molecular simulation, i.e., to real-world models of peptides, proteins, and polymers in aqueous solution.
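As a minimal, hedged sketch of the kind of algorithm the review discusses (the 1D hard-rod case with straight event chains; all parameters and implementation details below are my own toy choices, not code from the review), one event chain moves a fixed total displacement to the right, lifting the motion from one rod to the next at each collision:

```python
# Straight event-chain Monte Carlo for 1D hard rods on a ring (a minimal
# sketch with toy parameters of my own choosing).
import random

N, L, sigma, ell = 10, 20.0, 1.0, 2.5     # rods, ring length, rod length, chain displacement
x = [i * L / N for i in range(N)]         # evenly spaced, overlap-free initial state

def chain(x):
    """One event chain: total displacement ell, carried to the right with lifting moves."""
    i = random.randrange(N)               # random starting (active) rod
    remaining = ell
    while remaining > 0.0:
        gap = (x[(i + 1) % N] - x[i]) % L - sigma   # free space to the right neighbour
        if remaining <= gap:
            x[i] = (x[i] + remaining) % L           # chain ends inside the gap
            remaining = 0.0
        else:
            x[i] = (x[i] + gap) % L                 # move to contact ...
            remaining -= gap
            i = (i + 1) % N                         # ... and lift to the neighbour

for _ in range(1000):
    chain(x)
gaps = [(x[(i + 1) % N] - x[i]) % L - sigma for i in range(N)]
print("free gaps after sampling:", [round(g, 2) for g in gaps])
```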


2021 ◽  
Vol 18 (2 Jul-Dec) ◽  
pp. 020202
Author(s):  
J. Valentín Escobar

Several important statistical tools and concepts are covered in upper-division undergraduate Statistical Physics courses, including random walks and the central limit theorem. However, students often miss their broad applicability, as well as the connection between these and other physical concepts. In this work, we apply a 1D random walk to study the evolution of the probability that a candidate will win an election given that she holds some lead over her opponent, and connect the result to the concepts of density of states and occupation probabilities. This paper is intended to serve as a guide for the Statistical Physics instructor who wishes to motivate students beyond the boundaries of the official syllabus.
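A hedged sketch of the kind of calculation described here (the model details are my own assumptions, not necessarily the authors' setup): the candidate's lead performs a 1D random walk over the remaining ballots, and we estimate the probability that it is still positive when the count ends.

```python
# Win probability from a 1D random walk (toy model of my own: each of the
# `remaining` ballots goes to either candidate with probability 1/2,
# independently, so the lead performs a simple random walk).
import random

def win_probability(lead, remaining, trials=20_000):
    wins = 0
    for _ in range(trials):
        margin = lead
        for _ in range(remaining):
            margin += random.choice((-1, 1))       # one step of the walk
        if margin > 0 or (margin == 0 and random.random() < 0.5):  # assumed coin-flip tie-break
            wins += 1
    return wins / trials

for lead in (2, 5, 10, 20):
    print(f"lead = {lead:2d}  ->  estimated win probability {win_probability(lead, 100):.3f}")
```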


2020 ◽  
Vol 12 (9) ◽  
pp. 151
Author(s):  
Alberto Baldi ◽  
Franco Bagnoli

Many games in which chance plays a role can be simulated as a random walk over a graph of possible configurations of board pieces, cards, dice or coins. The end of the game generally consists of the appearance of a predefined winning pattern; for random walks, this corresponds to an absorbing trap. The strategy of a player consists of betting on a given sequence, i.e., of placing a trap on the graph. In two-player games, the competition between strategies corresponds to the ability of the corresponding traps to capture the random walks generated by the aleatory components of the game. The concept of dominance transitivity of strategies implies an advantage for the first player, who can choose the strategy that, at least statistically, wins. However, in some games the second player is statistically advantaged, so these games are denoted “intransitive”. In an intransitive game, the second player can choose a location for his/her trap that captures more random walks than that of the first one. The transitivity concept can, therefore, be extended to generic random walks and, in general, to Markov chains. We analyze random walks on several kinds of networks (rings, scale-free, hierarchical and city-inspired) with many variations: traps can be partially absorbing, the walkers can be biased and the initial distribution can be arbitrary. We found that the transitivity concept can be quite useful for characterizing the combined properties of a graph and of the walkers.
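As a hedged illustration of the trap-competition idea (the graph, trap placement and bias below are my own toy choices), the sketch solves the standard hitting-probability linear system for a walk on a ring with two absorbing traps, which is the quantity being compared when two strategies compete:

```python
# Capture probabilities for two absorbing traps on a ring (a toy setting of
# my own choosing: n nodes, traps at a and b, walker possibly biased).
import numpy as np

def capture_prob(n, a, b, p_right=0.5):
    """Average over a uniform start (traps excluded) of P(absorbed at a before b)."""
    # h[x] = P(hit a before b | start at x), with h[a] = 1, h[b] = 0 and
    # h[x] = p_right * h[x+1] + (1 - p_right) * h[x-1] at the other nodes.
    A = np.eye(n)
    rhs = np.zeros(n)
    rhs[a] = 1.0
    for x in range(n):
        if x in (a, b):
            continue
        A[x, (x + 1) % n] = -p_right
        A[x, (x - 1) % n] = -(1.0 - p_right)
    h = np.linalg.solve(A, rhs)
    starts = [x for x in range(n) if x not in (a, b)]
    return h[starts].mean()

print(capture_prob(n=12, a=0, b=3))                 # unbiased walker
print(capture_prob(n=12, a=0, b=3, p_right=0.7))    # biased walker
```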


Mathematics ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 1148
Author(s):  
Jewgeni H. Dshalalow ◽  
Ryan T. White

In a classical random walk model, a walker moves through a deterministic d-dimensional integer lattice one step at a time, without drifting in any direction. In a more advanced setting, a walker randomly moves over a randomly configured (non-equidistant) lattice, jumping a random number of steps. In some further variants, there is only limited access to the walker's moves; that is, the walker's movements are not available in real time. Instead, observations are limited to random epochs, resulting in delayed information about the real-time position of the walker, its escape time, and its location outside a bounded subset of the real space. In this case we target the virtual first passage (or escape) time. Thus, unlike in standard random walk problems, rather than the boundary crossing itself, we deal with the walker's escape location, which can be arbitrarily distant from the boundary. In this paper, we give a short historical background on random walks, discuss various directions in the development of random walk theory, and survey most of our results obtained in the last 25–30 years, including the very recent ones dated 2020–21. Among different applications of such random walks, we discuss stock markets, stochastic networks, games, and queueing.
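A rough, hedged sketch of the delayed-observation setting (all modelling choices below, such as the lattice, the geometric observation gaps and the square box, are my own simplifications, not the authors' framework): the walker is only inspected at random epochs, and we record the first observed time and position outside a box, which may already be far beyond its boundary.

```python
# Delayed observation of a lattice walk (a toy simplification of my own:
# simple random walk on Z^2, geometric gaps between observation epochs,
# escape from the square [-box, box]^2).
import random

def observed_escape(box=10, p_obs=0.2):
    x = y = t = 0
    while True:
        gap = 1                                    # geometric(p_obs) gap until the next observation
        while random.random() > p_obs:
            gap += 1
        for _ in range(gap):                       # unobserved real-time moves
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy
        t += gap
        if max(abs(x), abs(y)) > box:              # first observed exit: the "virtual" escape data
            return t, (x, y)

print(observed_escape())
```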


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract In this note we study the probability of and the mean time for absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time for absorption when absorption is not certain, and in connecting it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time for absorption when absorption is not certain, and we give some applications concerning the random walk. Furthermore, we investigate the probability for a Markov chain to reach a set A before reaching a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
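For comparison, here is the standard finite-state illustration of these quantities (the textbook fundamental-matrix computation, not the paper's generating-function argument), applied to gambler's ruin with two absorbing barriers:

```python
# Textbook fundamental-matrix computation: gambler's ruin on {0, 1, ..., N}
# with absorbing barriers at 0 and N, win probability p per step
# (toy parameters of my own choosing).
import numpy as np

N, p = 6, 0.4
transient = list(range(1, N))                  # states 1, ..., N-1
Q = np.zeros((len(transient), len(transient))) # transient -> transient block
R = np.zeros((len(transient), 2))              # transient -> {0, N} block
for i, s in enumerate(transient):
    for s2, pr in ((s - 1, 1 - p), (s + 1, p)):
        if s2 == 0:
            R[i, 0] += pr
        elif s2 == N:
            R[i, 1] += pr
        else:
            Q[i, transient.index(s2)] += pr

F = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix (I - Q)^{-1}
print(F @ R)                                   # row s: P(absorbed at 0), P(absorbed at N)
print(F @ np.ones(len(transient)))             # mean time to absorption from each state
```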


2014 ◽  
Vol 46 (02) ◽  
pp. 400-421 ◽  
Author(s):  
Daniela Bertacchi ◽  
Fabio Zucca

In this paper we study the strong local survival property for discrete-time and continuous-time branching random walks. We study this property by means of an infinite-dimensional generating function G and a maximum principle which, we prove, is satisfied by every fixed point of G. We give results for the existence of a strong local survival regime and we prove that, unlike local and global survival, in continuous time, strong local survival is not a monotone property in the general case (though it is monotone if the branching random walk is quasitransitive). We provide an example of an irreducible branching random walk where the strong local property depends on the starting site of the process. By means of other counterexamples, we show that the existence of a pure global phase is not equivalent to nonamenability of the process, and that even an irreducible branching random walk with the same branching law at each site may exhibit non-strong local survival. Finally, we show that the generating function of an irreducible branching random walk can have more than two fixed points; this disproves a previously known result.
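As a schematic reminder of the object involved (standard branching-random-walk notation recalled from the general literature, not necessarily the authors' exact setup), the generating function and its fixed points can be written as follows:

```latex
% Schematic only: X is the set of sites, N_y the (random) number of children
% that a particle at x places at site y.
\[
  G(z \mid x) \;=\; \mathbb{E}_x\!\Big[\, \prod_{y \in X} z(y)^{N_y} \Big],
  \qquad z \in [0,1]^X .
\]
% The vector q = (q(x))_{x \in X} of global extinction probabilities is a
% fixed point of G, i.e. q = G(q); survival regimes are read off by locating
% and comparing such fixed points in [0,1]^X.
```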


1996 ◽  
Vol 33 (1) ◽  
pp. 122-126
Author(s):  
Torgny Lindvall ◽  
L. C. G. Rogers

The use of Mineka coupling is extended to a case with a continuous state space: an efficient coupling of random walks S and S' in ℝ can be made such that S' − S is virtually a one-dimensional simple random walk. This insight settles a zero-two law of ergodicity. One more proof of Blackwell's renewal theorem is also presented.
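For orientation, here is a hedged sketch of the classical integer-valued Mineka coupling (the note's contribution, the extension to a continuous state space, is not reproduced here; the increment law is an arbitrary toy choice): the two walks' increments are coupled so that their difference moves by −1, 0 or +1 with symmetric probabilities for +1 and −1, making S' − S a lazy one-dimensional simple random walk that eventually hits 0.

```python
# Classical Mineka coupling on the integers (a sketch under my own toy
# increment law mu; marginally, both walks have i.i.d. mu-increments, while
# their difference is a lazy symmetric simple random walk).
import random

mu = {0: 0.3, 1: 0.4, 2: 0.3}           # assumed increment distribution on Z

# Mass shifted between neighbouring atoms of mu (gives the +-1 moves of the difference)
pairs = [(k, 0.5 * min(mu.get(k, 0.0), mu.get(k + 1, 0.0))) for k in mu]
leftover = {k: mu[k]
            - 0.5 * min(mu.get(k - 1, 0.0), mu[k])
            - 0.5 * min(mu[k], mu.get(k + 1, 0.0)) for k in mu}

def coupled_increments():
    """Return (xi, xi') with the same marginal law mu and xi' - xi in {-1, 0, +1}."""
    u = random.random()
    for k, w in pairs:
        if u < w:
            return (k, k + 1)           # difference +1
        u -= w
        if u < w:
            return (k + 1, k)           # difference -1
        u -= w
    for k, w in leftover.items():       # remaining mass: equal increments
        if u < w:
            return (k, k)
        u -= w
    return (k, k)                       # floating-point safety net

S, Sp, steps = 0, 4, 0                  # the two walks start 4 apart
while S != Sp and steps < 10**6:        # the difference walk is recurrent, so they meet a.s.
    xi, xip = coupled_increments()
    S, Sp, steps = S + xi, Sp + xip, steps + 1
print("walks met at", S, "after", steps, "steps")
```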

