A random walk on the rook placements on a Ferrers board

10.37236/1284 ◽  
1996 ◽  
Vol 3 (2) ◽  
Author(s):  
Phil Hanlon

Let $B$ be a Ferrers board, i.e., the board obtained by removing the Ferrers diagram of a partition from the top right corner of an $n\times n$ chessboard. We consider a Markov chain on the set $R$ of rook placements on $B$ in which you can move from one placement to any other legal placement obtained by switching the columns in which two rooks sit. We give sharp estimates for the rate of convergence of this Markov chain using spectral methods. As part of this analysis we give a complete combinatorial description of the eigenvalues of the transition matrix for this chain. We show that two extreme cases of this Markov chain correspond to random walks on groups which are analyzed in the literature. Our estimates for rates of convergence interpolate between those two results.
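One of the two extreme cases is easy to check numerically: when the removed partition is empty, placements of $n$ non-attacking rooks on the full board are permutations, and switching the columns of two rooks is exactly the random-transposition walk on $S_n$. A minimal sketch for $n = 3$ (for this walk the spectrum comes out as $1$, $-1$, and $0$ with multiplicity 4):

```python
import itertools
import numpy as np

# Extreme case: nothing removed from the n x n board, so placements of n
# non-attacking rooks are permutations; the column-switch chain is the
# random-transposition walk on S_n (here n = 3).
n = 3
placements = list(itertools.permutations(range(n)))   # rook in row i, column p[i]
index = {p: i for i, p in enumerate(placements)}
pairs = list(itertools.combinations(range(n), 2))     # which two rooks to switch

P = np.zeros((len(placements), len(placements)))
for p in placements:
    for a, b in pairs:
        q = list(p)
        q[a], q[b] = q[b], q[a]                       # switch the rooks' columns
        P[index[p], index[tuple(q)]] += 1.0 / len(pairs)

eigs = np.linalg.eigvalsh(P)   # P is symmetric; eigenvalues in ascending order
```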

2020 ◽  
Vol 02 (01) ◽  
pp. 2050004
Author(s):  
Je-Young Choi

Several methods have been developed in order to solve electrical circuits consisting of resistors and an ideal voltage source. A correspondence with random walks avoids difficulties caused by choosing directions of currents and signs in potential differences. Starting from the random-walk method, we introduce a reduced transition matrix of the associated Markov chain whose dominant eigenvector alone determines the electric potentials at all nodes of the circuit and the equivalent resistance between the nodes connected to the terminals of the voltage source. Various means to find the eigenvector are developed from its definition. A few example circuits are solved in order to show the usefulness of the present approach.
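The starting point, the correspondence between resistor networks and random walks, can be sketched directly. This is not the paper's reduced-transition-matrix construction, just the standard harmonic-potential formulation it builds on, applied to a hypothetical three-node circuit:

```python
import numpy as np

# Hypothetical circuit: node 0 at 1 V, node 2 at 0 V (the source terminals),
# resistors R01 = 1 and R12 = 2 in series through node 1, R02 = 3 in parallel.
G = np.zeros((3, 3))                        # conductances g_ij = 1/R_ij
for i, j, R in [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]:
    G[i, j] = G[j, i] = 1.0 / R

# Random-walk transition probabilities: p_ij = g_ij / sum_k g_ik.
P = G / G.sum(axis=1, keepdims=True)

# The potential is harmonic at interior nodes: V_i = sum_j p_ij V_j, with V
# fixed at the boundary nodes 0 and 2.  Only node 1 is interior here.
V = np.array([1.0, 0.0, 0.0])
V[1] = P[1] @ V / (1 - P[1, 1])   # P[1,1] = 0, so V_1 = p_10 V_0 + p_12 V_2

# Equivalent resistance from the total current leaving the 1 V terminal.
I = sum(G[0, j] * (V[0] - V[j]) for j in (1, 2))
R_eq = (V[0] - V[2]) / I          # series 1+2 = 3, in parallel with 3: 1.5
```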


2014 ◽  
Vol 14 (3) ◽  
pp. 451-491
Author(s):  
Gilles Lebeau ◽  
Laurent Michel

We study the spectral theory of a reversible Markov chain given by a random walk. This random walk depends on a parameter $h\in ]0,h_{0}]$ which is roughly the size of each step of the walk. We prove uniform bounds with respect to $h$ on the rate of convergence to equilibrium, and the convergence when $h\rightarrow 0$ to the associated hypoelliptic diffusion.


2011 ◽  
Vol 43 (3) ◽  
pp. 782-813 ◽  
Author(s):  
M. Jara ◽  
T. Komorowski

In this paper we consider the scaled limit of a continuous-time random walk (CTRW) based on a Markov chain {X_n, n ≥ 0} and two observables, τ(∙) and V(∙), corresponding to the renewal times and jump sizes. Assuming that these observables belong to the domains of attraction of some stable laws, we give sufficient conditions on the chain that guarantee the existence of the scaled limits for CTRWs. An application of the results to a process that arises in quantum transport theory is provided. The results obtained in this paper generalize earlier results contained in Becker-Kern, Meerschaert and Scheffler (2004) and Meerschaert and Scheffler (2008), and the recent results of Henry and Straka (2011) and Jurlewicz, Kern, Meerschaert and Scheffler (2010), where {X_n, n ≥ 0} is a sequence of independent and identically distributed random variables.
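The CTRW construction itself is easy to sketch: a sample path built from a hypothetical two-state chain, with heavy-tailed Pareto renewal times playing the role of τ(X_n) and ±1 jumps playing the role of V(X_n):

```python
import random
random.seed(0)

# Underlying two-state Markov chain {X_n}; the observables are Pareto(alpha)
# renewal times (alpha < 1, so no finite mean) and state-dependent +/-1 jumps.
ALPHA = 0.8
STATES = (0, 1)
TRANS = {0: (0.7, 0.3), 1: (0.4, 0.6)}   # hypothetical transition rows

def ctrw_path(n_steps):
    """Return the renewal times and positions of a CTRW sample path."""
    x, t, pos = 0, 0.0, 0.0
    times, positions = [0.0], [0.0]
    for _ in range(n_steps):
        tau = random.paretovariate(ALPHA)     # renewal time tau(X_n) >= 1
        v = 1.0 if x == 0 else -1.0           # jump size V(X_n)
        t += tau
        pos += v
        times.append(t)
        positions.append(pos)
        x = random.choices(STATES, weights=TRANS[x])[0]
    return times, positions

times, positions = ctrw_path(1000)
```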


2010 ◽  
Vol 10 (5&6) ◽  
pp. 509-524
Author(s):  
M. Mc Gettrick

We investigate the quantum versions of a one-dimensional random walk whose corresponding Markov chain is of order 2. This corresponds to the walk having a memory of one previous step. We derive the amplitudes and probabilities for these walks, and point out how they differ from both classical random walks and quantum walks without memory.
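On the classical side, an order-2 chain is handled by lifting it to an order-1 chain on (site, previous step) pairs. A sketch on a cycle, with a hypothetical persistence parameter p for repeating the last step:

```python
import numpy as np

N = 6          # walk on a cycle of N sites (hypothetical example)
p = 0.7        # probability of repeating the previous step (the "memory")

# Lift the order-2 chain to an order-1 chain on states (site, last step),
# last step in {+1, -1}; state index = 2*site + (0 if last step +1 else 1).
P = np.zeros((2 * N, 2 * N))
for s in range(N):
    for d, last_dir in enumerate((+1, -1)):
        for e, new_dir in enumerate((+1, -1)):
            prob = p if new_dir == last_dir else 1 - p
            P[2 * s + d, 2 * ((s + new_dir) % N) + e] += prob

# The lifted matrix is doubly stochastic, so the uniform distribution over
# the 2N lifted states is stationary.
pi = np.full(2 * N, 1.0 / (2 * N))
```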


2010 ◽  
Vol 10 (5&6) ◽  
pp. 420-434
Author(s):  
C.-F. Chiang ◽  
D. Nagaj ◽  
P. Wocjan

We present an efficient general method for realizing a quantum walk operator corresponding to an arbitrary sparse classical random walk. Our approach is based on Grover and Rudolph's method for preparing coherent versions of efficiently integrable probability distributions \cite{GroverRudolph}. This method is intended for use in quantum walk algorithms with polynomial speedups, whose complexity is usually measured in terms of how many times we have to apply a step of a quantum walk \cite{Szegedy}, compared to the number of necessary classical Markov chain steps. We consider a finer notion of complexity including the number of elementary gates it takes to implement each step of the quantum walk with some desired accuracy. The difference in complexity for various implementation approaches is that our method scales linearly in the sparsity parameter and poly-logarithmically with the inverse of the desired precision. The best previously known general methods either scale quadratically in the sparsity parameter, or polynomially in the inverse precision. Our approach is especially relevant for implementing quantum walks corresponding to classical random walks like those used in the classical algorithms for approximating permanents \cite{Vigoda, Vazirani} and sampling from binary contingency tables \cite{Stefankovi}. In those algorithms, the sparsity parameter grows with the problem size, while maintaining high precision is required.
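The walk operator whose applications are being counted can be sketched numerically for a small dense example, ignoring the sparsity machinery. This assumes the standard Szegedy construction W = S(2AA† − I); Szegedy's spectral theorem then says the eigenvalues on the walk's invariant subspace are e^{±iθ} with cos θ the singular values of D_jk = sqrt(P_jk P_kj), which the sketch checks:

```python
import numpy as np

# Hypothetical 3-state stochastic matrix (the classical random walk).
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.4, 0.3],
              [0.2, 0.2, 0.6]])
n = P.shape[0]

# Isometry A|j> = |j> (x) sum_k sqrt(P_jk)|k> on the doubled register.
A = np.zeros((n * n, n))
for j in range(n):
    A[j * n:(j + 1) * n, j] = np.sqrt(P[j])

S = np.zeros((n * n, n * n))                  # swap of the two registers
for j in range(n):
    for k in range(n):
        S[k * n + j, j * n + k] = 1.0

W = S @ (2 * A @ A.T - np.eye(n * n))         # one step of the quantum walk

# Discriminant matrix and its singular values cos(theta_j).
D = np.sqrt(P * P.T)
sigma = np.linalg.svd(D, compute_uv=False)
eigs = np.linalg.eigvals(W)
```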


2019 ◽  
Vol 19 (02) ◽  
pp. 2050023 ◽  
Author(s):  
Paula Cadavid ◽  
Mary Luz Rodiño Montoya ◽  
Pablo M. Rodriguez

Evolution algebras are a new type of non-associative algebras inspired by biological phenomena. A special class of such algebras, called Markov evolution algebras, is strongly related to the theory of discrete-time Markov chains. The benefit of this relation is that many results coming from probability theory may be stated in the context of abstract algebra. In this paper, we explore the connection between evolution algebras, random walks and graphs. More precisely, we study the relationships between the evolution algebra induced by a random walk on a graph and the evolution algebra determined by the same graph. Given that any Markov chain may be seen as a random walk on a graph, we believe that our results may open a new landscape in the study of Markov evolution algebras.
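A minimal sketch of the evolution algebra induced by a random walk: basis elements e_i with e_i·e_i = Σ_j p_ij e_j and e_i·e_j = 0 for i ≠ j. The product is commutative by construction but, as the example checks, not associative:

```python
import numpy as np

# Transition matrix of the simple random walk on the path graph 0 - 1 - 2.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

def mul(a, b):
    """Evolution-algebra product: e_i e_i = sum_j P_ij e_j, e_i e_j = 0 (i != j).
    For a = sum a_i e_i and b = sum b_i e_i this collapses to sum_i a_i b_i e_i^2."""
    return (a * b) @ P

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([1.0, 0.0, 1.0])

commutative = np.allclose(mul(a, b), mul(b, a))                   # always holds
associative = np.allclose(mul(mul(a, b), c), mul(a, mul(b, c)))   # fails here
```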


2009 ◽  
Vol 2009 ◽  
pp. 1-4 ◽  
Author(s):  
José Luis Palacios

Using classical arguments we derive a formula for the moments of hitting times for an ergodic Markov chain. We apply this formula to the case of simple random walk on trees and show, with an elementary electric argument, that all the moments are natural numbers.
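The classical first-step argument behind such formulas is easy to reproduce: with Q the transition matrix restricted to the transient states, the first moments satisfy m1 = (I − Q)⁻¹1, and since T = 1 + T′, the second moments satisfy m2 = (I − Q)⁻¹(1 + 2Q m1). A sketch for simple random walk on the path 0–1–2 (a tree), where all moments indeed come out as natural numbers:

```python
import numpy as np

# Simple random walk on the path 0 - 1 - 2; T = hitting time of vertex 2.
# Q restricts the transition matrix to the transient states {0, 1}.
Q = np.array([[0.0, 1.0],
              [0.5, 0.0]])
I = np.eye(2)
ones = np.ones(2)

m1 = np.linalg.solve(I - Q, ones)               # E_0[T] = 4, E_1[T] = 3
m2 = np.linalg.solve(I - Q, ones + 2 * Q @ m1)  # E_0[T^2] = 24, E_1[T^2] = 17
```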


2001 ◽  
Vol 38 (1) ◽  
pp. 262-269 ◽  
Author(s):  
Geoffrey Pritchard ◽  
David J. Scott

We consider the problem of estimating the rate of convergence to stationarity of a continuous-time, finite-state Markov chain. This is done via an estimator of the second-largest eigenvalue of the transition matrix, which in turn is based on conventional inference in a parametric model. We obtain a limiting distribution for the eigenvalue estimator. As an example we treat an M/M/c/c queue, and show that the method allows us to estimate the time to stationarity τ within a time comparable to τ.
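In the continuous-time setting the quantity being estimated is the spectral gap of the generator (minus its second-largest eigenvalue; the largest is 0). A sketch computing it exactly for a small M/M/c/c queue with hypothetical rates — the statistical estimation itself is not reproduced here:

```python
import numpy as np

# Generator of an M/M/c/c queue: a birth-death chain on {0, ..., c} with
# arrival rate lam and per-server service rate mu (hypothetical parameters).
lam, mu, c = 2.0, 1.0, 3

Q = np.zeros((c + 1, c + 1))
for k in range(c):
    Q[k, k + 1] = lam               # arrival
    Q[k + 1, k] = (k + 1) * mu      # departure from k+1 busy servers
np.fill_diagonal(Q, -Q.sum(axis=1))

# Rate of convergence to stationarity is governed by the spectral gap.
eigs = np.sort(np.linalg.eigvals(Q).real)[::-1]   # descending; eigs[0] = 0
gap = -eigs[1]
tau = 1.0 / gap                     # time scale to stationarity
```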


2019 ◽  
Vol 29 (03) ◽  
pp. 561-580
Author(s):  
Svetlana Poznanović ◽  
Kara Stasikelis

The Tsetlin library is a very well-studied model for the way an arrangement of books on a library shelf evolves over time. One of the most interesting properties of this Markov chain is that its spectrum can be computed exactly and that the eigenvalues are linear in the transition probabilities. In this paper, we consider a generalization which can be interpreted as a self-organizing library in which the arrangements of books on each shelf are restricted to be linear extensions of a fixed poset. The moves on the books are given by the extended promotion operators of Ayyer, Klee and Schilling while the shelves, bookcases, etc. evolve according to the move-to-back moves as in the self-organizing library of Björner. We show that the eigenvalues of the transition matrix of this Markov chain are [Formula: see text] integer combinations of the transition probabilities if the posets that prescribe the restrictions on the book arrangements are rooted forests or, more generally, if they consist of ordinal sums of a rooted forest and so-called ladders. For some of the results, we show that the monoids generated by the moves are either [Formula: see text]-trivial or, more generally, in [Formula: see text] and then we use the theory of left random walks on the minimal ideal of such monoids to find the eigenvalues. Moreover, in order to give a combinatorial description of the eigenvalues in the more general case, we relate the eigenvalues when the restrictions on the book arrangements change only by allowing for one additional transposition of two fixed books.
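The classical special case (a single shelf, no poset restriction) is the Tsetlin library itself, whose known spectral description gives one eigenvalue Σ_{i∈S} w_i for each subset S of books, with multiplicity the derangement number of the complement. A numerical check for three books:

```python
import itertools
import numpy as np

# Move-to-front (Tsetlin library) chain on 3 books with access weights w.
w = np.array([0.5, 0.3, 0.2])
n = len(w)
states = list(itertools.permutations(range(n)))   # shelf orders, front first
index = {s: i for i, s in enumerate(states)}

P = np.zeros((len(states), len(states)))
for s in states:
    for b in range(n):
        t = (b,) + tuple(x for x in s if x != b)  # move book b to the front
        P[index[s], index[t]] += w[b]

# Expected spectrum: subset sums of w, each with derangement multiplicity:
# full set -> 1 (x1), singletons -> 0.5, 0.3, 0.2 (x1 each), empty -> 0 (x2),
# pairs -> multiplicity 0 (they do not appear).
eigs = np.sort(np.linalg.eigvals(P).real)
```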


2008 ◽  
Vol 40 (01) ◽  
pp. 206-228 ◽  
Author(s):  
Alex Iksanov ◽  
Martin Möhle

Let S_0 := 0 and S_k := ξ_1 + ··· + ξ_k for k ∈ ℕ := {1, 2, …}, where {ξ_k : k ∈ ℕ} are independent copies of a random variable ξ with values in ℕ and distribution p_k := P{ξ = k}, k ∈ ℕ. We interpret the random walk {S_k : k = 0, 1, 2, …} as a particle jumping to the right through integer positions. Fix n ∈ ℕ and modify the process by requiring that the particle is bumped back to its current state each time a jump would bring the particle to a state larger than or equal to n. This constraint defines an increasing Markov chain {R_k(n) : k = 0, 1, 2, …} which never reaches the state n. We call this process a random walk with barrier n. Let M_n denote the number of jumps of the random walk with barrier n. This paper focuses on the asymptotics of M_n as n tends to ∞. A key observation is that, under p_1 > 0, {M_n : n ∈ ℕ} satisfies the distributional recursion M_1 = 0 and M_n = 1 + M_{n−I_n} in distribution for n = 2, 3, …, where I_n is independent of M_2, …, M_{n−1} with distribution P{I_n = k} = p_k/(p_1 + ··· + p_{n−1}), k ∈ {1, …, n − 1}. Depending on the tail behavior of the distribution of ξ, several scalings for M_n and corresponding limiting distributions come into play, including stable distributions and distributions of exponential integrals of subordinators. The methods used in this paper are mainly probabilistic. The key tool is to compare (couple) the number of jumps, M_n, with the first time, N_n, when the unrestricted random walk {S_k : k = 0, 1, …} reaches a state larger than or equal to n. The results are applied to derive the asymptotics of the number of collision events (that take place until there is just a single block) for β(a, b)-coalescent processes with parameters 0 < a < 2 and b = 1.
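The first-step structure can be cross-checked numerically. The sketch below assumes the distributional recursion M_n = 1 + M_{n−I_n} suggested by first-step analysis (the first accepted jump has size I_n, after which the remaining process is a barrier-(n − I_n) walk) and a hypothetical jump distribution with finite support, and compares the mean it produces with a direct computation on the barrier walk itself:

```python
# Hypothetical jump distribution p_k = P{xi = k}, finite support, p_1 > 0.
p = {1: 0.5, 2: 0.3, 3: 0.2}

def mean_jumps(n):
    """E[M_n] via the distributional recursion M_n = 1 + M_{n - I_n},
    with P{I_n = k} = p_k / (p_1 + ... + p_{n-1})."""
    m = [0.0, 0.0]                   # m[1] = E[M_1] = 0
    for j in range(2, n + 1):
        z = sum(p.get(k, 0.0) for k in range(1, j))
        m.append(1.0 + sum(p.get(k, 0.0) / z * m[j - k] for k in range(1, j)))
    return m[n]

def mean_jumps_direct(n):
    """E[M_n] by first-step analysis on the barrier walk itself: from state
    i < n - 1 the next accepted jump has size k w.p. p_k / Z_i, where Z_i
    normalizes over the jumps that stay below the barrier."""
    e = [0.0] * n                    # e[n-1] = 0: the particle is stuck there
    for i in range(n - 2, -1, -1):
        z = sum(p.get(k, 0.0) for k in range(1, n - i))
        e[i] = 1.0 + sum(p.get(k, 0.0) / z * e[i + k] for k in range(1, n - i))
    return e[0]
```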

