Capturing the Drunk Robber on a Graph

10.37236/3398 ◽  
2014 ◽  
Vol 21 (3) ◽  
Author(s):  
Natasha Komarov ◽  
Peter Winkler

We show that the expected time for a smart "cop" to catch a drunk "robber" on an $n$-vertex graph is at most $n + o(n)$. More precisely, let $G$ be a simple, connected, undirected graph with distinguished points $u$ and $v$ among its $n$ vertices. A cop begins at $u$ and a robber at $v$; they move alternately from vertex to adjacent vertex. The robber moves randomly, according to a simple random walk on $G$; the cop sees all and moves as she wishes, with the object of "capturing" the robber—that is, occupying the same vertex—in least expected time. We show that the cop succeeds in expected time no more than $n + o(n)$. Since there are graphs in which capture time is at least $n - o(n)$, this is roughly best possible. We note also that no function of the diameter can be a bound on capture time.
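As a companion illustration (not the authors' cop strategy), the following Monte Carlo sketch estimates the capture time when the cop simply moves greedily along a shortest path toward the drunk robber; the example graph, starting vertices, trial count, and the greedy rule are all assumptions made here for demonstration.

# A minimal Monte Carlo sketch (not the paper's optimal cop strategy):
# the cop greedily steps along a shortest path toward the robber's
# current vertex, while the robber performs a simple random walk.
import random
import networkx as nx

def greedy_capture_time(G, cop, robber, rng=random):
    """Simulate one pursuit; return the number of cop moves until capture."""
    moves = 0
    while cop != robber:
        # Cop moves: one step along a shortest path toward the robber.
        path = nx.shortest_path(G, source=cop, target=robber)
        cop = path[1]
        moves += 1
        if cop == robber:
            break
        # Robber moves: simple random walk to a uniformly random neighbor.
        robber = rng.choice(list(G.neighbors(robber)))
    return moves

if __name__ == "__main__":
    G = nx.cycle_graph(50)          # assumed example graph; u = 0, v = 25
    trials = [greedy_capture_time(G, 0, 25) for _ in range(2000)]
    print("estimated expected capture time:", sum(trials) / len(trials))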

Algorithms ◽  
2018 ◽  
Vol 11 (10) ◽  
pp. 149 ◽  
Author(s):  
Ioannis Lamprou ◽  
Russell Martin ◽  
Paul Spirakis

We define a general model of stochastically-evolving graphs, namely the edge-uniform stochastically-evolving graphs. In this model, each possible edge of an underlying general static graph evolves independently, being either alive or dead at each discrete time step of evolution, following a (Markovian) stochastic rule. The stochastic rule is identical for each possible edge and may depend on the past k ≥ 0 observations of the edge's state. We examine two kinds of random walks for a single agent taking place in such a dynamic graph: (i) the Random Walk with a Delay (RWD), where at each step the agent chooses (uniformly at random) an incident possible edge, i.e., an incident edge in the underlying static graph, and then waits till the edge becomes alive to traverse it; (ii) the more natural Random Walk on what is Available (RWA), where the agent only looks at alive incident edges at each time step and traverses one of them uniformly at random. Our study is on bounding the cover time, i.e., the expected time until each node is visited at least once by the agent. For RWD, we provide a first upper bound for the cases k = 0, 1 by correlating RWD with a simple random walk on a static graph. Moreover, we present a modified electrical network theory capturing the k = 0 case. For RWA, we derive some first bounds for the case k = 0, by reducing RWA to an RWD-equivalent walk with a modified delay. Further, we also provide a framework that is shown to compute the exact value of the cover time for a general family of stochastically-evolving graphs in exponential time. Finally, we conduct experiments on the cover time of RWA in edge-uniform graphs and compare the experimental findings with our theoretical bounds.
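The memoryless case (k = 0) is easy to simulate directly; the sketch below estimates the RWA cover time when every possible edge is alive independently with some probability p at each step. The parameter p, the underlying path graph, and the trial count are illustrative assumptions, not values taken from the paper.

# A minimal simulation sketch of the RWA cover time in the memoryless
# (k = 0) case: every possible edge of the underlying static graph is
# independently "alive" with probability p at each step (p is an assumed
# parameter for illustration).
import random
import networkx as nx

def rwa_cover_time(G, p, start=0, rng=random):
    """Random Walk on what is Available: return steps until all nodes are visited."""
    visited = {start}
    pos, steps = start, 0
    while len(visited) < G.number_of_nodes():
        # Resample which incident edges are alive at this time step.
        alive = [v for v in G.neighbors(pos) if rng.random() < p]
        if alive:                      # otherwise the agent waits in place
            pos = rng.choice(alive)
            visited.add(pos)
        steps += 1
    return steps

if __name__ == "__main__":
    G = nx.path_graph(20)              # assumed example underlying static graph
    est = sum(rwa_cover_time(G, p=0.5) for _ in range(500)) / 500
    print("estimated RWA cover time:", est)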


2016 ◽  
Vol 31 ◽  
pp. 444-464 ◽  
Author(s):  
Steve Kirkland ◽  
Ze Zeng

Given an irreducible stochastic matrix M, Kemeny's constant K(M) measures the expected time for the corresponding Markov chain to transition from any given initial state to a randomly chosen final state. A combinatorially based expression for K(M) is provided in terms of the weights of certain directed forests in a directed graph associated with M, yielding a particularly simple expression in the special case that M is the transition matrix for a random walk on a tree. An analogue of Braess' paradox is investigated, whereby inserting an edge into an undirected graph can increase the value of Kemeny's constant for the corresponding random walk. It is shown in particular that for almost all trees, there is an edge whose insertion increases the corresponding value of Kemeny's constant. Finally, it is proven that for any m ∈ ℕ, almost every tree T has the property that there are at least m trees, none of which are isomorphic to T, such that the values of Kemeny's constant for the corresponding random walks coincide with the value of Kemeny's constant for the random walk on T. Several illustrative examples are included.
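For concreteness, one common convention gives Kemeny's constant as the sum of 1/(1 − λ) over the non-unit eigenvalues λ of the transition matrix (conventions differing by an additive 1 also appear in the literature). The sketch below computes K for the random walk on a small graph and checks the Braess-style effect of inserting a single edge; the particular tree and edge are chosen here only as an example.

# A minimal sketch (one common convention) of Kemeny's constant for the
# random walk on an undirected graph: K = sum of 1/(1 - lambda) over the
# non-unit eigenvalues lambda of the transition matrix.
import numpy as np
import networkx as nx

def kemeny_constant(G):
    A = nx.to_numpy_array(G)
    P = A / A.sum(axis=1, keepdims=True)      # transition matrix of the walk
    eig = np.linalg.eigvals(P)
    # Drop the eigenvalue closest to 1 and sum 1/(1 - lambda) over the rest.
    keep = np.delete(eig, np.argmin(np.abs(eig - 1.0)))
    return float(np.real(np.sum(1.0 / (1.0 - keep))))

if __name__ == "__main__":
    T = nx.path_graph(6)                      # a small tree (a path), assumed example
    K_before = kemeny_constant(T)
    T.add_edge(0, 2)                          # insert one edge (illustrative choice)
    K_after = kemeny_constant(T)
    print(K_before, K_after, "increased" if K_after > K_before else "decreased")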


1976 ◽  
Vol 13 (02) ◽  
pp. 355-356 ◽  
Author(s):  
Aidan Sudbury

Particles are situated on a rectangular lattice and proceed to invade each other's territory. When they are equally competitive this creates larger and larger blocks of one type as time goes by. It is shown that the expected size of such blocks is equal to the expected range of a simple random walk.
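The quantity on the right-hand side of that identity is easy to estimate numerically; the sketch below approximates the expected range (number of distinct sites visited) of a simple random walk, with the two-dimensional square lattice assumed here purely for illustration.

# A minimal Monte Carlo sketch of the expected range of a simple random
# walk on the 2-dimensional square lattice (lattice choice assumed here).
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def srw_range(n_steps, rng=random):
    """Number of distinct lattice sites visited in n_steps steps."""
    x = y = 0
    visited = {(0, 0)}
    for _ in range(n_steps):
        dx, dy = rng.choice(STEPS)
        x, y = x + dx, y + dy
        visited.add((x, y))
    return len(visited)

if __name__ == "__main__":
    trials = [srw_range(1000) for _ in range(2000)]
    print("estimated expected range after 1000 steps:", sum(trials) / len(trials))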


1996 ◽  
Vol 33 (1) ◽  
pp. 122-126
Author(s):  
Torgny Lindvall ◽  
L. C. G. Rogers

The use of Mineka coupling is extended to a case with a continuous state space: an efficient coupling of random walks S and S' in ℝ can be made such that S' − S is virtually a one-dimensional simple random walk. This insight settles a zero-two law of ergodicity. One more proof of Blackwell's renewal theorem is also presented.


2021 ◽  
Author(s):  
Thi Thi Zin ◽  
Pyke Tin ◽  
Pann Thinzar Seint ◽  
Kosuke Sumi ◽  
Ikuo Kobayashi ◽  
...  

2010 ◽  
Vol 149 (2) ◽  
pp. 351-372
Author(s):  
Wouter Kager ◽  
Lionel Levine

Internal diffusion-limited aggregation is a growth model based on random walk in ℤ^d. We study how the shape of the aggregate depends on the law of the underlying walk, focusing on a family of walks in ℤ^2 for which the limiting shape is a diamond. Certain of these walks—those with a directional bias toward the origin—have at most logarithmic fluctuations around the limiting shape. This contrasts with the simple random walk, where the limiting shape is a disk and the best known bound on the fluctuations, due to Lawler, is a power law. Our walks enjoy a uniform layering property which simplifies many of the proofs.
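A minimal simulation of internal DLA with the simple random walk (not the paper's biased walks, which produce the diamond-shaped cluster) looks like the following; the particle count is an arbitrary choice for illustration.

# A minimal sketch of internal diffusion-limited aggregation in Z^2 with
# the simple random walk; a biased step distribution would replace the
# uniform choice below.
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def idla_cluster(n_particles, rng=random):
    """Grow an IDLA cluster of n_particles sites, all walks started at the origin."""
    occupied = set()
    for _ in range(n_particles):
        x = y = 0
        # Walk from the origin until reaching a site not yet in the cluster.
        while (x, y) in occupied:
            dx, dy = rng.choice(STEPS)
            x, y = x + dx, y + dy
        occupied.add((x, y))
    return occupied

if __name__ == "__main__":
    cluster = idla_cluster(500)
    radius = max(abs(x) + abs(y) for x, y in cluster)
    print("cluster size:", len(cluster), "max l1 radius:", radius)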


1992 ◽  
Vol 29 (02) ◽  
pp. 305-312 ◽  
Author(s):  
W. Katzenbeisser ◽  
W. Panny

Let Q_n denote the number of times a simple random walk reaches its maximum, where the random walk starts at the origin and returns to the origin after 2n steps. Such random walks play an important role in probability and statistics. In this paper the distribution and the moments of Q_n are considered and their asymptotic behavior is studied.
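One natural way to explore the distribution of Q_n numerically is to sample random walk bridges, i.e., uniformly random orderings of n up-steps and n down-steps, and count the time indices at which the path equals its maximum. The sketch below does exactly that; whether the starting position is counted when the maximum is zero is a convention assumed here and may differ from the paper's.

# A minimal Monte Carlo sketch of Q_n: generate a simple random walk
# bridge (n up-steps and n down-steps in random order) and count how
# many times the path attains its maximum value.
import random

def count_maximum_visits(n, rng=random):
    steps = [1] * n + [-1] * n
    rng.shuffle(steps)
    path, s = [0], 0
    for step in steps:
        s += step
        path.append(s)
    m = max(path)
    return sum(1 for v in path if v == m)

if __name__ == "__main__":
    n, trials = 20, 10000
    samples = [count_maximum_visits(n) for _ in range(trials)]
    print("estimated E[Q_n] for n =", n, ":", sum(samples) / trials)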

