Markov chains, ${\mathscr R}$-trivial monoids and representation theory

2015 ◽  
Vol 25 (01n02) ◽  
pp. 169-231 ◽  
Author(s):  
Arvind Ayyer ◽  
Anne Schilling ◽  
Benjamin Steinberg ◽  
Nicolas M. Thiéry

We develop a general theory of Markov chains realizable as random walks on ${\mathscr R}$-trivial monoids. It provides explicit and simple formulas for the eigenvalues of the transition matrix, for multiplicities of the eigenvalues via Möbius inversion along a lattice, a condition for diagonalizability of the transition matrix and some techniques for bounding the mixing time. In addition, we discuss several examples, such as Toom–Tsetlin models, an exchange walk for finite Coxeter groups, as well as examples previously studied by the authors, such as nonabelian sandpile models and the promotion Markov chain on posets. Many of these examples can be viewed as random walks on quotients of free tree monoids, a new class of monoids whose combinatorics we develop.
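A classical special case of such explicit eigenvalue formulas is the Tsetlin library (the move-to-front chain), whose eigenvalues are the subset sums of the popularity weights, with multiplicities given by derangement numbers of the complement. The following sketch checks this numerically for three books with illustrative weights; it is an illustration of the phenomenon, not the paper's general construction.

```python
# Tsetlin library (move-to-front) on n = 3 books with illustrative
# weights w.  Classical result: the transition matrix has eigenvalue
# w(S) = sum_{i in S} w_i for each subset S, with multiplicity equal
# to the number of derangements of the complement of S.
from itertools import permutations
import numpy as np

w = np.array([0.5, 0.3, 0.2])            # popularity weights, sum to 1
states = list(permutations(range(3)))     # all orderings of the books
index = {s: i for i, s in enumerate(states)}

P = np.zeros((6, 6))
for s in states:
    for book in range(3):
        # choosing `book` moves it to the front of the ordering
        t = (book,) + tuple(x for x in s if x != book)
        P[index[s], index[t]] += w[book]

eig = np.sort(np.linalg.eigvals(P).real)
# derangement multiplicities d0=1, d1=0, d2=1, d3=2 give the spectrum
# {1, w1, w2, w3, 0, 0}
predicted = np.sort([1.0, 0.5, 0.3, 0.2, 0.0, 0.0])
```

The trace check is immediate: each book is in front in two of the six orderings, so tr(P) = 2(w1 + w2 + w3) = 2, matching the predicted spectrum sum.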

1998 ◽  
Vol 35 (03) ◽  
pp. 517-536 ◽  
Author(s):  
R. L. Tweedie

Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest corner truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to the probability distribution itself only in special cases. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is at first apparent. In the cases of uniformly ergodic chains, and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.
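As a toy numerical illustration of the truncation idea (a sketch with assumed parameters, not the paper's construction): a positive recurrent birth–death chain is stochastically monotone, and its truncated stationary vector recovers the true ratios π(j)/π(0).

```python
# Illustrative sketch: approximate pi(j)/pi(0) for a birth-death chain
# on {0,1,2,...} with up-probability p and down-probability q > p, using
# the n x n northwest truncation of P with the lost mass returned to
# the last diagonal entry.  All parameters are illustrative assumptions.
import numpy as np

p, q, n = 0.3, 0.5, 50
P = np.zeros((n, n))
P[0, 0], P[0, 1] = 1 - p, p
for i in range(1, n - 1):
    P[i, i - 1], P[i, i], P[i, i + 1] = q, 1 - p - q, p
P[n - 1, n - 2], P[n - 1, n - 1] = q, 1 - q   # truncated row, renormalized

# stationary vector of the truncated chain (eigenvector for eigenvalue 1)
vals, vecs = np.linalg.eig(P.T)
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

# the infinite chain has pi(j)/pi(0) = (p/q)**j; the truncation recovers it
ratios = pi[:5] / pi[0]
exact = (p / q) ** np.arange(5)
```

Because the birth–death chain is reversible, detailed balance survives the renormalized truncation, so the ratios here are in fact exact rather than merely convergent.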


2020 ◽  
Vol 02 (01) ◽  
pp. 2050004
Author(s):  
Je-Young Choi

Several methods have been developed to solve electrical circuits consisting of resistors and an ideal voltage source. A correspondence with random walks avoids the difficulties caused by choosing directions of currents and signs in potential differences. Starting from the random-walk method, we introduce a reduced transition matrix of the associated Markov chain whose dominant eigenvector alone determines the electric potentials at all nodes of the circuit and the equivalent resistance between the nodes connected to the terminals of the voltage source. Various means of finding the eigenvector are developed from its definition. A few example circuits are solved to show the usefulness of the present approach.
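The underlying random-walk correspondence can be sketched directly (this is the standard harmonic-function formulation, not the paper's reduced-transition-matrix construction): interior node potentials satisfy V_i = Σ_j p_ij V_j, where p_ij is the conductance-weighted transition probability. A toy circuit of four 1 Ω resistors in a cycle gives the series/parallel answer R_eq = (1·3)/(1+3) = 0.75 Ω.

```python
# Standard random-walk correspondence for resistor networks (a generic
# sketch, not the paper's reduced matrix): interior potentials are
# harmonic for the walk with p_ij = conductance_ij / sum_k conductance_ik.
# Toy circuit: unit resistors on the cycle 0-1-2-3-0; source terminals
# at node 0 (1 V) and node 1 (0 V).
import numpy as np

C = np.zeros((4, 4))                  # conductance matrix (1/R per edge)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    C[a, b] = C[b, a] = 1.0

Ptrans = C / C.sum(axis=1, keepdims=True)   # random-walk transition matrix

# solve the harmonic equations at interior nodes 2 and 3, with V0=1, V1=0
interior = [2, 3]
A = np.eye(2) - Ptrans[np.ix_(interior, interior)]
rhs = Ptrans[np.ix_(interior, [0])].ravel() * 1.0   # contribution of V0 = 1
V = np.array([1.0, 0.0, *np.linalg.solve(A, rhs)])

# total current leaving the source node gives the equivalent resistance
I = sum(C[0, j] * (V[0] - V[j]) for j in range(4))
R_eq = 1.0 / I
```

Solving by hand gives V2 = 1/3 and V3 = 2/3, hence I = 4/3 A and R_eq = 0.75 Ω, matching the series/parallel calculation.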


1987 ◽  
Vol 19 (03) ◽  
pp. 739-742 ◽  
Author(s):  
J. D. Biggins

If (non-overlapping) repeats of specified sequences of states in a Markov chain are considered, the result is a Markov renewal process. Formulae somewhat simpler than those given in Biggins and Cannings (1987) are derived which can be used to obtain the transition matrix and conditional mean sojourn times in this process.


2013 ◽  
Vol 50 (04) ◽  
pp. 918-930 ◽  
Author(s):  
Marie-Anne Guerry

When a discrete-time homogeneous Markov chain is observed at time intervals that correspond to its time unit, the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider the situation in which a Markov chain is observed at time intervals whose length equals twice the time unit of the chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices. This characterization is referred to in the literature as the embedding problem for discrete-time Markov chains. A probability matrix which has probability root(s) is called embeddable. In this paper, necessary and sufficient conditions for embeddability are formulated for two-state Markov chains, and the probability square roots of the transition matrix are presented in analytic form. In finding conditions for the existence of probability square roots for (k × k) transition matrices, properties of row-normalized matrices are examined. Besides the existence of probability square roots, the uniqueness of these solutions is discussed: in the case of nonuniqueness, a procedure is introduced to identify a transition matrix that takes into account the specificity of the concrete context. In the case of nonexistence of a probability root, the concept of an approximate probability root is introduced as a solution of an optimization problem related to approximate nonnegative matrix factorization.
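For the two-state case, the eigendecomposition route can be sketched as follows (an illustration with assumed entries, not the paper's full characterization): P = [[1−p, p], [q, 1−q]] has eigenvalues 1 and λ = 1 − p − q, and when λ ≥ 0, taking the principal square root of the spectrum yields a candidate root, which is a probability matrix exactly when all its entries lie in [0, 1].

```python
# Illustrative sketch for a two-state chain (not the paper's complete
# conditions): replace the eigenvalue lam = 1 - p - q by sqrt(lam) in
# the eigendecomposition and check whether the result is stochastic.
import numpy as np

p, q = 0.3, 0.2                      # illustrative transition probabilities
P = np.array([[1 - p, p], [q, 1 - q]])

vals, vecs = np.linalg.eig(P)        # eigenvalues are 1 and 1 - p - q
R = (vecs @ np.diag(np.sqrt(vals)) @ np.linalg.inv(vecs)).real

# R squares back to P by construction; rows sum to 1 because the
# all-ones eigenvector keeps eigenvalue sqrt(1) = 1
is_probability_root = bool(np.all(R >= -1e-12) and np.allclose(R.sum(axis=1), 1))
```

For these entries λ = 0.5, so the principal root exists and is stochastic; when λ < 0 the spectral square root leaves the reals and no probability root of this form exists.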


2013 ◽  
Vol 50 (4) ◽  
pp. 943-959 ◽  
Author(s):  
Guan-Yu Chen ◽  
Laurent Saloff-Coste

We make a connection between continuous-time and lazy discrete-time Markov chains through the comparison of cutoffs and mixing times in total variation distance. For illustration, we consider finite birth and death chains and provide a criterion for cutoffs using the eigenvalues of the transition matrix.
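The lazy-chain device behind such comparisons can be sketched in a few lines (a generic illustration on an assumed toy chain, not the paper's criterion): for any transition matrix P, the lazy chain (I + P)/2 has eigenvalues (1 + λ)/2 ≥ 0, which removes periodicity and makes the discrete-time spectrum comparable to the continuous-time one.

```python
# Minimal sketch of the lazy-chain device: eigenvalues of (I + P) / 2
# are (1 + lam) / 2 >= 0, since every eigenvalue of P satisfies
# lam >= -1.  Toy birth-and-death chain with illustrative rates.
import numpy as np

P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.2, 0.5, 0.0],
    [0.0, 0.3, 0.2, 0.5],
    [0.0, 0.0, 0.3, 0.7],
])
lazy = (np.eye(4) + P) / 2

# birth-and-death chains are reversible, so both spectra are real
eig_P = np.sort(np.linalg.eigvals(P).real)
eig_lazy = np.sort(np.linalg.eigvals(lazy).real)
```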


2019 ◽  
Vol 44 (3) ◽  
pp. 282-308 ◽  
Author(s):  
Brian G. Vegetabile ◽  
Stephanie A. Stout-Oswald ◽  
Elysia Poggi Davis ◽  
Tallie Z. Baram ◽  
Hal S. Stern

Predictability of behavior is an important characteristic in many fields including biology, medicine, marketing, and education. When a sequence of actions performed by an individual can be modeled as a stationary time-homogeneous Markov chain, the predictability of the individual’s behavior can be quantified by the entropy rate of the process. This article compares three estimators of the entropy rate of finite Markov processes. The first two methods directly estimate the entropy rate through estimates of the transition matrix and stationary distribution of the process. The third method is related to the sliding-window Lempel–Ziv compression algorithm. The methods are compared via a simulation study and in the context of a study of interactions between mothers and their children.
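A plug-in estimator of the first kind described above can be sketched as follows (the simulated chain and its parameters are illustrative assumptions): estimate the transition matrix from bigram counts, the stationary distribution from state frequencies, and combine them as Ĥ = −Σᵢ π̂ᵢ Σⱼ P̂ᵢⱼ log₂ P̂ᵢⱼ.

```python
# Sketch of a plug-in entropy-rate estimator: transition matrix from
# bigram counts, stationary distribution from state frequencies.
import numpy as np

rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative 2-state chain

# simulate a long trajectory
n, x = 100_000, 0
seq = np.empty(n, dtype=int)
for t in range(n):
    seq[t] = x
    x = rng.choice(2, p=P_true[x])

counts = np.zeros((2, 2))
np.add.at(counts, (seq[:-1], seq[1:]), 1)     # bigram (transition) counts
P_hat = counts / counts.sum(axis=1, keepdims=True)
pi_hat = np.bincount(seq, minlength=2) / n    # empirical state frequencies

logs = np.where(P_hat > 0, np.log2(np.where(P_hat > 0, P_hat, 1.0)), 0.0)
entropy_rate = -np.sum(pi_hat[:, None] * P_hat * logs)   # bits per step
```

For this chain the stationary distribution is (2/3, 1/3) and the true entropy rate is about 0.55 bits per step, which the estimate approaches as the trajectory grows.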


Proceedings ◽  
2020 ◽  
Vol 54 (1) ◽  
pp. 28
Author(s):  
Paula Carracedo-Reboredo ◽  
Cristian R. Munteanu ◽  
Humbert González-Díaz ◽  
Carlos Fernandez-Lozano

Markov Chain Molecular Descriptors (MCDs) have been largely used to solve Cheminformatics problems. The software to perform the calculation is not always available for general users. In this work, we developed the first library in R for the calculation of MCDs and we also report the first public web server for the calculation of MCDs online that include the calculation of a new class of MCDs called Markov Singular values. We also report the first Cheminformatics study of the biological activity of 5644 compounds against colorectal cancer.


2014 ◽  
Vol 51 (A) ◽  
pp. 377-389 ◽  
Author(s):  
Peter W. Glynn ◽  
Chang-Han Rhee

We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.


2014 ◽  
Vol 14 (01) ◽  
pp. 1550003 ◽  
Author(s):  
Liu Yang ◽  
Kai-Xuan Zheng ◽  
Neng-Gang Xie ◽  
Ye Ye ◽  
Lu Wang

For a multi-agent spatial Parrondo's model composed of games A and B, we use discrete-time Markov chains to derive the probability transition matrix. We then deduce the stationary distributions for games A and B played individually and for the randomized combination of game A + B. We notice that under a specific set of parameters, two absorbing states, rather than a fixed stationary distribution, exist in game B. However, the randomized game A + B can jump out of the absorbing states of game B and has a fixed stationary distribution because of the "agitating" role of game A. Moreover, starting at different initial states, we deduce the probabilities of absorption at the two absorbing barriers.
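The "agitation" effect can be seen in a minimal toy version (an illustrative three-state construction, not the authors' spatial model): game B alone has two absorbing states, but the randomized mixture with a fair game A is irreducible and regains a proper stationary distribution.

```python
# Toy sketch (not the authors' spatial model): game B has absorbing
# states 0 and 2, but the randomized mixture (P_A + P_B) / 2 with a
# fair reflecting game A escapes them and has a stationary distribution.
import numpy as np

P_B = np.array([[1.0, 0.0, 0.0],     # states 0 and 2 absorb in game B
                [0.5, 0.0, 0.5],
                [0.0, 0.0, 1.0]])
P_A = np.array([[0.0, 1.0, 0.0],     # fair walk, reflecting at the ends
                [0.5, 0.0, 0.5],
                [0.0, 1.0, 0.0]])
P_AB = (P_A + P_B) / 2               # randomized game A + B

# stationary distribution of the mixed game
vals, vecs = np.linalg.eig(P_AB.T)
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

absorbing_in_B = np.where(np.diag(P_B) == 1.0)[0]
```

Here P_AB happens to be doubly stochastic, so the mixed game's stationary distribution is uniform over the three states even though game B alone gets trapped.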


1994 ◽  
Vol 26 (4) ◽  
pp. 988-1005 ◽  
Author(s):  
Bernard Van Cutsem ◽  
Bernard Ycart

This paper studies the absorption time of an integer-valued Markov chain with a lower-triangular transition matrix. The main results concern the asymptotic behavior of the absorption time as the starting point tends to infinity (asymptotics of moments and a central limit theorem). They are obtained using stochastic comparison for Markov chains and the classical theorems of renewal theory. Applications to the description of large random chains of partitions and large random ordered partitions are given.
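A concrete lower-triangular example of such an absorption-time computation (an illustrative chain with an assumed jump law, not the paper's models): if state i > 0 jumps to a uniform state below it and 0 is absorbing, the expected absorption time from i is the harmonic number H_i, growing like log i in the starting point.

```python
# Toy lower-triangular chain on {0, ..., N}: state i > 0 jumps to a
# uniform state in {0, ..., i-1}; state 0 absorbs.  Expected absorption
# times come from the fundamental matrix (I - Q)^{-1} applied to the
# all-ones vector, and equal harmonic numbers H_i = 1 + 1/2 + ... + 1/i.
import numpy as np

N = 20
Q = np.zeros((N, N))                 # transient-to-transient block, states 1..N
for i in range(1, N + 1):
    for j in range(1, i):            # jumps among transient states
        Q[i - 1, j - 1] = 1.0 / i    # the remaining mass 1/i goes to state 0

fundamental = np.linalg.inv(np.eye(N) - Q)
t = fundamental @ np.ones(N)         # expected absorption times t_1, ..., t_N

harmonic = np.cumsum(1.0 / np.arange(1, N + 1))
```

The recursion t_i = 1 + (1/i) Σ_{j<i} t_j telescopes to t_i = H_i, consistent with logarithmic growth of the absorption time in the starting point.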

