Vulnerability of networks of interacting Markov chains

Author(s):  
L. Kocarev, N. Zlatanov, D. Trajanov

The concept of vulnerability is introduced for a model of random, dynamical interactions on networks. In this model, known as the influence model, the nodes are arranged in an arbitrary network, while the evolution of the status at a node is according to an internal Markov chain, but with transition probabilities that depend not only on the current status of that node but also on the statuses of the neighbouring nodes. Vulnerability is treated analytically and numerically for several networks with different topological structures, as well as for two real networks—the network of infrastructures and the EU power grid—identifying the most vulnerable nodes of these networks.
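A minimal simulation sketch of this class of dynamics may help fix ideas. The coupling rule below (a convex blend of each node's internal chain with the fraction of neighbours in status 1, weighted by a parameter alpha) is an illustrative assumption, not the paper's exact formulation of the influence model.

```python
import numpy as np

def simulate_influence_model(adj, P, alpha=0.5, steps=100, seed=0):
    """Toy influence-model dynamics on a graph (illustrative, not the
    paper's exact update rule): each node has a binary status evolving
    by an internal 2-state chain P, but its probability of moving to
    status 1 is blended with the fraction of neighbours currently in 1."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    status = rng.integers(0, 2, size=n)
    history = [status.copy()]
    deg = adj.sum(axis=1)
    for _ in range(steps):
        nbr_frac = (adj @ status) / np.maximum(deg, 1)  # neighbour influence
        p_one = (1 - alpha) * P[status, 1] + alpha * nbr_frac
        status = (rng.random(n) < p_one).astype(int)
        history.append(status.copy())
    return np.array(history)

# Ring of 10 nodes; internal chain slightly biased toward status 0.
A = np.zeros((10, 10), dtype=int)
for i in range(10):
    A[i, (i - 1) % 10] = A[i, (i + 1) % 10] = 1
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
traj = simulate_influence_model(A, P)
print("final statuses:", traj[-1])
```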

2004, Vol. 2004 (8), pp. 421-429

Author(s):
Souad Assoudou, Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is based on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
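As a rough illustration of the estimation step, the sketch below runs a random-walk Metropolis sampler for a two-state chain. Note the assumption: it uses independent Beta(1/2, 1/2) priors on the two transition probabilities as a stand-in, whereas the paper's Jeffreys prior couples them.

```python
import numpy as np

def count_transitions(chain):
    """Transition counts n[a][b] = number of a->b moves in a 0/1 sequence."""
    n = np.zeros((2, 2))
    for a, b in zip(chain[:-1], chain[1:]):
        n[a, b] += 1
    return n

def log_posterior(p, q, n):
    """Log posterior for P(0->1)=p, P(1->0)=q under independent
    Beta(1/2, 1/2) priors (a simplification; the paper's Jeffreys
    prior couples the two parameters)."""
    if not (0 < p < 1 and 0 < q < 1):
        return -np.inf
    ll = (n[0, 1] * np.log(p) + n[0, 0] * np.log(1 - p)
          + n[1, 0] * np.log(q) + n[1, 1] * np.log(1 - q))
    prior = -0.5 * (np.log(p) + np.log(1 - p) + np.log(q) + np.log(1 - q))
    return ll + prior

def metropolis(n, iters=5000, step=0.05, seed=1):
    """Random-walk Metropolis over (p, q)."""
    rng = np.random.default_rng(seed)
    p, q = 0.5, 0.5
    lp = log_posterior(p, q, n)
    samples = []
    for _ in range(iters):
        p_new, q_new = p + step * rng.normal(), q + step * rng.normal()
        lp_new = log_posterior(p_new, q_new, n)
        if np.log(rng.random()) < lp_new - lp:
            p, q, lp = p_new, q_new, lp_new
        samples.append((p, q))
    return np.array(samples)

chain = [0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0]   # tiny simulated data set
samples = metropolis(count_transitions(chain))
print("posterior means:", samples[len(samples) // 2:].mean(axis=0))
```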


Author(s):  
Peter L. Chesson

Random transition probability matrices with stationary independent factors define "white noise" environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities, and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.
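A sketch of the construction, under illustrative assumptions (Dirichlet rows for the environment draws): two chains driven by the same i.i.d. sequence of random transition matrices are each marginally Markov with the averaged matrix, but agree with each other more often than independent copies would.

```python
import numpy as np

def agreement_rate(steps=20000, seed=2):
    """Two chains driven by the SAME i.i.d. random transition matrix at
    each step (Dirichlet(1,1) rows -- an illustrative choice).  Marginally
    each moves with the averaged matrix [[.5,.5],[.5,.5]], under which
    independent copies would agree only half the time."""
    rng = np.random.default_rng(seed)
    s = np.zeros(2, dtype=int)
    agree = 0
    for _ in range(steps):
        P = rng.dirichlet(np.ones(2), size=2)   # shared white-noise environment
        s = np.array([rng.choice(2, p=P[x]) for x in s])
        agree += int(s[0] == s[1])
    return agree / steps

print("agreement rate (0.5 if independent):", agreement_rate())
```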


1968, Vol. 5 (2), pp. 401-413
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semigroup properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple-subchain case if and only if the change in the transition probabilities neither alters the number of subchains nor intermixes them. The formalism is presented for the case when this condition is satisfied.
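A numerical sketch of this kind of sensitivity result: the classical first-order expression dπ = πEZ for a zero-row-sum perturbation E, with Z the fundamental matrix. The specific chain below is an illustrative assumption.

```python
import numpy as np

def stationary(P):
    """Stationary distribution: solve pi P = pi with sum(pi) = 1."""
    k = P.shape[0]
    A = np.vstack([P.T - np.eye(k), np.ones(k)])
    b = np.zeros(k + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def fundamental_matrix(P, pi):
    """Z = (I - P + 1 pi)^(-1), the fundamental matrix of the chain."""
    k = P.shape[0]
    return np.linalg.inv(np.eye(k) - P + np.outer(np.ones(k), pi))

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
pi = stationary(P)
Z = fundamental_matrix(P, pi)

# Perturbation direction E with zero row sums; first-order prediction
# d(pi) = pi @ E @ Z compared against a direct recomputation.
eps = 1e-4
E = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0]])
predicted = pi + eps * pi @ E @ Z
actual = stationary(P + eps * E)
print("first-order prediction:", predicted)
print("recomputed stationary :", actual)
```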


1977, Vol. 14 (2), pp. 298-308
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B₁, B₂, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books in which, after some finite position, all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
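A finite truncation of the shelf is easy to simulate; the selection probabilities below are illustrative assumptions.

```python
import random

def library_shelf(probs, k=1, steps=10000, seed=3):
    """Finite truncation of the infinite single-shelf library chain:
    repeatedly select book i with probability probs[i], remove it, and
    reinsert it at position k (0-indexed k-1), shifting books to fill
    the vacated slot.  k=1 is the classical move-to-front rule."""
    rng = random.Random(seed)
    shelf = list(range(len(probs)))          # book i starts in position i
    books = list(range(len(probs)))
    for _ in range(steps):
        b = rng.choices(books, weights=probs)[0]
        shelf.remove(b)
        shelf.insert(k - 1, b)
    return shelf

# Skewed selection probabilities: popular books drift to the front.
p = [0.4, 0.25, 0.15, 0.1, 0.06, 0.04]
print("shelf after mixing:", library_shelf(p))
```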


1976, Vol. 8 (4), pp. 737-771
Author(s):  
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
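For the countable case the paper's criteria reduce to Foster's classical drift condition. A toy sketch of that test, for a reflected random walk with an assumed up-probability p, is:

```python
def drift(x, p):
    """Mean one-step drift of a reflected random walk on {0, 1, 2, ...}
    that moves up with probability p and down with probability 1-p
    (down-steps at 0 stay put)."""
    if x == 0:
        return p          # up with prob p, stay otherwise
    return p - (1 - p)    # p*(+1) + (1-p)*(-1)

# Foster's criterion with Lyapunov function V(x) = x: if the drift is
# bounded above by some -delta < 0 outside a finite set, the chain is
# positive recurrent.  For this walk that holds exactly when p < 1/2;
# the criterion is silent about the other cases.
for p in (0.3, 0.5, 0.7):
    d = drift(1, p)       # drift is constant outside {0}
    verdict = "positive recurrent" if d < 0 else "not positive (by this test)"
    print(f"p={p}: drift outside {{0}} = {d:+.2f} -> {verdict}")
```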


2008, Vol. 45 (3), pp. 640-649

Author(s):
Victor de la Peña, Henryk Gzyl, Patrick McDonald

Let Wₙ be a simple Markov chain on the integers. Suppose that Xₙ is a simple Markov chain on the integers whose transition probabilities coincide with those of Wₙ off a finite set. We prove that there is an M > 0 such that the Markov chain Wₙ and the joint distributions of the first hitting time and first hitting place of Xₙ, started at the origin, for the sets {−M, M} and {−(M + 1), M + 1} algorithmically determine the transition probabilities of Xₙ.
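A sketch of the objects appearing in the theorem, for an assumed chain that differs from the simple random walk only at the origin: the empirical joint law of the first hitting time and place of {−M, M}.

```python
import numpy as np

def hitting_time_and_place(step_probs, M, n_runs=20000, seed=4):
    """Empirical joint law of (first hitting time, hitting place) of the
    set {-M, M} for a chain on the integers started at 0.  step_probs(x)
    returns (prob of +1 move, prob of -1 move) at state x; the remainder
    is the holding probability."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_runs):
        x, t = 0, 0
        while abs(x) < M:
            up, down = step_probs(x)
            u = rng.random()
            x += 1 if u < up else (-1 if u < up + down else 0)
            t += 1
        stats.append((t, x))
    return stats

# A chain whose transition probabilities differ from the simple random
# walk only at state 0 (a finite set, as in the theorem).
def steps(x):
    return (0.7, 0.3) if x == 0 else (0.5, 0.5)

data = hitting_time_and_place(steps, M=3)
times, places = zip(*data)
print("mean hitting time:", np.mean(times),
      " P(hit +3 first):", np.mean([p == 3 for p in places]))
```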


1975, Vol. 12 (4), pp. 744-752
Author(s):  
Richard L. Tweedie

In many Markov chain models, the immediate characteristic of importance is the positive recurrence of the chain. In this note we investigate whether positivity, and also recurrence, are robust properties of Markov chains when the transition laws are perturbed. The chains we consider are on a fairly general state space: when specialised to a countable space, our results are essentially that, if the transition matrices of two irreducible chains coincide on all but a finite number of columns, then positivity of one implies positivity of both; whilst if they coincide on all but a finite number of rows and columns, recurrence of one implies recurrence of both. Examples are given to show that these results (and their general analogues) cannot in general be strengthened.
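A small simulation consistent with the countable-space statement (the walks below are illustrative assumptions): altering a positive recurrent reflected walk on a finite set of states leaves its mean return time finite.

```python
import numpy as np

def mean_return_time(p_at, n_excursions=5000, seed=5):
    """Estimate the mean return time to 0 of a walk on {0, 1, 2, ...}
    whose up-probability at x is p_at(x), down otherwise, with the walk
    stepping from 0 to 1 to start each excursion."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_excursions):
        x, t = 1, 1                       # leave 0, count steps to return
        while x != 0:
            x += 1 if rng.random() < p_at(x) else -1
            t += 1
        total += t
    return total / n_excursions

base = lambda x: 0.3                           # negative drift: positive recurrent
perturbed = lambda x: 0.9 if x <= 5 else 0.3   # transition law altered on a finite set

print("mean return time, base     :", mean_return_time(base))
print("mean return time, perturbed:", mean_return_time(perturbed))
```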


2012, Vol. 27 (1), pp. 53-55
Author(s):  
Sheldon M. Ross

Consider two independent Markov chains having states 0 and 1 and identical transition probabilities. At each stage one of the chains is observed, and a reward equal to the observed state is earned. Assuming prior probabilities on the initial states of the chains, it is shown that the myopic policy, which always chooses to observe the chain most likely to be in state 1, stochastically maximizes the sequence of rewards earned in each period.
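A belief-state simulation of the myopic rule (the transition matrix and priors below are illustrative assumptions):

```python
import numpy as np

def myopic_rewards(P, p1, p2, horizon=50, seed=6):
    """Simulate the myopic rule: at each stage observe the chain whose
    posterior probability of being in state 1 is larger, collect the
    observed state as reward, then propagate beliefs one step through P."""
    rng = np.random.default_rng(seed)
    s = [rng.random() < p1, rng.random() < p2]      # true hidden states
    belief = [p1, p2]
    total = 0
    for _ in range(horizon):
        i = int(belief[1] > belief[0])              # observe the more promising chain
        total += s[i]                               # reward = observed state
        belief[i] = P[int(s[i]), 1]                 # its next-state belief is exact
        j = 1 - i                                   # unobserved chain: filter forward
        belief[j] = belief[j] * P[1, 1] + (1 - belief[j]) * P[0, 1]
        s = [rng.random() < P[int(x), 1] for x in s]   # both chains move
    return total

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print("myopic total reward:", myopic_rewards(P, 0.6, 0.4))
```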


2013, Vol. 50 (4), pp. 918-930
Author(s):  
Marie-Anne Guerry

When a discrete-time homogeneous Markov chain is observed at time intervals that correspond to its time unit, the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider a situation in which a Markov chain is observed on time intervals with length equal to twice the time unit of the Markov chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices; this characterization is referred to in the literature as the embedding problem for discrete-time Markov chains. A probability matrix which has such probability root(s) is called embeddable. In this paper, necessary and sufficient conditions for embeddability are formulated for two-state Markov chains, and the probability square roots of the transition matrix are presented in analytic form. In finding conditions for the existence of probability square roots for (k × k) transition matrices, properties of row-normalized matrices are examined. Besides the existence of probability square roots, the uniqueness of these solutions is discussed: in the case of nonuniqueness, a procedure is introduced to identify a transition matrix that takes into account the specificity of the concrete context. In the case of nonexistence of a probability root, the concept of an approximate probability root is introduced as the solution of an optimization problem related to approximate nonnegative matrix factorization.
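For intuition, a sketch that tests the principal matrix square root of a 2 × 2 transition matrix for stochasticity. Note the assumption: only the principal root is examined here, whereas the paper treats all roots and approximate roots.

```python
import numpy as np
from scipy.linalg import sqrtm

def probability_sqrt(P, tol=1e-8):
    """Principal matrix square root of P; returns it if it is again a
    transition matrix (real, nonnegative rows summing to 1), else None.
    Only the principal root is checked -- an illustrative simplification."""
    R = sqrtm(P)
    if np.abs(np.imag(R)).max() > tol:   # principal root is not real
        return None
    R = np.real(R)
    ok = (R >= -tol).all() and np.allclose(R.sum(axis=1), 1.0) \
         and np.allclose(R @ R, P)
    return R if ok else None

# Embeddable example: second eigenvalue 1 - p - q is positive.
P_good = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
# Non-embeddable example: 1 - p - q < 0, the principal root is complex.
P_bad = np.array([[0.1, 0.9],
                  [0.8, 0.2]])

for P in (P_good, P_bad):
    R = probability_sqrt(P)
    if R is None:
        print("no probability root (principal branch)")
    else:
        print(np.round(R, 4), "\ncheck R @ R == P:", np.allclose(R @ R, P))
```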


1988, Vol. 2 (2), pp. 267-268
Author(s):  
Sheldon M. Ross

In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pᵢⱼ(t) and Tᵢⱼ(t) denote, respectively, the probability that it is in state j at time t and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y₁, …, Yₙ be independent exponential random variables, each with rate λ = n/t, which are also independent of the Markov chain.
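A sketch of the resulting approximation, under the reading that the chain is evaluated at the sum Y₁ + … + Yₙ: the transition matrix at a single exponential time with rate λ is (I − Q/λ)⁻¹, so the Erlang average is its n-th power, which converges to exp(tQ) as n grows.

```python
import numpy as np

def erlang_approximation(Q, t, n=100):
    """Approximate P(t) = exp(tQ) by evaluating the chain at the sum of
    n i.i.d. exponential times with rate n/t: the transition matrix at a
    single exponential time with rate lam is inv(I - Q/lam), and
    independence gives its n-th power for the sum."""
    k = Q.shape[0]
    lam = n / t
    step = np.linalg.inv(np.eye(k) - Q / lam)
    return np.linalg.matrix_power(step, n)

# Two-state generator (illustrative); compare against exact exp(tQ).
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
t = 0.7
approx = erlang_approximation(Q, t)

# Exact exp(tQ) via eigendecomposition of this diagonalizable generator.
w, V = np.linalg.eig(t * Q)
exact = np.real(V @ np.diag(np.exp(w)) @ np.linalg.inv(V))
print("approximation:\n", np.round(approx, 4))
print("exact exp(tQ):\n", np.round(exact, 4))
```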

