Dynamics of large uncontrolled loss networks

2000 ◽  
Vol 37 (3) ◽  
pp. 685-695 ◽  
Author(s):  
Stan Zachary

This paper studies the connection between the dynamical and equilibrium behaviour of large uncontrolled loss networks. We consider the behaviour of the number of calls of each type in the network, and show that, under the limiting regime of Kelly (1986), all trajectories of the limiting dynamics converge to a single fixed point, which is necessarily that on which the limiting stationary distribution is concentrated. The approach uses Lyapunov techniques and involves the evolution of the transition rates of a stationary Markov process in such a way that it tends to reversibility.
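
As a minimal illustration of the scaling referred to above (a single Erlang loss link rather than the multi-class network treated in the paper), the following Python sketch simulates an M/M/C/C link with arrival rate νC and unit-mean holding times; for large C the scaled occupancy n(t)/C settles near min(ν, 1), the fixed point at which the limiting stationary distribution concentrates. Parameter names and values are illustrative assumptions, not the paper's setup.

```python
import random

def simulate_single_link(C=200, nu=0.7, T=20.0, seed=0):
    """Simulate an M/M/C/C (Erlang loss) link with arrival rate nu*C and
    unit-mean exponential holding times, returning the scaled path n(t)/C.

    Toy single-link illustration of the Kelly-type scaling: as C grows,
    the scaled trajectory hugs the fixed point min(nu, 1), where the
    stationary distribution also concentrates.
    """
    rng = random.Random(seed)
    t, n = 0.0, 0
    path = [(t, n / C)]
    while t < T:
        arrival_rate = nu * C
        departure_rate = n            # unit service rate per call in progress
        total = arrival_rate + departure_rate
        t += rng.expovariate(total)
        if rng.random() < arrival_rate / total:
            if n < C:                 # calls arriving at a full link are lost
                n += 1
        else:
            n -= 1
        path.append((t, n / C))
    return path

if __name__ == "__main__":
    path = simulate_single_link()
    print("final scaled occupancy:", path[-1][1], "fixed point:", min(0.7, 1.0))
```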


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Hasanen A. Hammad ◽  
Manuel De La Sen

We prove the existence of tripled fixed points (TFPs) of a new generalized nonlinear contraction mapping in complete cone b-metric spaces (CCbMSs). We also present some exciting consequences as corollaries and three nontrivial examples. Finally, we find a solution for a tripled system of integral equations (TSIE) and discuss a unique stationary distribution for the Markov process (SDMP).


1993 ◽  
Vol 25 (01) ◽  
pp. 82-102
Author(s):  
M. G. Nair ◽  
P. K. Pollett

In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992). In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations’ involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit’ chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents. The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
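
As a finite-state numerical illustration of the central object above: a quasi-stationary distribution m satisfies m Q_C = −µ m on the irreducible class C, so for a finite absorbing birth-death chain it can be computed as the left eigenvector of the restricted generator associated with the eigenvalue of largest real part. The sketch below is only such a finite example; it does not address the countable-state and µ-invariance questions the paper resolves.

```python
import numpy as np

def qsd_birth_death(birth, death):
    """Quasi-stationary distribution of a finite absorbing birth-death chain.

    States are 1..n (the irreducible class C), with absorption into state 0
    from state 1 at rate death[0]; birth[i] and death[i] are the rates out of
    state i+1.  The QSD m solves m Q_C = -mu m, so it is the left eigenvector
    of the generator restricted to C for the eigenvalue with largest real part
    (finite-state illustration only; the paper treats countable chains).
    """
    n = len(birth)
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = birth[i]
        Q[i, i] = -(birth[i] + death[i])
        if i > 0:
            Q[i, i - 1] = death[i]
    # left eigenvectors of Q are right eigenvectors of Q.T
    vals, vecs = np.linalg.eig(Q.T)
    k = np.argmax(vals.real)
    m = np.abs(vecs[:, k].real)
    mu = -vals[k].real               # decay rate: m Q_C = -mu m
    return m / m.sum(), mu

if __name__ == "__main__":
    m, mu = qsd_birth_death(birth=[1.0] * 9 + [0.0], death=[1.5] * 10)
    print("decay rate mu =", round(mu, 4))
    print("QSD:", np.round(m, 4))
```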


Author(s):  
Funda Iscioglu

In multi-state modelling, a system and its components have a range of performance levels, from perfect functioning to complete failure. Such modelling offers a more flexible way to understand the behaviour of mechanical systems. To evaluate a system’s dynamic performance, lifetime analysis of multi-state systems has been considered in many research articles. Order-statistics-based analysis of the lifetime properties of multi-state k-out-of-n systems has recently been studied in the literature under a homogeneous continuous-time Markov process assumption. In this paper, we develop reliability measures for multi-state k-out-of-n systems by assuming a non-homogeneous continuous-time Markov process for the components, which provides time-dependent transition rates between component states. We therefore capture in the analysis the effect of age on component state changes, which is typical of many systems and more practical for real-life applications.
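
As a hedged sketch of the kind of computation involved (not the paper's own model), the snippet below takes a hypothetical three-state component with age-dependent transition rates, solves the Kolmogorov forward equations for its state probabilities, and evaluates a multi-state k-out-of-n system probability under an independent, identical-components assumption. The rate functions, state count, and parameters are assumptions for illustration.

```python
import math
from scipy.integrate import solve_ivp

# Hypothetical 3-state component: 2 = perfect, 1 = degraded, 0 = failed.
# Non-homogeneous rates grow linearly with age t (an assumed ageing model).
def rate_21(t): return 0.1 + 0.05 * t    # perfect -> degraded
def rate_10(t): return 0.05 + 0.05 * t   # degraded -> failed

def forward_equations(t, p):
    """Kolmogorov forward equations for one component, p = (p2, p1, p0)."""
    p2, p1, p0 = p
    return [-rate_21(t) * p2,
            rate_21(t) * p2 - rate_10(t) * p1,
            rate_10(t) * p1]

def system_state_probability(t, n=5, k=3, level=1):
    """P(multi-state k-out-of-n system is in state >= level at time t),
    assuming independent, identical components starting in the perfect state."""
    sol = solve_ivp(forward_equations, (0.0, t), [1.0, 0.0, 0.0],
                    t_eval=[t], rtol=1e-8)
    p2, p1, p0 = sol.y[:, -1]
    p_level = p2 if level == 2 else p2 + p1        # P(component >= level)
    return sum(math.comb(n, i) * p_level**i * (1 - p_level)**(n - i)
               for i in range(k, n + 1))

if __name__ == "__main__":
    for t in (1.0, 2.0, 5.0):
        print(f"t={t}: P(system >= 1) = {system_state_probability(t):.4f}")
```

The binomial step relies on the i.i.d.-components assumption; dependent components would require the joint Markov process instead.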


2010 ◽  
DMTCS Proceedings vol. AM, ... (Proceedings) ◽  
Author(s):  
Thomas Fernique ◽  
Damien Regnault

This paper introduces a Markov process inspired by the problem of quasicrystal growth. It acts over dimer tilings of the triangular grid by randomly performing local transformations, called $\textit{flips}$, which do not increase the number of identical adjacent tiles (this number can be thought of as the tiling energy). Fixed points of such a process play the role of quasicrystals. We are here interested in the worst-case expected number of flips to converge towards a fixed point. Numerical experiments suggest a $\Theta (n^2)$ bound, where $n$ is the number of tiles of the tiling. We prove an $O(n^{2.5})$ upper bound and discuss the gap between this bound and the previous one. We also briefly discuss the average case.


1975 ◽  
Vol 12 (03) ◽  
pp. 574-580 ◽  
Author(s):  
Warren W. Esty

Consider the following path, $Z_n(\omega)$, of a Galton-Watson process in reverse. The probabilities that $Z_{N-n} = j$ given $Z_N = i$ converge, as $N \to \infty$, to a probability function of a Markov process, $X_n$, which I call the ‘reverse process’. If the initial state is 0, I require that the transition probabilities be the limits given not only $Z_N = 0$ but also $Z_{N-1} > 0$. This corresponds to looking at a Galton-Watson process just prior to extinction. This paper gives the $n$-step transition probabilities for the reverse process, a stationary distribution if $m \neq 1$, and a limit law for $X_n/n$ if $m = 1$ and $\sigma^2 < \infty$. Two related results about $Z_{cn}$, $0 < c < 1$, for Galton-Watson processes conclude the paper.
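
A Monte Carlo illustration of the reverse process (not Esty's closed-form transition probabilities): simulate a subcritical Galton-Watson process with Poisson offspring, keep paths whose extinction generation N is large, and tabulate the population n generations before extinction. All parameter choices below are assumptions for the example.

```python
import math
import random
from collections import Counter

def poisson(lam, rng):
    """Poisson(lam) sample via Knuth's product method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def gw_path(m=0.8, z0=1, rng=random):
    """One Galton-Watson path with Poisson(m) offspring, run to extinction."""
    path, z = [z0], z0
    while z > 0:
        z = sum(poisson(m, rng) for _ in range(z))
        path.append(z)
    return path

def reverse_distribution(n=2, min_generations=10, runs=20000, seed=1):
    """Empirical law of Z_{N-n}, the population n generations before the
    extinction time N, over paths long enough to stand in for N -> infinity.
    A Monte Carlo illustration of the reverse process, not Esty's formulas."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(runs):
        path = gw_path(rng=rng)
        N = len(path) - 1         # extinction generation: Z_N = 0, Z_{N-1} > 0
        if N >= min_generations:
            counts[path[N - n]] += 1
    total = sum(counts.values())
    return {j: c / total for j, c in sorted(counts.items())}

if __name__ == "__main__":
    print(reverse_distribution())
```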


1975 ◽  
Vol 12 (03) ◽  
pp. 605-611 ◽  
Author(s):  
Joseph A. Yahav

A discrete-time Markov process on the interval [0, 1] is considered. Sufficient conditions for the existence of a unique stationary limiting distribution are given.


Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 631
Author(s):  
Marc Harper ◽  
Dashiell Fryer

We propose the entropy of random Markov trajectories originating and terminating at the same state as a measure of the stability of a state of a Markov process. These entropies can be computed in terms of the entropy rates and stationary distributions of Markov processes. We apply this definition of stability to local maxima and minima of the stationary distribution of the Moran process with mutation and show that variations in population size, mutation rate, and strength of selection all affect the stability of the stationary extrema.
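
A minimal sketch of the trajectory-entropy computation for a generic finite chain, using the Ekroot–Cover (1993) identity H_ii = H_rate/π_i that underlies the remark about entropy rates and stationary distributions; the Moran process transition matrix itself is not constructed here, and the example chain is an arbitrary placeholder.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an irreducible row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.abs(vecs[:, k].real)
    return pi / pi.sum()

def return_trajectory_entropies(P):
    """Entropy of the random trajectory from state i back to state i.

    Uses the identity H_ii = H_rate / pi_i (Ekroot & Cover 1993), where
    H_rate is the entropy rate of the stationary chain -- the quantity the
    abstract proposes as a measure of the stability of state i.
    """
    pi = stationary_distribution(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    row_entropies = -(P * logP).sum(axis=1)   # entropy of each transition row
    h_rate = pi @ row_entropies               # entropy rate of the chain
    return h_rate / pi

if __name__ == "__main__":
    # small illustrative chain (not the Moran process of the paper)
    P = np.array([[0.8, 0.2, 0.0],
                  [0.1, 0.8, 0.1],
                  [0.0, 0.3, 0.7]])
    print("pi     =", np.round(stationary_distribution(P), 4))
    print("H_{ii} =", np.round(return_trajectory_entropies(P), 4))
```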

