Bounds for the Reliability of Multistate Systems with Partially Ordered State Spaces and Stochastically Monotone Markov Transitions

Author(s):  
Bo Henry Lindqvist

Consider a multistate system with partially ordered state space E, which is divided into a set C of working states and a set D of failure states. Let X(t) be the state of the system at time t and suppose {X(t)} is a stochastically monotone Markov chain on E. Let T be the failure time, i.e., the hitting time of the set D. We derive upper and lower bounds for the reliability of the system, defined as P_m(T > t), where m is the state of perfect system performance.
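For a finite chain the quantity P_m(T > t) can be computed exactly by restricting the transition matrix to the working states C, which makes bounds of this kind easy to sanity-check numerically. A minimal sketch (the four-state chain, the partition, and the transition probabilities below are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

# Hypothetical 4-state system: working states C = {1, 2, 3} (3 = perfect
# performance), failure set D = {0}. Rows of P sum to 1.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0: absorbing failure state
    [0.3, 0.7, 0.0, 0.0],   # state 1: degraded
    [0.1, 0.2, 0.7, 0.0],   # state 2: degraded
    [0.0, 0.1, 0.2, 0.7],   # state 3: perfect performance
])
C = [1, 2, 3]
m = 3

def reliability(P, C, m, t):
    """P_m(T > t): probability that the chain, started in m,
    stays inside the working set C for t steps."""
    Q = P[np.ix_(C, C)]                  # transitions within C only
    p = np.zeros(len(C))
    p[C.index(m)] = 1.0
    for _ in range(t):
        p = p @ Q                        # mass still alive in C after each step
    return p.sum()
```

Here `reliability(P, C, m, t)` is the survival probability of the hitting time of D; it is nonincreasing in t, and the paper's bounds bracket it using only monotonicity information rather than the full matrix.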

1987, Vol. 24 (3), pp. 679–695
Author(s):  
Bo Henry Lindqvist

We study monotone and associated Markov chains on finite partially ordered state spaces. Both discrete and continuous time, and both time-homogeneous and time-inhomogeneous chains are considered. The results are applied to binary and multistate reliability theory.
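For a totally ordered state space, stochastic monotonicity has a simple matrix characterization: every tail mass P(i, {k, …, n−1}) must be nondecreasing in the starting state i. A quick check of this special case (the paper's partially ordered setting replaces tails by increasing sets, which this sketch does not cover):

```python
def is_stochastically_monotone(P, tol=1e-12):
    """Check monotonicity of a row-stochastic matrix on the totally
    ordered state space {0, ..., n-1}: for every threshold k, the
    tail sum over columns k..n-1 must be nondecreasing in the row."""
    n = len(P)
    for k in range(n):
        tails = [sum(row[k:]) for row in P]
        if any(tails[i] > tails[i + 1] + tol for i in range(n - 1)):
            return False
    return True

# A birth-death chain is monotone; a chain that swaps states 0 and 1 is not.
P_monotone = [[0.7, 0.3, 0.0], [0.3, 0.4, 0.3], [0.0, 0.3, 0.7]]
P_swap     = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```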



2017, Vol. 32 (4), pp. 495–521
Author(s):  
Paweł Lorek

For a Markov chain on a finite partially ordered state space, we show that its Siegmund dual exists if and only if the chain is Möbius monotone. This is an extension of Siegmund's result for totally ordered state spaces, in which case the existence of the dual is equivalent to the usual stochastic monotonicity. Exploiting the relation between the stationary distribution of an ergodic chain and the absorption probabilities of its Siegmund dual, we present three applications: calculating the absorption probabilities of a chain with two absorbing states knowing the stationary distribution of the other chain; calculating the stationary distribution of an ergodic chain knowing the absorption probabilities of the other chain; and providing a stable simulation scheme for the stationary distribution of a chain provided we can simulate its Siegmund dual. These are accompanied by concrete examples: the gambler's ruin problem with arbitrary winning/losing probabilities; a non-symmetric game; an extension of a birth and death chain; a chain corresponding to the Fisher–Wright model; a non-standard tandem network of two servers; and the Ising model on a circle. We also show that one can construct a strong stationary dual chain by taking the appropriate Doob transform of the Siegmund dual of the time-reversed chain.
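The gambler's ruin application can be checked numerically: absorption probabilities solve a small linear system, and for p ≠ q they match the classical closed form. A sketch of that check (this is the textbook chain itself, not the paper's dual construction):

```python
import numpy as np

# Gambler's ruin on {0, ..., N}: win with prob p, lose with prob q = 1 - p.
# States 0 and N are absorbing. We compute h(i) = P_i(hit N before 0) by
# solving h(i) = p*h(i+1) + q*h(i-1) on the interior, h(0) = 0, h(N) = 1.
N, p = 5, 0.6
q = 1.0 - p

A = np.zeros((N - 1, N - 1))   # unknowns h(1), ..., h(N-1)
b = np.zeros(N - 1)
for i in range(1, N):
    A[i - 1, i - 1] = 1.0
    if i + 1 < N:
        A[i - 1, i] = -p       # term -p*h(i+1)
    else:
        b[i - 1] += p          # h(N) = 1 moves p to the right-hand side
    if i - 1 > 0:
        A[i - 1, i - 2] = -q   # term -q*h(i-1); h(0) = 0 contributes nothing
h = np.linalg.solve(A, b)

# Closed form for p != q: h(i) = (1 - (q/p)**i) / (1 - (q/p)**N)
r = q / p
closed = [(1 - r**i) / (1 - r**N) for i in range(1, N)]
```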


2013, Vol. 63 (6)
Author(s):  
Xiaosheng Zhu

Let φ be a homomorphism from the partially ordered abelian group (S, v) to the partially ordered abelian group (G, u) with φ(v) = u, where v and u are order units of S and G, respectively. Then φ induces an affine map φ* from the state space St(G, u) to the state space St(S, v). First, we give suitable conditions under which φ* is injective, surjective, or bijective. Let R be a semilocal ring with Jacobson radical J(R) and let π: R → R/J(R) be the canonical map. We discuss the affine map (K_0π)*. Second, for a semiprime right Goldie ring R with maximal right quotient ring Q, we consider the relations between St(R) and St(Q). Some results from [ALFARO, R.: State spaces, finite algebras, and skew group rings, J. Algebra 139 (1991), 134–154] and [GOODEARL, K. R.–WARFIELD, R. B., Jr.: State spaces of K_0 of noetherian rings, J. Algebra 71 (1981), 322–378] are extended.


2021, Vol. 58 (2), pp. 372–393
Author(s):  
H. M. Jansen

Our aim is to find sufficient conditions for weak convergence of stochastic integrals with respect to the state occupation measure of a Markov chain. First, we study properties of the state indicator function and the state occupation measure of a Markov chain. In particular, we establish weak convergence of the state occupation measure under a scaling of the generator matrix. Then, relying on the connection between the state occupation measure and the Dynkin martingale, we provide sufficient conditions for weak convergence of stochastic integrals with respect to the state occupation measure. We apply our results to derive diffusion limits for the Markov-modulated Erlang loss model and the regime-switching Cox–Ingersoll–Ross process.
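The scaling of the generator can be illustrated in the simplest possible case: for a two-state chain with generator sped up by a factor n, the occupation fraction of a state concentrates on its stationary weight. A simulation sketch (the rates, horizon, and scaling factor are made-up illustrations, not values from the paper):

```python
import random

def occupation_fraction(a, b, n, horizon=1.0, seed=1):
    """Fraction of [0, horizon] that a two-state CTMC with generator
    [[-a, a], [b, -b]], sped up by the factor n, spends in state 0."""
    rng = random.Random(seed)
    t, state, time_in_0 = 0.0, 0, 0.0
    while t < horizon:
        rate = n * (a if state == 0 else b)
        hold = min(rng.expovariate(rate), horizon - t)  # truncate at horizon
        if state == 0:
            time_in_0 += hold
        t += hold
        state = 1 - state
    return time_in_0 / horizon

# The stationary distribution is pi = (b, a) / (a + b), so with a = 1, b = 2
# the chain should spend about 2/3 of its time in state 0 for large n.
```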


2021
Author(s):  
Andrea Marin ◽  
Carla Piazza ◽  
Sabina Rossi

In this paper, we deal with the lumpability approach to the state space explosion problem inherent in the computation of the stationary performance indices of large stochastic models. The lumpability method is based on a state aggregation technique and applies to Markov chains exhibiting some structural regularity. Moreover, it allows one to efficiently compute the exact values of the stationary performance indices when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates in such a way that the resulting Markov chain is lumpable. In that case, only upper and lower bounds on the performance indices can be derived. Here, we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, unlike general quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for the terms of the stochastic process algebra PEPA. Proportional bisimilarity induces a proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
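Ordinary lumpability, which proportional lumpability generalizes, is easy to test directly on a generator matrix: every state of a block must have the same aggregate rate into each other block. A small sketch (the four-state generator is a made-up symmetric example, not one of the paper's case studies):

```python
import numpy as np

def is_lumpable(Q, partition, tol=1e-9):
    """Ordinary (strong) lumpability of a CTMC generator Q: for every
    pair of distinct blocks Bi, Bj, the total rate from a state of Bi
    into Bj must be the same for all states of Bi."""
    for Bi in partition:
        for Bj in partition:
            if Bi is Bj:
                continue
            rates = [sum(Q[s][t] for t in Bj) for s in Bi]
            if max(rates) - min(rates) > tol:
                return False
    return True

# States 1 and 2 behave symmetrically, so {{0}, {1, 2}, {3}} is lumpable,
# while the partition {{0, 1}, {2, 3}} is not.
Q = np.array([
    [-2.0,  1.0,  1.0,  0.0],
    [ 1.0, -3.0,  0.0,  2.0],
    [ 1.0,  0.0, -3.0,  2.0],
    [ 0.0,  1.0,  1.0, -2.0],
])
```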


2021
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß ◽  
Manuel Schmitt ◽  
Rolf Wanka

Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain "easy" reference problems in a black-box setting, namely the sorting problem and the problem OneMax. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, that for other parameter regimes the algorithm behaves less greedily and more exploratively, which can be desirable in practice in order to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
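OneMax is the standard benchmark in this setting: maximize the number of ones in a length-n bit string, counting fitness evaluations in a black-box fashion. A minimal randomized local search (a stand-in illustration of the benchmark, not the authors' discrete PSO):

```python
import random

def onemax(x):
    return sum(x)  # fitness: number of ones in the bit string

def random_local_search(n, seed=0, max_evals=200000):
    """Flip one uniformly random bit per step and keep the flip if the
    fitness does not decrease; return the final string and the number
    of fitness evaluations used."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 1
    while onemax(x) < n and evals < max_evals:
        y = x[:]
        y[rng.randrange(n)] ^= 1
        evals += 1
        if onemax(y) >= onemax(x):
            x = y
    return x, evals
```

For single-bit-flip search of this kind, the expected optimization time on OneMax is of order n log n (a coupon-collector argument); analyses like the one in the abstract compare a meta-heuristic's expected optimization time against such reference figures.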

