Dynamic performance evaluation of multi-state systems under non-homogeneous continuous time Markov process degradation using lifetimes in terms of order statistics

Author(s):  
Funda Iscioglu

In multi-state modelling, a system and its components have a range of performance levels from perfect functioning to complete failure. Such modelling offers more flexibility for understanding the behaviour of mechanical systems. To evaluate a system's dynamic performance, lifetime analysis of multi-state systems has been considered in many research articles. Order-statistics-based analysis of the lifetime properties of multi-state k-out-of-n systems has recently been studied in the literature under the assumption of a homogeneous continuous time Markov process. In this paper, we develop reliability measures for multi-state k-out-of-n systems by assuming a non-homogeneous continuous time Markov process for the components, which allows time-dependent transition rates between the states of the components. We thereby capture the effect of age on component state changes, which is typical of many systems and more practical for real-life applications.
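As a rough illustration of the mechanism described above, the following sketch simulates component degradation under a non-homogeneous continuous-time Markov process with an age-increasing downward rate (sampled by thinning) and reads the k-out-of-n system lifetime off as an order statistic of the component failure times. The three-level state space, the rate function, and all names are illustrative assumptions, not taken from the paper.

```python
import random

def sample_degradation_epochs(rate, t_max, levels=2, rate_bound=None):
    # Non-homogeneous degradation via thinning: propose events at a
    # dominating constant rate, accept a proposal at age t w.p. rate(t)/bound.
    if rate_bound is None:
        rate_bound = max(rate(0.0), rate(t_max))  # rate assumed monotone
    t, level, epochs = 0.0, levels, []
    while level > 0 and t < t_max:
        t += random.expovariate(rate_bound)
        if t < t_max and random.random() < rate(t) / rate_bound:
            level -= 1                 # degrade one performance level
            epochs.append(t)
    return epochs                      # times of accepted degradations

def k_out_of_n_lifetime(n, k, rate, t_max):
    # The system works while at least k of the n components are above
    # complete failure (state 0); with two levels a component fails at
    # its second accepted degradation epoch.
    fail_times = []
    for _ in range(n):
        e = sample_degradation_epochs(rate, t_max)
        fail_times.append(e[-1] if len(e) == 2 else t_max)  # censored at t_max
    fail_times.sort()
    return fail_times[n - k]           # the (n-k+1)-th order statistic

random.seed(7)
aging = lambda t: 0.4 + 0.1 * t        # hypothetical increasing hazard
lifetime = k_out_of_n_lifetime(n=5, k=3, rate=aging, t_max=100.0)
```

The time-dependent rate is what distinguishes this from the homogeneous case: the acceptance probability in the thinning step grows with component age.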

2011 ◽  
Vol 48 (02) ◽  
pp. 322-332 ◽  
Author(s):  
Amine Asselah ◽  
Pablo A. Ferrari ◽  
Pablo Groisman

Consider a continuous-time Markov process with transition rates matrix Q in the state space Λ ⋃ {0}. In the associated Fleming-Viot process N particles evolve independently in Λ with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to one of the positions of the other particles, chosen uniformly at random. When Λ is finite, we show that the empirical distribution of the particles at a fixed time converges as N → ∞ to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming-Viot process with N particles converges as N → ∞ to the unique quasistationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1/N.
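A minimal simulation of the particle system described above may help fix ideas: N independent copies of the chain evolve, and an attempted jump to the absorbing state is replaced by copying the position of another, uniformly chosen particle. The state encoding, rates, and function names below are assumptions for illustration, not taken from the paper.

```python
import random

def fleming_viot(Q, kill, N, t_end, init, seed=0):
    # Q[i][j]: transition rates inside Lambda (Q[i][i] = 0 here);
    # kill[i]: rate of attempting the jump to the absorbing state 0.
    rng = random.Random(seed)
    S = len(Q)
    out_rate = [sum(Q[i]) + kill[i] for i in range(S)]
    particles = [init] * N
    t = 0.0
    while True:
        total = sum(out_rate[s] for s in particles)
        t += rng.expovariate(total)
        if t > t_end:
            return particles
        # select the moving particle proportionally to its exit rate
        r = rng.random() * total
        for idx in range(N):
            r -= out_rate[particles[idx]]
            if r <= 0:
                break
        s = particles[idx]
        u = rng.random() * out_rate[s]
        if u < kill[s]:
            # attempted jump to 0: relocate onto the position of one of
            # the other N-1 particles, chosen uniformly at random
            j = rng.randrange(N - 1)
            particles[idx] = particles[j if j < idx else j + 1]
        else:
            u -= kill[s]
            for dest in range(S):
                u -= Q[s][dest]
                if u <= 0:
                    break
            particles[idx] = dest

# toy Lambda = {a, b} encoded as 0, 1; killing only from the first state
Q = [[0.0, 1.0], [1.0, 0.0]]
kill = [0.5, 0.0]
cloud = fleming_viot(Q, kill, N=200, t_end=5.0, init=0)
empirical = cloud.count(1) / len(cloud)
```

As N grows, `empirical` should approximate the law of a single particle conditioned on survival, per the convergence result stated above.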


Author(s):  
M. V. Noskov ◽  
M. V. Somova ◽  
I. M. Fedotova

The article proposes a model for forecasting students' learning success. The model is a continuous-time Markov process of the birth-death ("death and reproduction") type. The intensities of receiving and assimilating information serve as the parameters of the process, with the assimilation intensity accounting for the student's attitude towards the subject being studied. Applying the model makes it possible to determine, for each student, the probability of mastering the studied material in the near future. Thus, given an automated university information system, the model can serve as an element of a decision-support system for all participants in the educational process. The examples in the article come from an experiment conducted at the Institute of Space and Information Technologies of Siberian Federal University under blended-learning conditions, that is, where classroom work is accompanied by independent work with electronic resources.
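The "death and reproduction" dynamics can be sketched numerically: a birth-death chain on knowledge levels, with an upward (assimilation) intensity and a downward (forgetting) intensity, whose transient distribution gives each level's probability at time t. The level count and intensities below are hypothetical, and the forward equations are integrated by a plain Euler step rather than any method from the article.

```python
def level_distribution(levels, lam, mu, t, dt=1e-3):
    # Birth-death chain on knowledge levels 0..levels:
    # lam = intensity of assimilating information (level up),
    # mu  = intensity of forgetting (level down).
    # Forward-Euler integration of the Kolmogorov forward equations p' = pQ.
    n = levels + 1
    p = [1.0] + [0.0] * (n - 1)        # every student starts at level 0
    for _ in range(int(t / dt)):
        new = p[:]
        for i in range(n):
            up = lam if i + 1 < n else 0.0
            down = mu if i > 0 else 0.0
            new[i] -= dt * (up + down) * p[i]
            if up:
                new[i + 1] += dt * up * p[i]
            if down:
                new[i - 1] += dt * down * p[i]
        p = new
    return p

dist = level_distribution(levels=4, lam=1.2, mu=0.4, t=3.0)
mastery = dist[-1]   # probability of having reached the top level by t
```

A decision-support system could flag students whose `mastery` probability stays below a threshold as the course progresses.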


Author(s):  
Leonid Petrov ◽  
Axel Saenz

We obtain a new relation between the distributions μ_t at different times t ≥ 0 of the continuous-time totally asymmetric simple exclusion process (TASEP) started from the step initial configuration. Namely, we present a continuous-time Markov process with local interactions and particle-dependent rates which maps the TASEP distributions μ_t backwards in time. Under the backwards process, particles jump to the left, and the dynamics can be viewed as a version of the discrete-space Hammersley process. Combined with the forward TASEP evolution, this leads to a stationary Markov dynamics preserving μ_t which in turn brings new identities for expectations with respect to μ_t. The construction of the backwards dynamics is based on Markov maps interchanging parameters of Schur processes, and is motivated by bijectivizations of the Yang–Baxter equation. We also present a number of corollaries, extensions, and open questions arising from our constructions.
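For intuition, here is a minimal sketch of the forward TASEP dynamics from the step initial condition (the backwards Hammersley-type map of the paper is not reproduced): particles occupy sites 0, −1, −2, …, and each jumps one site to the right at rate 1 when the site ahead is empty. All parameters and names are illustrative.

```python
import random

def tasep_step_ic(n, t_end, seed=42):
    # Continuous-time TASEP from the step initial condition with n
    # tracked particles; exclusion forbids jumping onto an occupied site.
    rng = random.Random(seed)
    pos = list(range(0, -n, -1))       # pos[0] is the rightmost particle
    t = 0.0
    while True:
        # particles free to jump: the rightmost always, others when
        # the gap to the particle ahead exceeds one site
        movable = [i for i in range(n)
                   if i == 0 or pos[i - 1] - pos[i] > 1]
        t += rng.expovariate(len(movable))
        if t > t_end:
            return pos
        pos[rng.choice(movable)] += 1

pos = tasep_step_ic(n=30, t_end=4.0)
```

Running this forward and then an independent backwards map would, per the result above, leave the time-t distribution invariant; here only the forward half is sketched.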


2002 ◽  
Vol 43 (4) ◽  
pp. 541-557 ◽  
Author(s):  
Xianping Guo ◽  
Weiping Zhu

In this paper, we consider denumerable-state continuous-time Markov decision processes with (possibly unbounded) transition and cost rates under the average cost criterion. We present a set of conditions and prove the existence of both average cost optimal stationary policies and a solution of the average optimality equation under these conditions. The results are applied to an admission control queue model and to controlled birth and death processes.
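The admission-control application can be illustrated with a standard uniformization-plus-relative-value-iteration sketch for a truncated queue: on each arrival the controller admits or rejects at a lump cost, and holding cost accrues per customer per unit time. The queue parameters, cost structure, and function name are assumptions for illustration and do not reproduce the paper's (unbounded-rate) conditions.

```python
def admission_avg_cost(B, lam, mu, hold, K, iters=100000, tol=1e-12):
    # Relative value iteration on the uniformized chain of an M/M/1/B
    # admission-control queue: lam = arrival rate, mu = service rate,
    # hold = holding cost per customer per unit time, K = rejection cost.
    # Returns the approximate optimal long-run average cost per unit time.
    C = lam + mu                       # uniformization constant
    h = [0.0] * (B + 1)                # relative value function, h[0] = 0
    g = 0.0
    for _ in range(iters):
        new = []
        for i in range(B + 1):
            if i < B:
                arrival = min(h[i + 1], K + h[i])   # admit vs reject
            else:
                arrival = K + h[i]                  # buffer full: reject
            departure = h[i - 1] if i > 0 else h[0]
            new.append(hold * i / C + (lam * arrival + mu * departure) / C)
        g = C * new[0]                 # per-step gain scaled back to time
        shift = new[0]
        new = [v - shift for v in new]
        if max(abs(a - b) for a, b in zip(new, h)) < tol:
            h = new
            break
        h = new
    return g

avg_cost = admission_avg_cost(B=5, lam=1.0, mu=2.0, hold=1.0, K=3.0)
```

Uniformization turns the continuous-time problem into a discrete-time one at rate C; the paper's contribution is establishing when such optimal stationary policies exist without bounded rates.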


1993 ◽  
Vol 25 (01) ◽  
pp. 82-102
Author(s):  
M. G. Nair ◽  
P. K. Pollett

In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992). In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations’ involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit’ chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents.
The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
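A crude numerical sketch of a quasi-stationary distribution for an absorbing birth-death chain, one of the examples above: iterate the restricted forward equations and renormalise the surviving mass; the normalised profile converges to the QSD, and the mass lost per unit time recovers the decay parameter µ. The rates and the Euler discretisation are illustrative assumptions, not the paper's method.

```python
def birth_death_qsd(birth, death, n, dt=0.01, steps=60000):
    # QSD on C = {1,...,n} for a birth-death chain absorbed at 0:
    # birth(i), death(i) are the rates in state i; state 1's death
    # transition leads to the absorbing state 0 (mass is lost there).
    m = [1.0 / n] * n                  # m[i] = mass on state i+1
    mu = 0.0
    for _ in range(steps):
        new = m[:]
        for i in range(n):
            b = birth(i + 1) if i + 1 < n else 0.0   # reflect at n
            d = death(i + 1)
            new[i] -= dt * (b + d) * m[i]
            if b:
                new[i + 1] += dt * b * m[i]
            if i > 0:
                new[i - 1] += dt * d * m[i]          # i == 0: absorbed
        lost = 1.0 - sum(new)
        mu = lost / dt                 # absorption rate under the profile
        m = [v / (1.0 - lost) for v in new]
    return m, mu

qsd, decay = birth_death_qsd(birth=lambda i: 0.8, death=lambda i: 1.0, n=8)
```

At convergence, `decay` equals death(1) times the QSD mass on state 1, the residual-equation relation between the quasi-stationary profile and the absorption rates.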


Author(s):  
Michel Mandjes ◽  
Birgit Sollie

This paper considers a continuous-time quasi-birth-death (QBD) process, which informally can be seen as a birth-death process whose parameters are modulated by an external continuous-time Markov chain. The aim is to numerically approximate the time-dependent distribution of the resulting bivariate Markov process in an accurate and efficient way. An approach based on the Erlangization principle is proposed and formally justified. Its performance is investigated and compared with two existing approaches: one based on numerical evaluation of the matrix exponential underlying the QBD process, and one based on the uniformization technique. It is shown that in many settings the approach based on Erlangization is faster than the other approaches, while still being highly accurate. In the last part of the paper, we demonstrate the use of the developed technique in the context of the evaluation of the likelihood pertaining to a time series, which can then be optimized over its parameters to obtain the maximum likelihood estimator. More specifically, through a series of examples with simulated and real-life data, we show how it can be deployed in model selection problems that involve the choice between a QBD and its non-modulated counterpart.
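The uniformization technique mentioned above, for a finite-state chain, can be sketched as follows: with C at least the maximal exit rate, P = I + Q/C is a stochastic matrix and p(t) = Σ_k e^{−Ct}(Ct)^k/k! · p0 P^k, truncated once the Poisson weights have accumulated essentially all their mass. The toy generator is an assumption for illustration.

```python
import math

def transient_distribution(Q, p0, t, eps=1e-12):
    # p(t) = p0 * exp(Qt) for a finite CTMC via uniformization.
    n = len(Q)
    C = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / C for j in range(n)]
         for i in range(n)]
    w = math.exp(-C * t)               # Poisson(Ct) weight for k = 0
    v = p0[:]                          # running p0 * P^k
    out = [w * x for x in v]
    acc, k = w, 0
    kmax = int(10 * (C * t + 10))      # safety cap against underflow
    while acc < 1.0 - eps and k < kmax:
        k += 1
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        w *= C * t / k
        acc += w
        out = [o + w * x for o, x in zip(out, v)]
    return out

Q = [[-1.0, 1.0], [2.0, -2.0]]         # toy two-state generator
p = transient_distribution(Q, [1.0, 0.0], t=10.0)
```

By t = 10 this toy chain has essentially reached its stationary distribution (2/3, 1/3); Erlangization, as compared in the paper, replaces the deterministic horizon t with an Erlang-distributed one.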


1967 ◽  
Vol 4 (2) ◽  
pp. 402-405 ◽  
Author(s):  
H. D. Miller

Let X(t) be the position at time t of a particle undergoing a simple symmetrical random walk in continuous time, i.e. the particle starts at the origin at time t = 0 and at times T1, T1 + T2, … it undergoes jumps ξ1, ξ2, …, where the time intervals T1, T2, … between successive jumps are mutually independent random variables each following the exponential density e^{−t}, while the jumps, which are independent of the Ti, are mutually independent random variables with the distribution P(ξi = +1) = P(ξi = −1) = 1/2. The process X(t) is clearly a Markov process whose state space is the set of all integers.
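The process can be simulated directly as a compound Poisson walk; the sketch below takes, as "simple symmetrical" suggests, jumps of ±1 with probability 1/2 each, so that E X(t) = 0 and Var X(t) = t.

```python
import random

def walk_position(t, rng):
    # X(t): start at 0; wait Exp(1) holding times (density e^{-t}),
    # then jump +1 or -1 with probability 1/2 each.
    x, s = 0, 0.0
    while True:
        s += rng.expovariate(1.0)
        if s > t:
            return x
        x += rng.choice((-1, 1))

rng = random.Random(3)
samples = [walk_position(4.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# Monte Carlo check: mean near 0, variance near t = 4
```

The variance matches t because Var X(t) = E[N(t)] · Var(ξ) = t · 1 for a unit-rate Poisson number of ±1 jumps.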

