Sufficient optimality conditions for control-limit policy in a semi-Markov process

1976 ◽  
Vol 13 (2) ◽  
pp. 400-406 ◽  
Author(s):  
I. Gertsbach

A finite-state semi-Markov process (SMP) with penalties is considered. A property similar to the increasing-hazard-rate property of a Markov chain is defined for an SMP. The SMP is controlled by shifts out of the state Ei, performed immediately after a transition has occurred. Conditions are given which guarantee that the optimal stationary Markovian policy belongs to a subclass of control-limit policies.
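A control-limit policy of the kind referred to above can be sketched concretely. The sketch below is a hypothetical illustration (the threshold k, the state count, and the action names are assumptions, not the paper's notation): the policy intervenes exactly when the state index reaches the threshold.

```python
# Hypothetical sketch of a control-limit policy: intervene ("shift")
# exactly when the state index reaches a threshold k.
def control_limit_policy(k):
    """Stationary policy: act iff the current state index is >= k."""
    return lambda i: "shift" if i >= k else "continue"

n_states = 5
policy = control_limit_policy(3)
actions = [policy(i) for i in range(n_states)]

# The subclass is small: one policy per threshold k = 0..n_states,
# versus 2**n_states deterministic stationary policies overall.
n_control_limit = n_states + 1
```

Restricting the search to this small subclass is what makes sufficient optimality conditions of this type useful in practice.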


1976 ◽  
Vol 13 (4) ◽  
pp. 696-706 ◽  
Author(s):  
David Burman

Particles enter a finite-state system and move according to independent sample paths from a semi-Markov process. Strong limit theorems are developed for the ratio of the flow of particles from state i to state j to the flow out of state i. When the cumulative arrival of particles into the system up to time t satisfies A(t) ∼ λt^α, the ratio converges almost surely. When A(t) ∼ λe^{kt}, the flow between states must be normalized by the Laplace–Stieltjes transform of the conditional holding-time distribution in order to make the ratio an unbiased estimator of ρij.
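The flow-ratio estimator described above can be illustrated by simulation. The chain below is a hypothetical three-state example (the matrix P and the step count are arbitrary assumptions): counting the transitions from state i to state j and dividing by the total flow out of i recovers the embedded transition probability.

```python
import random

random.seed(0)

# Hypothetical three-state embedded chain (P is an arbitrary choice).
P = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.2, 0.8, 0.0]]

def step(i):
    """Sample the next state from row i of P."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[i]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1

# Count the flow N_ij between every pair of states along one long path.
flow = [[0] * 3 for _ in range(3)]
state = 0
for _ in range(200_000):
    nxt = step(state)
    flow[state][nxt] += 1
    state = nxt

# Ratio of the flow 0 -> 1 to the total flow out of 0: estimates P[0][1].
ratio_01 = flow[0][1] / sum(flow[0])
```

With a polynomially growing particle inflow this ratio converges to the transition probability, which is the regime of the paper's first limit theorem.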


2012 ◽  
Vol 24 (1) ◽  
pp. 49-58 ◽  
Author(s):  
Jerzy Girtler

Abstract The paper provides justification for the necessity to define the reliability of diagnosing systems (SDG) in order to develop a diagnosis of the state of any technical mechanism being a diagnosed system (SDN). It is shown that knowledge of SDG reliability makes it possible to define the reliability of a diagnosis. Diagnosis reliability is taken to be the property which specifies the degree to which a diagnosing system (SDG) recognizes the actual state of the diagnosed system (SDN), which may be any mechanism. Its measure is the conditional probability p(S*/K*) that state S* of the mechanism (SDN) exists, given that, at a specified SDG reliability, the vector K* of diagnostic-parameter values implied by that state is observed. The measure of SDG reliability is the probability that the SDG is in a state of ability during diagnostic tests and during the subsequent diagnostic inference leading to a diagnosis of the SDN state. The theory of semi-Markov processes is used to define SDG reliability, which enabled the development of an SDG reliability model in the form of a seven-state (continuous-time, discrete-state) semi-Markov process of changes of SDG states.
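As a rough illustration of the semi-Markov machinery invoked above (not the paper's seven-state SDG model), one can simulate a small hypothetical up/degraded/down system, where each state has its own holding-time distribution, and estimate the long-run fraction of time spent in the "up" state. All transition probabilities and mean holding times below are assumptions.

```python
import random

random.seed(1)

# Hypothetical three-state semi-Markov model: an embedded jump chain P
# plus a state-dependent random holding time (exponential for brevity).
STATES = ["up", "degraded", "down"]
P = {"up":       {"degraded": 0.8, "down": 0.2},
     "degraded": {"up": 0.6, "down": 0.4},
     "down":     {"up": 1.0}}
MEAN_HOLD = {"up": 10.0, "degraded": 3.0, "down": 1.0}

def jump(s):
    """Sample the next state of the embedded chain from row P[s]."""
    u, acc = random.random(), 0.0
    for t, p in P[s].items():
        acc += p
        if u < acc:
            return t
    return t

time_in = {s: 0.0 for s in STATES}
state = "up"
for _ in range(100_000):
    hold = random.expovariate(1.0 / MEAN_HOLD[state])  # holding time in state
    time_in[state] += hold
    state = jump(state)

# Long-run fraction of time in "up": a simple availability measure.
availability = time_in["up"] / sum(time_in.values())
```

In the reliability setting of the paper, an analogous long-run probability of the SDG being in a state of ability plays the role of this availability figure.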


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.
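The departure epochs at which such a process is imbedded can be generated with the standard FIFO recursion D_n = max(A_n, D_{n-1}) + S_n, where A_n is the n-th arrival epoch and S_n its service time. The sketch below uses arbitrary interarrival and service distributions purely for illustration.

```python
import random

random.seed(2)

def departures(interarrivals, services):
    """FIFO GI/G/1 departure epochs: D_n = max(A_n, D_{n-1}) + S_n."""
    times, arrival, last_dep = [], 0.0, 0.0
    for a, s in zip(interarrivals, services):
        arrival += a                      # A_n: cumulative arrival epoch
        last_dep = max(arrival, last_dep) + s
        times.append(last_dep)
    return times

# Arbitrary illustrative distributions (uniform interarrivals,
# exponential services), not tied to any particular GI/G/1 instance.
n = 5
inter = [random.uniform(0.5, 1.5) for _ in range(n)]
serv = [random.expovariate(1.2) for _ in range(n)]
deps = departures(inter, serv)
```

The interdeparture intervals produced this way are what the paper analyzes as a semi-Markov process on a two-dimensional state space.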


2002 ◽  
Vol 10 (4) ◽  
pp. 337-357 ◽  
Author(s):  
SEUNGCHAN KIM ◽  
HUAI LI ◽  
EDWARD R. DOUGHERTY ◽  
NANWEI CAO ◽  
YIDONG CHEN ◽  
...  

A fundamental question in biology is whether the network of interactions that regulate gene expression can be modeled by existing mathematical techniques. Studies of the ability to predict a gene's state based on the states of other genes suggest that it may be possible to abstract sufficient information to build models of the system that retain the steady-state behavioral characteristics of the real system. This study tests this possibility by: (i) constructing a finite-state homogeneous Markov chain model using a small set of interesting genes; (ii) estimating the model parameters from the observed experimental data; (iii) exploring the dynamics of this small genetic regulatory network by analyzing its steady-state (long-run) behavior and comparing the resulting model behavior to the observed behavior of the original system. The data used in this study are from a survey of melanoma in which predictive relationships (coefficient of determination, CoD) between 587 genes from 31 samples were examined. Ten genes with strong interactive connectivity were chosen to formulate a finite-state Markov chain, on the basis of their role as drivers in the acquisition of an invasive phenotype in melanoma cells. Simulations with different perturbation probabilities and different iteration times were run. Following convergence of the chain to steady-state behavior, millions of samples of the results of further transitions were collected to estimate the steady-state distribution of the network. In these samples, only a limited number of states possessed significant probability of occurrence. This behavior is nicely congruent with biological behavior, as cells appear to occupy only a negligible portion of the state space available to them. The model produced both some of the exact state vectors observed in the data and a number of state vectors that were near neighbors of the state vectors from the original data.
By combining these similar states, a good representation of the observed states in the original data could be achieved. From this study, we find that, in this limited context, Markov chain simulation emulates well the dynamic behavior of a small regulatory network.
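The simulation procedure described above can be sketched at toy scale. The three-gene Boolean network below is entirely hypothetical (the update rules and perturbation probability are assumptions, not the melanoma data), but it reproduces the qualitative finding that only a few states carry significant steady-state mass.

```python
import random

random.seed(3)

# Hypothetical three-gene Boolean network with random perturbation:
# with probability P_PERTURB a random gene flips; otherwise the
# deterministic update rules fire.
P_PERTURB = 0.01

def update(state):
    a, b, c = state
    return (b and not c, a, a or b)   # arbitrary illustrative rules

def step(state):
    if random.random() < P_PERTURB:
        i = random.randrange(3)       # random gene flip
        s = list(state)
        s[i] = not s[i]
        return tuple(s)
    return update(state)

# Run the chain, discard a burn-in, and count visits to each state.
counts = {}
state = (True, False, False)
for t in range(50_000):
    state = step(state)
    if t >= 5_000:
        counts[state] = counts.get(state, 0) + 1

top = max(counts, key=counts.get)
mass_top = counts[top] / sum(counts.values())
```

In this toy chain nearly all the steady-state mass concentrates on a single attractor state, mirroring on a small scale the paper's observation that cells occupy only a tiny fraction of the available state space.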


2004 ◽  
Vol 36 (4) ◽  
pp. 1198-1211 ◽  
Author(s):  
James Ledoux

Let (φ(Xn))n be a function of a finite-state Markov chain (Xn)n. In this article, we investigate the conditions under which the random variables φ(Xn) have the same distribution as Yn (for every n), where (Yn)n is a Markov chain with fixed transition probability matrix. In other words, for a deterministic function φ, we investigate the conditions under which (Xn)n is weakly lumpable for the state vector. We show that the set of all probability distributions of X0 such that (Xn)n is weakly lumpable for the state vector can be finitely generated. The connections between our definition of lumpability and the usual one (i.e. as the proportional dynamics property) are discussed.
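Weak lumpability as studied here depends on the initial distribution and is delicate to test. The sketch below instead checks the simpler strong (ordinary) lumpability condition, a sufficient condition under which every state in a block sends the same total probability into each block; the matrix and partition are hypothetical.

```python
# Strong (ordinary) lumpability check: a partition of the state space is
# strongly lumpable iff, within each block, every state sends the same
# total probability into every block.
def is_strongly_lumpable(P, blocks):
    for block in blocks:
        for target in blocks:
            masses = [sum(P[i][j] for j in target) for i in block]
            if max(masses) - min(masses) > 1e-12:
                return False
    return True

# Hypothetical example: states 0 and 1 behave identically toward the
# blocks {0, 1} and {2}, so that partition lumps; {0} / {1, 2} does not.
P = [[0.1, 0.3, 0.6],
     [0.2, 0.2, 0.6],
     [0.5, 0.5, 0.0]]
ok = is_strongly_lumpable(P, [(0, 1), (2,)])
bad = is_strongly_lumpable(P, [(0,), (1, 2)])
```

Strong lumpability makes the lumped process Markov for every initial distribution; the paper's weak notion asks only that this hold for some set of initial distributions, which is why that set becomes the object of study.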


10.26524/cm67 ◽  
2020 ◽  
Vol 4 (1) ◽  
Author(s):  
Syed Tahir Hussainy ◽  
Mohamed Ali A ◽  
Ravi kumar S

We derive three equivalent sufficient conditions for association in time of a finite-state semi-Markov process in terms of transition probabilities and crude hazard rates. This result generalizes the earlier results of Esary and Proschan (1970) for a binary Markov process and of Hjort, Natvig and Funnemark (1985) for a multistate Markov process.


1976 ◽  
Vol 13 (1) ◽  
pp. 108-117 ◽  
Author(s):  
Richard M. Feldman

Consider a system that is subject to a sequence of randomly occurring shocks; each shock causes some damage of random magnitude to the system. Any of the shocks might cause the system to fail, and the probability of such a failure is a function of the sum of the magnitudes of damage caused by all previous shocks. The purpose of this paper is to derive the optimal replacement rule for such a system whose cumulative damage process is a semi-Markov process. This allows both the time between shocks and the damage due to the next shock to depend on the present cumulative damage level. Only policies within the class of control-limit policies will be considered, namely policies under which no action is taken if the damage is below a fixed level and a replacement is made if the damage is above that level. An example is given illustrating the use of the optimal replacement rule.
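A control-limit replacement rule of this kind is straightforward to simulate. The sketch below uses entirely hypothetical distributions and costs (exponential shock times and magnitudes, a failure probability proportional to cumulative damage): damage accumulates shock by shock, and the system is replaced once the damage exceeds the limit L.

```python
import random

random.seed(4)

# Hypothetical cumulative-damage shock model; all distributions and
# costs are illustrative assumptions, not the paper's.
COST_REPLACE, COST_FAIL = 1.0, 5.0

def cost_rate(L, n_shocks=100_000):
    """Estimate the long-run cost per unit time under control limit L."""
    damage, total_cost, total_time = 0.0, 0.0, 0.0
    for _ in range(n_shocks):
        total_time += random.expovariate(1.0)   # waiting time to next shock
        damage += random.expovariate(2.0)       # shock magnitude, mean 0.5
        if random.random() < min(1.0, damage / 10.0):  # risk grows with damage
            total_cost += COST_FAIL             # failure: costly replacement
            damage = 0.0
        elif damage >= L:                       # control-limit rule fires
            total_cost += COST_REPLACE          # planned replacement
            damage = 0.0
    return total_cost / total_time

rate = cost_rate(3.0)
```

Sweeping `cost_rate` over candidate values of L and taking the minimizer is the brute-force counterpart of the optimal control limit the paper derives analytically.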

