LZ-Transform and Inverse LZ-Transform of a Discrete-State Continuous-Time Markov Process

Author(s):  
Anatoly Lisnianski ◽  
Ilia Frenkel ◽  
Lev Khvatskin
2012 ◽  
Vol 24 (1) ◽  
pp. 49-58 ◽  
Author(s):  
Jerzy Girtler

Abstract: The paper provides justification for the necessity of defining the reliability of diagnosing systems (SDG) in order to develop a diagnosis of the state of any technical mechanism acting as a diagnosed system (SDN). It is shown that knowledge of SDG reliability makes it possible to define diagnosis reliability. Diagnosis reliability is taken to be a property of a diagnosis that specifies the degree to which the diagnosing system (SDG) recognizes the actual state of the diagnosed system (SDN), which may be any mechanism; the conditional probability p(S*/K*) that state S* of the mechanism (SDN) occurs (exists), given that the vector K* of diagnostic-parameter values implied by that state is observed at a specified SDG reliability, is adopted as the diagnosis measure. The probability that the SDG is in the state of ability during the diagnostic tests and the subsequent diagnostic inference leading to a diagnosis of the SDN state is accepted as the measure of SDG reliability. The theory of semi-Markov processes is used to define SDG reliability, which made it possible to develop an SDG reliability model in the form of a seven-state (continuous-time, discrete-state) semi-Markov process of changes of SDG states.
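
The article's seven-state model is not reproduced in this summary. As a rough illustration of the semi-Markov machinery it relies on, the following Python sketch simulates a hypothetical three-state semi-Markov process (state 0 = SDG in the state of ability, states 1 and 2 = inability states; all transition probabilities and sojourn distributions are invented for the example) and estimates by Monte Carlo the probability that the SDG is able at a given time:

import random

# Hypothetical 3-state semi-Markov sketch (NOT the paper's 7-state model).
# State 0 = SDG in the state of ability, states 1 and 2 = inability states.
P = [                    # embedded-chain transition probabilities P[i][j]
    [0.0, 0.7, 0.3],
    [0.9, 0.0, 0.1],
    [1.0, 0.0, 0.0],
]

def sojourn(state, rng):
    # Non-exponential sojourn times are what make the process semi-Markov
    # rather than an ordinary continuous-time Markov chain.
    if state == 0:
        return rng.expovariate(1.0 / 100.0)   # mean 100 h in the able state
    return rng.weibullvariate(10.0, 1.5)      # assumed Weibull repair time

def state_at(t_query, rng):
    """Simulate one trajectory and return the state occupied at t_query."""
    state, t = 0, 0.0
    while True:
        s = sojourn(state, rng)
        if t + s > t_query:
            return state
        t += s
        state = rng.choices(range(3), weights=P[state])[0]

rng = random.Random(1)
n = 20000
able = sum(state_at(50.0, rng) == 0 for _ in range(n)) / n
print(f"P(SDG able at t = 50 h) ~ {able:.3f}")

In an application, the probability of occupying the ability state over the duration of the diagnostic tests would play the role of the SDG reliability measure described above.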


1979 ◽  
Vol 11 (2) ◽  
pp. 397-421 ◽  
Author(s):  
M. Yadin ◽  
R. Syski

The matrix of intensities of a Markov process with discrete state space and continuous time parameter undergoes random changes in time in such a way that it stays constant between random instants. The resulting non-Markovian process is analyzed with the help of a supplementary process defined in terms of the variations of the intensity matrix. Several examples are presented.
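
The supplementary-process analysis itself is not reproduced here; the Python sketch below merely simulates the kind of object described, a three-state chain whose intensity matrix (both matrices and the switching rate are made up for the example) stays constant between random exponential switching instants:

import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 3x3 intensity (generator) matrices; the environment
# switches between them at exponential instants of rate SWITCH_RATE.
Q = [np.array([[-1.0,  0.7,  0.3],
               [ 0.4, -0.9,  0.5],
               [ 0.2,  0.3, -0.5]]),
     np.array([[-4.0,  3.0,  1.0],
               [ 1.0, -2.0,  1.0],
               [ 0.5,  0.5, -1.0]])]
SWITCH_RATE = 0.2

def simulate(t_end):
    """Return the state at t_end; the intensity matrix stays constant
    between the random switching instants of the environment."""
    t, state, env = 0.0, 0, 0
    next_switch = rng.exponential(1.0 / SWITCH_RATE)
    while True:
        rate = -Q[env][state, state]
        jump = t + rng.exponential(1.0 / rate)      # next jump of the chain
        if min(jump, next_switch) >= t_end:
            return state                            # nothing happens before t_end
        if next_switch < jump:                      # environment changes first;
            t = next_switch                         # memorylessness lets us simply
            env = 1 - env                           # resample the holding time
            next_switch = t + rng.exponential(1.0 / SWITCH_RATE)
            continue
        t = jump
        probs = Q[env][state].copy()
        probs[state] = 0.0
        state = rng.choice(3, p=probs / probs.sum())

samples = [simulate(50.0) for _ in range(5000)]
print("empirical state distribution at t = 50:",
      [round(samples.count(s) / len(samples), 3) for s in range(3)])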


Author(s):  
M. V. Noskov ◽  
M. V. Somova ◽  
I. M. Fedotova

The article proposes a model for forecasting the success of a student's learning. The model is a continuous-time Markov process of the "death and reproduction" (birth-death) type. The intensities of the processes of obtaining and assimilating information serve as the parameters of the process, and the intensity of the assimilation process takes into account the student's attitude to the subject being studied. By applying the model, one can determine, for each student, the probability of reaching a given level of mastery of the studied material in the near future. Thus, given an automated information system of the university, the implementation of the model becomes an element of a decision support system for all participants in the educational process. The examples given in the article are results of an experiment conducted at the Institute of Space and Information Technologies of Siberian Federal University under conditions of blended learning, that is, when classroom work is accompanied by independent work with electronic resources.
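
A minimal numeric illustration of such a forecast, with invented intensities rather than the intensities estimated in the article, is the birth-death sketch below: it builds the generator of a "death and reproduction" process over mastery levels 0..N, solves the Kolmogorov forward equations p(t) = p(0)·exp(Qt), and reports the probability of reaching a given mastery level by the forecast horizon:

import numpy as np
from scipy.linalg import expm

# Illustrative birth-death ("death and reproduction") model over mastery
# levels 0..N.  lam = intensity of obtaining/assimilating information,
# mu = intensity of losing it; both values are invented for the example.
N, lam, mu = 5, 0.8, 0.3

Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
    Q[i, i] = -Q[i].sum()        # rows of a generator sum to zero

p0 = np.zeros(N + 1)
p0[0] = 1.0                      # the student starts with no mastery
t = 4.0                          # forecast horizon (e.g. weeks)
pt = p0 @ expm(Q * t)            # forward solution p(t) = p(0) exp(Qt)

target = 3                       # required mastery level
print(f"P(mastery level >= {target} at t = {t}) = {pt[target:].sum():.3f}")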


Author(s):  
Leonid Petrov ◽  
Axel Saenz

Abstract: We obtain a new relation between the distributions μ_t at different times t ≥ 0 of the continuous-time totally asymmetric simple exclusion process (TASEP) started from the step initial configuration. Namely, we present a continuous-time Markov process with local interactions and particle-dependent rates which maps the TASEP distributions μ_t backwards in time. Under the backwards process, particles jump to the left, and the dynamics can be viewed as a version of the discrete-space Hammersley process. Combined with the forward TASEP evolution, this leads to a stationary Markov dynamics preserving μ_t which in turn brings new identities for expectations with respect to μ_t. The construction of the backwards dynamics is based on Markov maps interchanging parameters of Schur processes, and is motivated by bijectivizations of the Yang–Baxter equation. We also present a number of corollaries, extensions, and open questions arising from our constructions.
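
The backwards dynamics and the Schur-process construction are not reproduced here, but the forward object of study is easy to sample. The sketch below (assumed conventions: particles start at sites 0, −1, −2, …, and each jumps one step to the right at rate 1 when the target site is empty) simulates continuous-time TASEP from the step initial configuration:

import random

def tasep_step_ic(n_particles, t_end, rng):
    """Continuous-time TASEP started from the step initial configuration:
    particles at sites 0, -1, -2, ...; each jumps right at rate 1 if the
    target site is empty.  Returns the particle positions at t_end."""
    pos = [-i for i in range(n_particles)]     # pos[0] is the rightmost particle
    t = 0.0
    while True:
        t += rng.expovariate(n_particles)      # superposition of n rate-1 clocks
        if t > t_end:
            return pos
        i = rng.randrange(n_particles)         # the clock that rang, chosen uniformly
        if i == 0 or pos[i] + 1 < pos[i - 1]:  # exclusion rule: blocked attempts do nothing
            pos[i] += 1

rng = random.Random(42)
print(tasep_step_ic(10, 5.0, rng))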


Author(s):  
Funda Iscioglu

In multi-state modelling, a system and its components have a range of performance levels, from perfect functioning to complete failure. Such modelling is more flexible for understanding the behaviour of mechanical systems. To evaluate a system's dynamic performance, lifetime analysis of a multi-state system has been considered in many research articles. Order-statistics-based analysis of the lifetime properties of multi-state k-out-of-n systems has recently been studied in the literature under the assumption of a homogeneous continuous-time Markov process. In this paper, we develop reliability measures for multi-state k-out-of-n systems by assuming a non-homogeneous continuous-time Markov process for the components, which provides time-dependent transition rates between the states of the components. We thereby capture the effect of age on the state changes of the components, which is typical of many systems and more practical for real-life applications.
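
As a rough illustration of the non-homogeneous setting (the rates, the three component states, and the 2-out-of-3 structure are all invented for the example, not taken from the paper), the sketch below integrates the Kolmogorov forward equations for one ageing component with time-dependent degradation rates and then combines independent identical components into a k-out-of-n probability:

import math
import numpy as np

# One 3-state component: 2 = perfect, 1 = degraded, 0 = failed.
# Age-dependent degradation rates make the process non-homogeneous.
def Q(t):
    lam21 = 0.05 + 0.01 * t          # perfect -> degraded, grows with age
    lam10 = 0.02 + 0.01 * t          # degraded -> failed, grows with age
    return np.array([[0.0,    0.0,    0.0],
                     [lam10, -lam10,  0.0],
                     [0.0,    lam21, -lam21]])

def state_probs(t_end, dt=1e-3):
    """Euler integration of the forward equations p'(t) = p(t) Q(t)."""
    p = np.array([0.0, 0.0, 1.0])    # the component starts in the perfect state
    for k in range(int(t_end / dt)):
        p = p + dt * (p @ Q(k * dt))
    return p

p = state_probs(10.0)
q = p[1] + p[2]                      # P(one component is in state >= 1 at t = 10)
n, k = 3, 2                          # 2-out-of-3 system of independent components
sys_prob = sum(math.comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k, n + 1))
print(f"component P(state >= 1) = {q:.3f},  2-out-of-3 system: {sys_prob:.3f}")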


1967 ◽  
Vol 4 (2) ◽  
pp. 402-405 ◽  
Author(s):  
H. D. Miller

Let X(t) be the position at time t of a particle undergoing a simple symmetrical random walk in continuous time, i.e. the particle starts at the origin at time t = 0 and at times T1, T1 + T2, … it undergoes jumps ξ1, ξ2, …, where the time intervals T1, T2, … between successive jumps are mutually independent random variables, each following the exponential density e^(-t), while the jumps, which are independent of the Ti, are mutually independent random variables with the distribution P(ξi = +1) = P(ξi = −1) = 1/2. The process X(t) is clearly a Markov process whose state space is the set of all integers.
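
A direct simulation of this process is immediate; the sketch below assumes the standard simple-symmetric jumps ±1 with probability 1/2 each and checks the compound-Poisson variance Var X(t) = t empirically:

import random

def X(t_end, rng):
    """Position at time t_end of the continuous-time simple symmetric random
    walk: jump epochs form a rate-1 Poisson process (exponential(1) gaps) and
    each jump is +1 or -1 with probability 1/2."""
    t, x = 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > t_end:
            return x
        x += rng.choice((1, -1))

rng = random.Random(0)
t = 20.0
samples = [X(t, rng) for _ in range(20000)]
var = sum(x * x for x in samples) / len(samples)   # mean is 0, so E[X^2] ~ Var
print(f"empirical Var X({t}) ~ {var:.2f}   (theory: {t})")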

