On an approximation made when analysing stochastic processes

1976 ◽  
Vol 13 (4) ◽  
pp. 672-683 ◽  
Author(s):  
Byron J. T. Morgan ◽  
John P. Hinde

We investigate the effect of a particular mode of approximation by means of four examples of its use; in each case the model approximated is a Markov process with discrete states in continuous time.


2017 ◽  
Vol 13 (3) ◽  
pp. 7244-7256
Author(s):  
Miłosława Sokół

The matrices of non-homogeneous Markov processes consist of time-dependent functions whose values at a fixed time form typical intensity matrices. For solving some problems they must be converted into stochastic matrices. A stochastic matrix for a non-homogeneous Markov process consists of time-dependent functions whose values are probabilities and depend on the assumed time period. In this paper formulas for these functions are derived. Although the formula is not simple, it allows proving some theorems for Markov stochastic processes that are well known for homogeneous processes; for non-homogeneous ones the proofs turn out to be shorter.
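A stochastic (transition) matrix can also be obtained numerically from an intensity matrix. The sketch below uses an assumed two-state intensity matrix Q(t) and Euler-integrates the Kolmogorov forward equation dP/du = P·Q(u) over a time period [s, t]; it illustrates the relationship between intensity and stochastic matrices described above, not the paper's closed-form formulas.

```python
import numpy as np

def intensity(t):
    """Assumed two-state intensity matrix Q(t): rows sum to zero,
    off-diagonal entries are non-negative transition rates."""
    a = 1.0 + 0.5 * np.sin(t)   # time-dependent rate 0 -> 1
    b = 0.8                     # rate 1 -> 0
    return np.array([[-a, a],
                     [b, -b]])

def transition_matrix(s, t, steps=2000):
    """Approximate the stochastic matrix P(s, t) by Euler-integrating
    the Kolmogorov forward equation dP/du = P Q(u), with P(s, s) = I."""
    P = np.eye(2)
    h = (t - s) / steps
    for k in range(steps):
        P = P + h * (P @ intensity(s + k * h))
    return P

P = transition_matrix(0.0, 2.0)
print(P)                 # each row is a probability distribution
print(P.sum(axis=1))     # rows sum to 1
```

Because each row of Q(u) sums to zero, the Euler update preserves the row sums of P exactly, so the result stays stochastic up to floating-point error.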


2017 ◽  
Vol 24 (4) ◽  
pp. 57-66 ◽  
Author(s):  
Jerzy Girtler

Abstract The paper presents the specificity of operation of propulsion systems of seagoing ships, which causes the need to control the load on them, especially on their engines, called main engines. The characteristics of the load on the propulsion systems, especially on the main engines as well as on the shaft lines and propellers driven by the engines, along with the process of wear in tribological joints (sliding tribological systems) of the machines, have been described herein. Using examples of typical types of wear (both linear and volumetric) for tribological systems of this sort, an interpretation of their wear states has been provided with regard to wear levels defined as acceptable, unacceptable and catastrophic. The following hypotheses have been formulated: 1) a hypothesis explaining the necessity of considering the loads on the systems under operation as stochastic processes; 2) a hypothesis explaining the possibility of considering these processes as stationary; and 3) a hypothesis explaining why a given technical state of any tribological system can be assumed to depend only on the directly preceding state and to be stochastically independent of the states that existed earlier. Accepting the hypotheses as true, a four-state continuous-time semi-Markov process has been proposed as a model of changes in the condition of a propulsion system (PS) of any ship. The model includes the states most significant for the safety of a ship at sea: s0 - PS ability state; s1 - PS disability state due to damage to the main engine (ME); s2 - PS disability state due to damage to the shaft line (SL); and s3 - PS disability state due to damage to the propeller (P). The probabilities of occurrence (changes) of these states have also been demonstrated.
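For illustration only, such a four-state model can be sketched as a small simulation. The embedded-chain probabilities and sojourn-time distributions below are assumptions, not values from the paper; the non-exponential holding times are precisely what make the process semi-Markov rather than Markov.

```python
import random

# Four states as labelled in the paper: s0 ability; s1, s2, s3 disability
# due to damage to the main engine (ME), shaft line (SL), propeller (P).
# The embedded-chain transition probabilities below are hypothetical.
CHAIN = {
    "s0": [("s1", 0.6), ("s2", 0.25), ("s3", 0.15)],
    "s1": [("s0", 1.0)],   # repair returns the PS to the ability state
    "s2": [("s0", 1.0)],
    "s3": [("s0", 1.0)],
}

def sojourn(state):
    # Non-exponential holding times make this semi-Markov:
    # Weibull uptime in s0, lognormal repair times elsewhere (assumed).
    if state == "s0":
        return random.weibullvariate(500.0, 1.5)
    return random.lognormvariate(3.0, 0.5)

def simulate(horizon):
    t, state, up = 0.0, "s0", 0.0
    while t < horizon:
        d = min(sojourn(state), horizon - t)
        if state == "s0":
            up += d
        t += d
        r, acc = random.random(), 0.0
        for nxt, p in CHAIN[state]:
            acc += p
            if r <= acc:
                state = nxt
                break
    return up / horizon    # empirical availability (fraction of time in s0)

avail = simulate(100000.0)
print(avail)
```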


Author(s):  
D. R. Cox

Abstract The exponential distribution is very important in the theory of stochastic processes with discrete states in continuous time. A. K. Erlang suggested a method of extending to other distributions methods that apply in the first instance only to exponential distributions. His idea is generalized to cover all distributions with rational Laplace transforms; this involves the formal use of complex transition probabilities. Properties of the method are considered.
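Erlang's idea can be illustrated with a minimal sketch: a holding time with an Erlang-k distribution is realised as k fictitious exponential stages in series, so a model whose holding times are non-exponential becomes Markov on an enlarged state space. The parameters below are illustrative.

```python
import random

def erlang_stage_sample(k, rate):
    """Pass a 'customer' through k fictitious exponential stages in
    series; the total elapsed time has an Erlang-k distribution."""
    return sum(random.expovariate(rate) for _ in range(k))

k, rate, n = 3, 1.5, 200000
samples = [erlang_stage_sample(k, rate) for _ in range(n)]
mean = sum(samples) / n
print(mean)   # the theoretical Erlang-k mean is k / rate = 2.0
```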


Author(s):  
D. R. Cox

Abstract Certain stochastic processes with discrete states in continuous time can be converted into Markov processes by the well-known method of including supplementary variables. It is shown that the resulting integro-differential equations simplify considerably when some distributions associated with the process have rational Laplace transforms. The results justify the formal use of complex transition probabilities. Conditions under which it is likely to be possible to obtain a solution for arbitrary distributions are examined, and the results are related briefly to other methods of investigating these processes.
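A minimal sketch of the supplementary-variable idea, under assumed parameters: a sojourn with a non-exponential (here Weibull) distribution is not Markov on its own, but the pair (state, elapsed age) is, because the rate of leaving can be written as a hazard depending on the age.

```python
import random

def hazard(age, shape=2.0, scale=1.0):
    """Weibull hazard rate: the rate of leaving the current state
    depends on the elapsed time ('age') spent in it."""
    return (shape / scale) * (age / scale) ** (shape - 1)

def mean_sojourn(trials=5000, dt=0.005):
    """Simulate sojourns by stepping the supplementary variable 'age'
    and leaving with probability hazard(age) * dt per step."""
    total = 0.0
    for _ in range(trials):
        age = 0.0
        while random.random() > hazard(age) * dt:
            age += dt
        total += age
    return total / trials

m = mean_sojourn()
print(m)   # Weibull(shape=2, scale=1) mean is Gamma(1.5), about 0.886
```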


Author(s):  
M. V. Noskov ◽  
M. V. Somova ◽  
I. M. Fedotova

The article proposes a model for forecasting the success of a student's learning. The model is a continuous-time Markov process of the "death and reproduction" (birth-death) type. The parameters of the process are the intensities of receiving and assimilating information, where the intensity of assimilating information takes into account the student's attitude to the subject being studied. Applying the model makes it possible to determine, for each student, the probability of reaching a given level of mastery of the studied material in the near future. Thus, where a university has an automated information system, the implementation of the model serves as an element of a decision support system for all participants in the educational process. The examples given in the article are the results of an experiment conducted at the Institute of Space and Information Technologies of Siberian Federal University under conditions of blended learning, that is, when classroom work is accompanied by independent work with electronic resources.
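A hedged sketch of such a birth-death chain, with assumed intensities rather than the article's estimated ones: states 0..N represent levels of mastery, and the state probabilities at time t follow from the Kolmogorov forward equation.

```python
import numpy as np

# Assumed parameters: N + 1 mastery levels, lam = intensity of receiving
# information (one level up), mu = intensity of failing to assimilate it
# (one level down). These values are illustrative, not the article's.
N, lam, mu = 5, 0.9, 0.4
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
    Q[i, i] = -Q[i].sum()

def state_probs(t, steps=4000):
    """Euler-integrate the forward equation dp/du = p Q, starting
    from certainty of the lowest mastery level."""
    p = np.zeros(N + 1)
    p[0] = 1.0
    h = t / steps
    for _ in range(steps):
        p = p + h * (p @ Q)
    return p

p = state_probs(10.0)
print(p)   # probability of each mastery level at time t = 10
```

With lam > mu, the distribution drifts toward the higher mastery levels as t grows.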


Author(s):  
Leonid Petrov ◽  
Axel Saenz

Abstract We obtain a new relation between the distributions $\mu_t$ at different times $t \ge 0$ of the continuous-time totally asymmetric simple exclusion process (TASEP) started from the step initial configuration. Namely, we present a continuous-time Markov process with local interactions and particle-dependent rates which maps the TASEP distributions $\mu_t$ backwards in time. Under the backwards process, particles jump to the left, and the dynamics can be viewed as a version of the discrete-space Hammersley process. Combined with the forward TASEP evolution, this leads to a stationary Markov dynamics preserving $\mu_t$, which in turn brings new identities for expectations with respect to $\mu_t$. The construction of the backwards dynamics is based on Markov maps interchanging parameters of Schur processes, and is motivated by bijectivizations of the Yang–Baxter equation. We also present a number of corollaries, extensions, and open questions arising from our constructions.
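The forward TASEP dynamics are straightforward to simulate; the toy sketch below (truncated to finitely many particles, with illustrative parameters) draws a sample configuration at time t from the step initial condition. It does not implement the paper's backwards process.

```python
import random

def tasep_step(n, t_max):
    """Continuous-time TASEP from the (truncated) step configuration:
    particles start at 0, -1, ..., -(n-1); each attempts rightward
    jumps at rate 1, suppressed when the target site is occupied."""
    pos = list(range(0, -n, -1))       # pos[0] is the rightmost particle
    t = 0.0
    while True:
        t += random.expovariate(n)     # next attempted jump in the system
        if t > t_max:
            return pos
        i = random.randrange(n)        # uniformly chosen attempting particle
        if i == 0 or pos[i] + 1 < pos[i - 1]:
            pos[i] += 1                # exclusion: jump only into a hole

pos = tasep_step(50, 10.0)
print(pos[:5])
```

Superposing the n independent rate-1 clocks into one rate-n clock and picking the attempting particle uniformly is an exact construction, not an approximation.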


Author(s):  
Funda Iscioglu

In multi-state modelling, a system and its components have a range of performance levels, from perfect functioning to complete failure. Such modelling is more flexible for understanding the behaviour of mechanical systems. To evaluate a system's dynamic performance, lifetime analysis of multi-state systems has been considered in many research articles. Order-statistics-based analysis of the lifetime properties of multi-state k-out-of-n systems has recently been studied in the literature under the assumption of a homogeneous continuous-time Markov process. In this paper, we develop reliability measures for multi-state k-out-of-n systems by assuming a non-homogeneous continuous-time Markov process for the components, which provides time-dependent transition rates between the states of the components. We thereby capture the effect of age on the state changes of the components, which is typical of many systems and more practical for real-life applications.
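A sketch of the non-homogeneous setting, under stated assumptions: each component is a three-state chain (2 = perfect, 1 = degraded, 0 = failed) whose degradation rates grow linearly with age (an assumed functional form), and a multi-state k-out-of-n system performs at level 1 when at least k of n i.i.d. components are in state 1 or better.

```python
import math
import numpy as np

def Q(t):
    """Assumed 3-state component chain (0 failed, 1 degraded, 2 perfect)
    with transition rates growing linearly with age."""
    a = 0.05 * (1.0 + t)   # degradation rate 2 -> 1
    b = 0.08 * (1.0 + t)   # failure rate 1 -> 0 (state 0 is absorbing)
    return np.array([[0.0,  0.0,  0.0],
                     [b,   -b,    0.0],
                     [0.0,  a,   -a]])

def comp_probs(t, steps=2000):
    """State probabilities of one component via the forward equation."""
    p = np.array([0.0, 0.0, 1.0])      # a new component is perfect
    h = t / steps
    for k in range(steps):
        p = p + h * (p @ Q(k * h))
    return p

def kofn_reliability(t, k=2, n=3):
    """P(system performs at level >= 1): at least k of n i.i.d.
    components are in state 1 or better at time t."""
    p_up = comp_probs(t)[1:].sum()
    return sum(math.comb(n, j) * p_up**j * (1.0 - p_up)**(n - j)
               for j in range(k, n + 1))

print(kofn_reliability(5.0))
```

Because state 0 is absorbing, the reliability is decreasing in t, reflecting the ageing effect the time-dependent rates introduce.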


1967 ◽  
Vol 4 (2) ◽  
pp. 402-405 ◽  
Author(s):  
H. D. Miller

Let X(t) be the position at time t of a particle undergoing a simple symmetrical random walk in continuous time, i.e. the particle starts at the origin at time t = 0 and at times T1, T1 + T2, … it undergoes jumps ξ1, ξ2, …, where the time intervals T1, T2, … between successive jumps are mutually independent random variables, each following the exponential density e^{-t}, while the jumps, which are independent of the Ti, are mutually independent random variables with the distribution P(ξi = +1) = P(ξi = −1) = 1/2. The process X(t) is clearly a Markov process whose state space is the set of all integers.
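The process can be simulated directly: jump epochs come from a rate-1 Poisson process (exponential inter-jump times with density e^{-t}), and each jump takes the standard simple symmetric values ±1 with probability 1/2.

```python
import random

def X(t_max):
    """Position at time t_max: exponential(1) waiting times between
    jumps, each jump +1 or -1 with probability 1/2."""
    t, x = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > t_max:
            return x
        x += random.choice((-1, 1))

vals = [X(4.0) for _ in range(20000)]
mean = sum(vals) / len(vals)
print(mean)   # E[X(t)] = 0 (and Var[X(t)] = t) for this symmetric walk
```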

