Two Markov Models of Neighborhood Housing Turnover

1972 ◽  
Vol 4 (2) ◽  
pp. 133-146 ◽  
Author(s):  
G Gilbert

This paper develops two mathematical models of housing turnover in a neighborhood. The first of these draws upon the theory of non-homogeneous Markov processes and includes the effects of present neighborhood composition upon future turnover probabilities. The second model considers the turnover process as a Markov renewal process and therefore allows the inclusion of length of occupancy as a determinant of transition probabilities. Example calculations for both models are included, and procedures for using the models are outlined.
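The paper's first model makes future turnover probabilities depend on present neighborhood composition. A minimal sketch of that idea for a two-group neighborhood follows; the functional form and the rates `k_out` and `k_in` are invented for illustration, not taken from the paper.

```python
def composition_dependent_rates(frac_a, k_out=0.5, k_in=0.2):
    """Hypothetical turnover rule: the probability that an A-occupied
    unit turns over to group B rises with B's current share, and vice
    versa. Rates are illustrative stand-ins for the paper's
    composition-dependent transition probabilities."""
    p_a_to_b = k_out * (1.0 - frac_a)
    p_b_to_a = k_in * frac_a
    return p_a_to_b, p_b_to_a

def iterate_composition(frac_a, steps):
    """Expected-value iteration of group A's share under the
    non-homogeneous chain: transition probabilities are recomputed
    from the current composition at every step."""
    history = [frac_a]
    for _ in range(steps):
        p_ab, p_ba = composition_dependent_rates(frac_a)
        frac_a = frac_a * (1.0 - p_ab) + (1.0 - frac_a) * p_ba
        history.append(frac_a)
    return history

trajectory = iterate_composition(0.9, 50)
```

With asymmetric rates, the share drifts steadily away from its initial value, illustrating how composition feedback can drive turnover dynamics that a homogeneous chain cannot capture.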

1997 ◽  
Vol 29 (4) ◽  
pp. 909-946 ◽  
Author(s):  
Frank Ball

The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space, partitioned into two classes termed ‘open’ and ‘closed’. It is possible to observe only which class the process is in. A burst of channel openings is defined to be a succession of open sojourns separated by closed sojourns all having duration less than t0. Let N(t) be the number of bursts commencing in (0, t]. Statistics of N(t) then serve as measures of the degree of temporal clustering of bursts. We develop two methods for determining these measures. The first method uses an embedded Markov renewal process and remains valid when the underlying channel process is semi-Markov and/or brief sojourns in either the open or closed classes of state are undetected. The second method uses a ‘backward’ differential-difference equation. The observed channel process when brief sojourns are undetected can be modelled by an embedded Markov renewal process, whose kernel is shown, by exploiting connections with bursts when all sojourns are detected, to satisfy a differential-difference equation. This permits a unified derivation of both exact and approximate expressions for the kernel, and leads to a thorough asymptotic analysis of the kernel as the length of undetected sojourns tends to zero.


Author(s):  
Shirin Kordnoori ◽  
Hamidreza Mostafaei ◽  
Shaghayegh Kordnoori ◽  
Mohammadmohsen Ostadrahimi

Semi-Markov processes can be considered a generalization of both Markov and renewal processes. One of their principal characteristics is that, in contrast to Markov models, they represent systems whose evolution depends not only on the last visited state but also on the time elapsed since entering that state: semi-Markov processes replace the exponential distribution of the time intervals with an arbitrary distribution. In this paper, we give a statistical approach to testing the semi-Markov hypothesis. Moreover, we describe a Monte Carlo algorithm able to simulate the trajectories of a semi-Markov chain. This simulation method is used to test the semi-Markov model by comparing and analyzing the results against empirical data. We introduce the network traffic database employed for applying the Monte Carlo algorithm. The statistical characteristics of the real data and of synthetic data from the models are compared. The semi-Markov and Markov models are also compared by computing the autocorrelation functions and the probability density functions of the real and simulated network traffic data. All the comparisons indicate that the Markovian hypothesis is rejected in favor of the more general semi-Markov one. Finally, the interval transition probabilities, which give future predictions of the network traffic, are presented.
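A Monte Carlo simulation of a semi-Markov trajectory of the kind the paper describes can be sketched as follows: jumps follow an embedded Markov chain, while holding times come from arbitrary (here Weibull, clearly non-exponential) distributions. The transition matrix and sojourn parameters are invented for the sketch, not fitted to the paper's traffic data.

```python
import random

def simulate_semi_markov(P, sojourn_samplers, start, horizon, rng=None):
    """Simulate one semi-Markov trajectory. P is the embedded
    jump-chain transition matrix; sojourn_samplers[i] draws a holding
    time for state i from an arbitrary distribution, which is exactly
    where a semi-Markov model departs from a Markov one."""
    rng = rng or random.Random()
    t, state = 0.0, start
    sm_traj = [(t, state)]
    while t < horizon:
        t += sojourn_samplers[state](rng)            # arbitrary sojourn law
        state = rng.choices(range(len(P)), weights=P[state])[0]
        sm_traj.append((t, state))
    return sm_traj

# Two-state toy system with Weibull holding times.
P = [[0.0, 1.0], [0.6, 0.4]]
samplers = [lambda r: r.weibullvariate(1.0, 1.5),
            lambda r: r.weibullvariate(2.0, 0.8)]
sm_traj = simulate_semi_markov(P, samplers, start=0, horizon=100.0,
                               rng=random.Random(42))
```

Comparing autocorrelation functions of such synthetic trajectories against empirical data is the essence of the test the authors describe.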


1985 ◽  
Vol 22 (02) ◽  
pp. 324-335 ◽  
Author(s):  
J. M. McNamara

This paper discusses a renewal process whose time development between renewals is described by a Markov process. The process may be controlled by choosing the times at which renewal occurs, the objective of the control being to maximise the long-term average rate of reward. Let γ* denote the maximum achievable rate. We consider a specific policy in which a sequence of estimates of γ* is made. This sequence is defined inductively as follows. Initially an (a priori) estimate γ0 is chosen. On making the nth renewal one estimates γ* in terms of γ0, the total rewards obtained in the first n renewal cycles and the total length of these cycles. γn then determines the length of the (n + 1)th cycle. It is shown that γn tends to γ* as n tends to ∞, and that this policy is optimal. The time at which the (n + 1)th renewal is made is determined by solving a stopping problem for the Markov process with continuation cost γn per unit time and stopping reward equal to the renewal reward. Thus, in general, implementation of this policy requires a knowledge of the transition probabilities of the Markov process. An example is presented in which one needs to know essentially nothing about the details of this process or the fine details of the reward structure in order to implement the policy. The example is based on a problem in biology.
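The inductive estimation scheme can be sketched on a deterministic toy problem. Everything below is an assumed illustration: the reward r(t) = √t within a cycle, the fixed setup time d per cycle, and the estimator "cumulative reward over cumulative time" are stand-ins for the paper's general construction, and the myopic stopping rule (stop when the marginal reward rate falls to the current estimate γ) replaces the Markov-process stopping problem.

```python
import math

def cycle(gamma, d):
    """One renewal cycle under estimate gamma: reward accrues as
    sqrt(t), plus a fixed setup time d. The cycle ends when the
    marginal reward rate 1/(2*sqrt(t)) drops to gamma."""
    t_stop = 1.0 / (4.0 * gamma ** 2)
    return math.sqrt(t_stop), t_stop + d

def adaptive_policy(gamma0, d, n_cycles):
    """Inductive estimate: after each renewal, set gamma_n to the
    cumulative reward divided by the cumulative elapsed time (an
    assumed concrete form of the paper's estimator)."""
    total_reward, total_time, gamma = 0.0, 0.0, gamma0
    for _ in range(n_cycles):
        r, tau = cycle(gamma, d)
        total_reward += r
        total_time += tau
        gamma = total_reward / total_time
    return gamma

d = 1.0
gamma_star = 1.0 / (2.0 * math.sqrt(d))   # analytic optimum for this toy: 0.5
gamma_hat = adaptive_policy(gamma0=0.1, d=d, n_cycles=2000)
```

For this toy reward structure the maximal rate is γ* = 1/(2√d), and the sequence of estimates climbs toward it from any initial guess, mirroring the convergence result of the paper.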


Author(s):  
KODO ITO ◽  
TOSHIO NAKAGAWA

Because an airframe has a finite lifetime and must be designed to be lightweight, airframe maintenance is indispensable for operating aircraft without serious trouble. Once an airframe enters operation, it suffers stresses, and these stresses cause damage such as cracks. Cracks grow with operating time and, once they exceed a critical size, can cause catastrophic phenomena such as mid-air disintegration. A managerial crack size is therefore prespecified, and Preventive Maintenance (PM) is undertaken when the inspected crack size exceeds it. In this paper, optimal PM policies against airframe crack failure are discussed. Airframe states are represented as a Markov renewal process, and the one-step transition probabilities are derived. The total expected cost from the start of operation until termination by failure is defined, and the optimal PM policies that minimize it are discussed.
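The trade-off behind such a PM policy can be sketched with a toy discrete crack-state chain. The growth law (crack advances 0, 1 or 2 states per inspection interval), the cost figures and the renewal-reward criterion below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def cycle_stats(threshold, n_fail=10, growth_p=(0.5, 0.3, 0.2),
                c_pm=1.0, c_fail=50.0):
    """Expected cost and expected length of one operation cycle when PM
    is done at the first inspection finding the crack state at or above
    `threshold`; states >= n_fail mean catastrophic failure. Solved by
    backward recursion, with the growth-0 self-loop in closed form."""
    max_state = n_fail + 2
    cost = np.zeros(max_state + 1)
    time = np.zeros(max_state + 1)
    for s in range(max_state + 1):          # absorbing boundary states
        if s >= n_fail:
            cost[s] = c_fail
        elif s >= threshold:
            cost[s] = c_pm
    p0, p1, p2 = growth_p
    for s in range(min(threshold, n_fail) - 1, -1, -1):
        cost[s] = (p1 * cost[s + 1] + p2 * cost[s + 2]) / (1.0 - p0)
        time[s] = (1.0 + p1 * time[s + 1] + p2 * time[s + 2]) / (1.0 - p0)
    return cost[0], time[0]

def best_threshold(n_fail=10):
    """Pick the PM threshold minimizing long-run cost per unit time
    (renewal-reward ratio over one cycle)."""
    rates = {}
    for T in range(1, n_fail):
        c, t = cycle_stats(T, n_fail)
        rates[T] = c / t
    return min(rates, key=rates.get), rates

T_opt, rates = best_threshold()
```

With these numbers the optimum is the largest threshold that still makes failure unreachable before PM: PM too early wastes operating time, while a threshold adjacent to the critical size exposes the airframe to the large failure cost.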


2014 ◽  
Vol 51 (1) ◽  
pp. 13-36 ◽  
Author(s):  
Giovanni Masala ◽  
Giuseppina Cannas ◽  
Marco Micocci

In this paper we apply a parametric semi-Markov process to model the dynamic evolution of HIV-1 infected patients. The seriousness of the infection is rendered by the CD4+ T-lymphocyte counts. For this purpose we introduce the main features of nonhomogeneous semi-Markov models. After determining the transition probabilities and the waiting-time distributions in each state of the disease, we solve the evolution equations of the process in order to estimate the interval transition probabilities. These quantities are of fundamental importance for clinical predictions. We also estimate the survival probabilities for HIV-infected patients and compare them across certain categories, such as gender, age group and type of antiretroviral therapy. Finally, we attach a reward structure to the aforementioned semi-Markov processes in order to estimate clinical costs, generating random trajectories from the semi-Markov processes through Monte Carlo simulation. The proposed model is then applied to a large database provided by the ISS (Istituto Superiore di Sanità, Rome, Italy), and all the quantities of interest are computed.
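The interval transition probabilities central to both this paper and the network-traffic study above solve the semi-Markov evolution equations. A generic numerical sketch of the standard discrete-time recursion is given below, on a toy two-state kernel; it is not the authors' estimation procedure for the clinical data.

```python
import numpy as np

def interval_transition_probs(q, t_max):
    """Solve the discrete-time semi-Markov evolution equation
        P(t) = diag(survival_i(t)) + sum_{s=1..t} q(s) @ P(t - s),
    where q[s-1] is the semi-Markov kernel for sojourn length s
    (probability of jumping i -> k with holding time exactly s) and
    survival_i(t) is the probability no jump has occurred by t."""
    n = q.shape[1]
    P = np.zeros((t_max + 1, n, n))
    P[0] = np.eye(n)
    cum_exit = np.cumsum(q.sum(axis=2), axis=0)   # P(jump by t), per state
    for t in range(1, t_max + 1):
        P[t] = np.diag(1.0 - cum_exit[t - 1])
        for s in range(1, t + 1):
            P[t] += q[s - 1] @ P[t - s]
    return P

# Toy kernel: two states alternating, sojourn time uniform on {1, 2}
# (a clearly non-exponential holding-time law).
t_max = 10
kernel = np.zeros((t_max, 2, 2))
embedded = np.array([[0.0, 1.0], [1.0, 0.0]])
for s in (1, 2):
    kernel[s - 1] = 0.5 * embedded
P = interval_transition_probs(kernel, t_max)
```

Each P[t] is a stochastic matrix, and in a clinical setting its rows would give the disease-state distribution of a patient t periods ahead, which is exactly what the paper uses for predictions.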


Author(s):  
E.P. Petrov ◽  
I.S. Trubin ◽  
E.V. Medvedeva ◽  
S.M. Smolskiy

This chapter is devoted to Mathematical Models (MM) of Digital Half-Tone Images (DHTI) and of their video sequences, represented as causal multi-dimensional Markov Processes (MP) on discrete meshes. The difficulties of developing Markov-type MM for DHTI video sequences are shown; they stem from the enormous volume of computational operations required for their realization. The method of constructing MM-DHTI and their statistically correlated video sequences on the basis of causal multi-dimensional multi-valued MM is described in detail. Realization of these operations is not computationally intensive, as Markov models of the second to fourth order demonstrate. The proposed method is especially effective when the DHTI is represented by low-bit (4-8 bit) binary numbers.


Author(s):  
M. Vidyasagar

This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. It starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are taken from post-genomic biology, especially genomics and proteomics. The topics examined include standard material such as the Perron–Frobenius theorem, transient and recurrent states, hitting probabilities and hitting times, maximum likelihood estimation, the Viterbi algorithm, and the Baum–Welch algorithm. The book contains discussions of extremely useful topics not usually seen at the basic level, such as ergodicity of Markov processes, Markov Chain Monte Carlo (MCMC), information theory, and large deviation theory for both i.i.d. and Markov processes. It also presents state-of-the-art realization theory for hidden Markov models. Among biological applications, it offers an in-depth look at the BLAST (Basic Local Alignment Search Tool) algorithm, including a comprehensive explanation of the underlying theory. Other applications such as profile hidden Markov models are also explored.
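Among the algorithms the book covers, the Viterbi algorithm is compact enough to sketch in full. Below is a minimal log-space implementation for a discrete HMM; the two-state CpG-island-flavoured parameters are toy values chosen for illustration, not taken from the book.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Log-space Viterbi decoder: V[t][s] is the log-probability of the
    best state path ending in state s after observation t; back[t][s]
    records the predecessor used, for path reconstruction."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev, best_lp = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = best_lp + math.log(emit_p[s][obs[t]])
            back[t][s] = best_prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy two-state example: "island" states favour C/G, "background" A/T.
states = ("island", "background")
start_p = {"island": 0.5, "background": 0.5}
trans_p = {"island": {"island": 0.9, "background": 0.1},
           "background": {"island": 0.1, "background": 0.9}}
emit_p = {"island": {"C": 0.4, "G": 0.4, "A": 0.1, "T": 0.1},
          "background": {"C": 0.1, "G": 0.1, "A": 0.4, "T": 0.4}}
path = viterbi("CGCGAATTAA", states, start_p, trans_p, emit_p)
```

On this sequence the decoder labels the C/G-rich prefix "island" and the A/T-rich suffix "background", with a single switch, since the sticky transition matrix penalizes frequent state changes.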


Mathematics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 55
Author(s):  
P.-C.G. Vassiliou

For a G-inhomogeneous semi-Markov chain and G-inhomogeneous Markov renewal processes, we study the change from the real probability measure to a forward probability measure. We find the values of risky bonds using, for both processes, the forward probability that the bond will not default before maturity. It is established in the form of a theorem that the forward probability measure does not alter the semi-Markov structure. In addition, a G-inhomogeneous Markov renewal process is founded, and a theorem is provided proving that the Markov renewal property is maintained under the forward probability measure. We show that an inhomogeneous semi-Markov process is characterized by certain martingales, and that the same is true for a Markov renewal process. We discuss in depth the calibration of the G-inhomogeneous semi-Markov chain model and propose an algorithm for it. We conclude with an application to risky bonds.
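The pricing idea, value equals discount factor times the probability of no default before maturity, can be sketched with a drastically simplified stand-in: a time-homogeneous rating transition matrix with an absorbing default state, in place of the paper's G-inhomogeneous semi-Markov chain under the forward measure. The matrix and rates below are invented toy values.

```python
import numpy as np

def risky_bond_value(P, rating, maturity, r):
    """Time-0 value of a zero-recovery risky zero-coupon bond:
    discount * P(no default by maturity), with default taken as the
    last (absorbing) state of the rating transition matrix P."""
    n = P.shape[0]
    Pt = np.linalg.matrix_power(P, maturity)   # maturity-step transitions
    survival = 1.0 - Pt[rating, n - 1]
    return np.exp(-r * maturity) * survival

# Toy rating chain: two live ratings plus an absorbing default state.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
v = risky_bond_value(P, rating=0, maturity=5, r=0.03)
```

As expected, a bond issued from the riskier rating class is worth strictly less, and every risky bond is worth less than the default-free discount factor.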

