Markov-modulated linear fluid networks with Markov additive input

2002 ◽  
Vol 39 (2) ◽
pp. 413-420
Author(s):  
Offer Kella ◽  
Wolfgang Stadje

We consider a network of dams to which the external input is a multivariate Markov additive process. For each state of the Markov chain modulating the Markov additive process, the release rates are linear (constant multiple of the content level). Each unit of material processed by a given station is then divided into fixed proportions each of which is routed to another station or leaves the system. For each state of the modulating process, this routeing is determined by some substochastic matrix. We identify simple conditions for stability and show how to compute transient and stationary characteristics of such networks.
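
The fluid dynamics described above can be prototyped directly. Below is a minimal simulation sketch (not taken from the paper) in which the Markov additive input is reduced to a state-dependent drift, the release rate of each station is a state-dependent multiple of its content, and processed fluid is routed by a state-dependent substochastic matrix; the generator, rates and routing values are illustrative assumptions.

```python
# Minimal sketch of a Markov-modulated linear fluid network (assumed parameters).
# The Markov additive input is approximated by a state-dependent drift a[k];
# jumps of the input process are omitted for simplicity.
import numpy as np

rng = np.random.default_rng(0)

# Modulating CTMC: generator Q on states {0, 1}.
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])

# State-dependent input drifts, release coefficients and routing matrices
# for a two-station network.
a = {0: np.array([1.0, 0.5]), 1: np.array([0.2, 1.5])}     # external input rates
r = {0: np.array([0.8, 1.0]), 1: np.array([1.2, 0.6])}     # linear release coefficients
P = {0: np.array([[0.0, 0.6], [0.3, 0.0]]),                # substochastic routing
     1: np.array([[0.0, 0.4], [0.5, 0.0]])}

def simulate(T=50.0, dt=1e-3, x0=(0.0, 0.0)):
    """Euler discretization of dX/dt = a(J) + (P(J)^T - I) diag(r(J)) X."""
    x = np.array(x0, dtype=float)
    j = 0                            # initial environment state
    t = 0.0
    next_jump = rng.exponential(1.0 / -Q[j, j])
    path = [(t, j, x.copy())]
    while t < T:
        if t >= next_jump:           # environment jumps to another state
            probs = Q[j].copy(); probs[j] = 0.0; probs /= probs.sum()
            j = rng.choice(len(Q), p=probs)
            next_jump = t + rng.exponential(1.0 / -Q[j, j])
        drift = a[j] + (P[j].T - np.eye(2)) @ (r[j] * x)
        x = np.maximum(x + dt * drift, 0.0)   # contents stay nonnegative
        t += dt
        path.append((t, j, x.copy()))
    return path

if __name__ == "__main__":
    t, j, x = simulate()[-1]
    print(f"t={t:.1f}, environment={j}, contents={x}")
```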


2005 ◽  
Vol 35 (2) ◽
pp. 351-361
Author(s):  
Andrew C.Y. Ng ◽  
Hailiang Yang

In this paper, we consider a Markov-modulated risk model (also called the Markovian regime-switching insurance risk model). Following Asmussen (2000, 2003) and using the theory of Markov additive processes, an exponential martingale is constructed and Lundberg-type upper bounds for the joint distribution of the surplus immediately before and at ruin are obtained. As a natural corollary, bounds for the distribution of the deficit at ruin are obtained. We also present some numerical results to illustrate the tightness of the bounds obtained in this paper.
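
As a rough illustration of the kind of bound discussed above, the sketch below (with assumed parameters, exponential claims, and the constant in front of the exponential set to one, so it is only a decay-rate comparison rather than a rigorous upper bound) computes the adjustment coefficient as the root of the Perron eigenvalue of the matrix exponent and checks it against a crude finite-horizon Monte Carlo estimate of the ruin probability.

```python
# Adjustment coefficient for a Markov-modulated risk model with exponential
# claims (illustrative parameters), plus a crude Monte Carlo check.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

Q = np.array([[-0.5, 0.5],          # generator of the environment chain
              [ 1.0, -1.0]])
lam = np.array([1.0, 2.0])          # claim arrival rates per state
beta = np.array([2.0, 2.5])         # exponential claim-size rates per state
c = np.array([1.0, 1.5])            # premium rates per state

def kappa(s):
    """Perron root of the matrix exponent of the claim-surplus MAP."""
    psi = lam * (beta / (beta - s) - 1.0) - c * s
    return np.max(np.linalg.eigvals(Q + np.diag(psi)).real)

# Adjustment coefficient: the positive root of kappa(R) = 0.
R = brentq(kappa, 1e-6, np.min(beta) - 1e-6)

def ruined(u, horizon=500.0):
    """One Monte Carlo run: does the surplus drop below 0 before `horizon`?"""
    j, t, x = 0, 0.0, u
    while t < horizon:
        rate = lam[j] - Q[j, j]                   # next event: claim or switch
        w = rng.exponential(1.0 / rate)
        x += c[j] * w
        t += w
        if rng.random() < lam[j] / rate:          # claim arrival
            x -= rng.exponential(1.0 / beta[j])
            if x < 0:
                return True
        else:                                     # environment switch
            j = 1 - j
    return False

u = 5.0
est = np.mean([ruined(u) for _ in range(2000)])
print(f"R = {R:.4f}, exp(-R u) = {np.exp(-R * u):.4f}, MC ruin estimate = {est:.4f}")
```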


2020 ◽  
Vol 52 (2) ◽  
pp. 404-432
Author(s):  
Irmina Czarna ◽  
Adam Kaszubowski ◽  
Shu Li ◽  
Zbigniew Palmowski

In this paper, we solve exit problems for a one-sided Markov additive process (MAP) which is exponentially killed with a bivariate killing intensity $\omega(\cdot,\cdot)$ dependent on the present level of the process and the current state of the environment. Moreover, we analyze the respective resolvents. All identities are expressed in terms of new generalizations of classical scale matrices for MAPs. We also remark on a number of applications of the obtained identities to (controlled) insurance risk processes. In particular, we show that our results can be applied to the Omega model, where bankruptcy takes place at rate $\omega(\cdot,\cdot)$ when the surplus process becomes negative. Finally, we consider Markov-modulated Brownian motion (MMBM) as a special case and present analytical and numerical results for a particular choice of piecewise intensity function $\omega(\cdot,\cdot)$.
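
The Omega-type mechanism, where bankruptcy occurs at rate $\omega(\cdot,\cdot)$ while the surplus is negative, can be illustrated by simulation. The sketch below uses a simple Euler scheme for Markov-modulated Brownian motion with an assumed piecewise intensity; it is not the paper's semi-analytical scale-matrix approach.

```python
# Euler-type simulation of an Omega model driven by Markov-modulated Brownian
# motion (illustrative parameters): while the surplus is negative, bankruptcy
# occurs at a state- and level-dependent rate omega(x, j).
import numpy as np

rng = np.random.default_rng(2)

Q = np.array([[-0.5, 0.5],
              [ 1.0, -1.0]])        # environment generator
mu = np.array([0.3, -0.2])          # drifts per state
sigma = np.array([0.4, 0.8])        # volatilities per state

def omega(x, j):
    """Piecewise bankruptcy intensity: positive only below zero."""
    return (1.0 + 0.5 * j) * (x < 0)

def bankrupt_before_up(x0=0.5, b=3.0, dt=1e-3, horizon=200.0):
    """One run: bankruptcy before the surplus reaches level b (or horizon)."""
    x, j, t = x0, 0, 0.0
    while t < horizon:
        # environment switch with prob -Q[j, j] * dt (first-order approximation)
        if rng.random() < -Q[j, j] * dt:
            j = 1 - j
        x += mu[j] * dt + sigma[j] * np.sqrt(dt) * rng.standard_normal()
        # killing (bankruptcy) with prob omega(x, j) * dt while negative
        if rng.random() < omega(x, j) * dt:
            return True
        if x >= b:
            return False
        t += dt
    return False

print("estimated bankruptcy probability:",
      np.mean([bankrupt_before_up() for _ in range(2000)]))
```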


1994 ◽  
Vol 26 (4) ◽  
pp. 1117-1121
Author(s):  
Søren Asmussen ◽  
Mogens Bladt

The mean busy period of a Markov-modulated queue or fluid model is computed by an extension of the time-reversal argument connecting the steady-state distribution and the maximum of a related Markov additive process.
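
A direct Monte Carlo estimate of the mean busy period, started from a fixed up-state at an empty buffer, can serve as a numerical check on such a formula; the sketch below uses an assumed two-state fluid model with negative mean drift.

```python
# Direct estimation of the mean busy period of a Markov-modulated fluid model
# (illustrative parameters); the fluid level changes at net rate r[j] in state j.
import numpy as np

rng = np.random.default_rng(3)

Q = np.array([[-1.0, 1.0],
              [ 0.5, -0.5]])        # environment generator
r = np.array([1.0, -2.0])           # net input rates (mean drift is negative)

def busy_period():
    """Length of one busy period started in the up-state at an empty buffer."""
    j, x, t = 0, 0.0, 0.0            # state 0 has positive net rate
    while True:
        hold = rng.exponential(1.0 / -Q[j, j])
        if r[j] < 0 and x + r[j] * hold <= 0:
            return t + x / -r[j]     # buffer empties before the next switch
        x += r[j] * hold
        t += hold
        j = 1 - j                    # two-state chain: switch to the other state

samples = [busy_period() for _ in range(20000)]
print("estimated mean busy period:", np.mean(samples))
```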


2021 ◽  
Vol 58 (4) ◽  
pp. 1086-1113
Author(s):  
Larbi Alili ◽  
David Woodford

Consider a Lamperti–Kiu Markov additive process $(J, \xi)$ on $\{+, -\}\times\mathbb{R}\cup \{-\infty\}$, where J is the modulating Markov chain component. First we study the finiteness of the exponential functional and then consider its moments and tail asymptotics under Cramér’s condition. In the strong subexponential case we determine the subexponential tails of the exponential functional under some further assumptions.
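
The exponential functional in question can be approximated by Monte Carlo when the additive part drifts to $-\infty$. The sketch below assumes a simplified MAP whose additive component is a piecewise linear drift modulated by a two-state chain (the sign-switching Lamperti–Kiu component is omitted) and truncates the integral at a large horizon.

```python
# Monte Carlo sketch of the exponential functional I = ∫ exp(xi(s)) ds of a
# simple MAP with state-dependent linear drift (assumed parameters).  The
# asymptotic drift is negative, so I is finite a.s.
import numpy as np

rng = np.random.default_rng(4)

Q = np.array([[-1.0, 1.0],
              [ 1.5, -1.5]])        # modulating chain generator
d = np.array([0.5, -2.0])           # drifts of xi per state (mean drift < 0)

def exp_functional(horizon=200.0):
    """One sample of the exponential functional, truncated at `horizon`."""
    j, xi, t, total = 0, 0.0, 0.0, 0.0
    while t < horizon:
        hold = rng.exponential(1.0 / -Q[j, j])
        # the segment integral ∫ exp(xi + d[j] s) ds has a closed form
        if abs(d[j]) > 1e-12:
            total += np.exp(xi) * (np.exp(d[j] * hold) - 1.0) / d[j]
        else:
            total += np.exp(xi) * hold
        xi += d[j] * hold
        t += hold
        j = 1 - j
    return total

samples = np.array([exp_functional() for _ in range(5000)])
print("mean:", samples.mean(), " 99% quantile:", np.quantile(samples, 0.99))
```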


The Markov-modulated linear regression model is a special case of the Markov additive process $(Y, J) = \{(Y(t), J(t)),\, t \ge 0\}$, where the component J is Markovian and the component Y is additive and described by a linear regression. The component J is a continuous-time homogeneous irreducible Markov chain with known transition intensities between its states. This Markov component is usually called the external environment or background process. The unknown regression coefficients depend on the state of the external environment, while the regressors remain constant. This research considers the case when the Markov property is not satisfied, namely, when the sojourn time in each state is not exponentially distributed. An estimation procedure for the unknown model parameters is described for the case when the transition intensities can be represented as a convolution of exponential densities. The efficiency of this approach is evaluated by simulation.
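
The following sketch illustrates the kind of data-generating mechanism described above, with Erlang (convolution-of-exponentials) sojourn times in each background state; the naive per-state least-squares fit at the end assumes the background path is observed and is not the paper's estimation procedure.

```python
# Markov-modulated linear regression with non-exponential (Erlang) sojourn
# times (assumed setup): regression coefficients depend on the background
# state, regressors are held fixed, observations lie on a regular time grid.
import numpy as np

rng = np.random.default_rng(5)

beta = {0: np.array([1.0, 2.0]),    # regression coefficients per state
        1: np.array([3.0, -1.0])}
erlang_shape, erlang_rate = {0: 2, 1: 3}, {0: 1.0, 1: 2.0}   # sojourn-time laws

def simulate(n_obs=400, dt=0.1):
    """Observations on a grid; sojourn times are Erlang, not exponential."""
    X = np.column_stack([np.ones(n_obs), np.linspace(0.0, 1.0, n_obs)])
    states, y = np.empty(n_obs, dtype=int), np.empty(n_obs)
    j, leave = 0, rng.gamma(erlang_shape[0], 1.0 / erlang_rate[0])
    for k in range(n_obs):
        t = k * dt
        while t >= leave:                         # background state switches
            j = 1 - j
            leave += rng.gamma(erlang_shape[j], 1.0 / erlang_rate[j])
        states[k] = j
        y[k] = X[k] @ beta[j] + 0.1 * rng.standard_normal()
    return X, y, states

X, y, states = simulate()
for j in (0, 1):
    mask = states == j
    est, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    print(f"state {j}: true {beta[j]}, estimated {np.round(est, 3)}")
```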


2021 ◽  
Vol 58 (2) ◽  
pp. 372-393
Author(s):  
H. M. Jansen

Our aim is to find sufficient conditions for weak convergence of stochastic integrals with respect to the state occupation measure of a Markov chain. First, we study properties of the state indicator function and the state occupation measure of a Markov chain. In particular, we establish weak convergence of the state occupation measure under a scaling of the generator matrix. Then, relying on the connection between the state occupation measure and the Dynkin martingale, we provide sufficient conditions for weak convergence of stochastic integrals with respect to the state occupation measure. We apply our results to derive diffusion limits for the Markov-modulated Erlang loss model and the regime-switching Cox–Ingersoll–Ross process.
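
A regime-switching Cox–Ingersoll–Ross process of the type mentioned above, together with the empirical state occupation measure of the modulating chain, can be simulated as in the sketch below (Euler–Maruyama with a first-order switching step; all parameters are illustrative assumptions).

```python
# Regime-switching CIR process via Euler–Maruyama (assumed parameters);
# occupation[j] records the state occupation measure of the chain over [0, T].
import numpy as np

rng = np.random.default_rng(6)

Q = np.array([[-2.0, 2.0],
              [ 3.0, -3.0]])        # generator of the modulating chain
a = np.array([1.0, 2.0])            # mean-reversion speeds per state
b = np.array([0.5, 1.5])            # long-run levels per state
s = np.array([0.3, 0.6])            # volatilities per state

def simulate(T=20.0, dt=1e-3, x0=1.0):
    x, j, t = x0, 0, 0.0
    occupation = np.zeros(2)         # time spent in each state of the chain
    while t < T:
        if rng.random() < -Q[j, j] * dt:        # first-order switching step
            j = 1 - j
        occupation[j] += dt
        # CIR step: dX = a_j (b_j - X) dt + s_j sqrt(X) dW
        x += (a[j] * (b[j] - x) * dt
              + s[j] * np.sqrt(max(x, 0.0)) * np.sqrt(dt) * rng.standard_normal())
        x = max(x, 0.0)              # keep the Euler scheme nonnegative
        t += dt
    return x, occupation / T

x_T, occ = simulate()
print(f"X_T = {x_T:.3f}, empirical occupation fractions = {occ}")
```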

