On a Discrete Markov-Modulated Risk Model with Random Premium Income and Delayed Claims

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Changwei Nie ◽  
Mi Chen ◽  
Haiyan Liu ◽  
Wenguang Yu

In this paper, a discrete Markov-modulated risk model with delayed claims, random premium income, and a constant dividend barrier is proposed. It is assumed that the random premium income and the individual claims are modulated by a Markov chain with a finite state space. The proposed model extends the discrete semi-Markov risk model with random premium income and delayed claims. Explicit expressions for the total expected discounted dividends until ruin are obtained by the method of generating functions and the theory of difference equations. Finally, the effect of the related parameters on the total expected discounted dividends is illustrated in several numerical examples.
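For intuition, here is a minimal Monte Carlo sketch of a discrete Markov-modulated surplus process with a constant dividend barrier. It is not the paper's generating-function and difference-equation method; it ignores the claim-delay mechanism, and the two-state environment, unit Bernoulli premiums, geometric claim sizes, and all numerical values are illustrative assumptions.

```python
import random

def discounted_dividends(u0=5, b=10, v=0.95,
                         P=((0.9, 0.1), (0.2, 0.8)),   # environment transition matrix
                         premium_prob=(0.9, 0.6),      # P(unit premium arrives) per state
                         claim_prob=(0.1, 0.3),        # P(a claim occurs) per state
                         claim_mean=(2.0, 4.0),        # mean geometric claim size per state
                         horizon=2000, rng=None):
    """One path of a Markov-modulated surplus process with dividend barrier b;
    returns the discounted dividends paid before ruin."""
    rng = rng or random.Random(0)
    j, u = 0, u0                 # environment state, surplus
    total, disc = 0.0, 1.0
    for _ in range(horizon):
        u += 1 if rng.random() < premium_prob[j] else 0      # random premium income
        if rng.random() < claim_prob[j]:                     # state-dependent claim
            p, size = 1.0 / claim_mean[j], 1
            while rng.random() > p:                          # geometric claim size
                size += 1
            u -= size
        if u < 0:                                            # ruin: stop paying dividends
            break
        if u > b:                                            # pay the overflow above the barrier
            total += disc * (u - b)
            u = b
        disc *= v
        j = 0 if rng.random() < P[j][0] else 1               # move the environment chain
    return total

# Monte Carlo estimate of the total expected discounted dividends until ruin
paths = 2000
est = sum(discounted_dividends(rng=random.Random(s)) for s in range(paths)) / paths
print(f"estimated expected discounted dividends: {est:.3f}")
```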

2005 ◽  
Vol 37 (4) ◽  
pp. 1015-1034 ◽  
Author(s):  
Saul D. Jacka ◽  
Zorana Lazic ◽  
Jon Warren

Let $(X_t)_{t\ge 0}$ be a continuous-time irreducible Markov chain on a finite state space $E$, let $v\colon E\to\mathbb{R}\setminus\{0\}$ be a map, and let $(\varphi_t)_{t\ge 0}$ be an additive functional defined by $\varphi_t=\int_0^t v(X_s)\,\mathrm{d}s$. We consider the case in which the process $(\varphi_t)_{t\ge 0}$ is oscillating and that in which $(\varphi_t)_{t\ge 0}$ has a negative drift. In each of these cases, we condition the process $(X_t,\varphi_t)_{t\ge 0}$ on the event that $(\varphi_t)_{t\ge 0}$ is nonnegative until time $T$ and prove weak convergence of the conditioned process as $T\to\infty$.
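As a rough illustration of this conditioning (not the paper's weak-convergence analysis), the sketch below simulates a two-state chain with an assumed generator Q and function v, tracks the additive functional, and approximates the conditioned law by rejection sampling: paths on which the functional dips below zero before time T are discarded. All numerical values are made up for the example.

```python
import random

def sample_conditioned(Q, v, T, x0=0, rng=None, max_tries=100_000):
    """Crude rejection sampler: simulate the chain with generator Q from x0,
    track phi_t = integral of v(X_s) ds, and accept only paths on which phi
    stays nonnegative on [0, T].  Returns (X_T, phi_T) of an accepted path."""
    rng = rng or random.Random(0)
    n = len(Q)
    for _ in range(max_tries):
        x, t, phi, ok = x0, 0.0, 0.0, True
        while True:
            rate = -Q[x][x]
            end = min(t + rng.expovariate(rate), T)
            phi_next = phi + v[x] * (end - t)
            if phi_next < 0:        # phi is linear on the holding interval, so its
                ok = False          # minimum there is attained at an endpoint
                break
            phi, t = phi_next, end
            if t >= T:
                break
            u, acc = rng.random() * rate, 0.0
            for y in range(n):      # jump to y != x with probability Q[x][y] / rate
                if y != x:
                    acc += Q[x][y]
                    if u <= acc:
                        x = y
                        break
        if ok:
            return x, phi
    raise RuntimeError("no path accepted; the conditioning event is too rare")

# toy two-state example (all numbers illustrative): the stationary drift of phi
# is 2/3 * 1 + 1/3 * (-3) = -1/3 < 0, i.e. the negative-drift case
Q = [[-1.0, 1.0], [2.0, -2.0]]
v = [1.0, -3.0]
print(sample_conditioned(Q, v, T=5.0))
```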


1982 ◽  
Vol 19 (02) ◽  
pp. 272-288 ◽  
Author(s):  
P. J. Brockwell ◽  
S. I. Resnick ◽  
N. Pacheco-Santiago

A study is made of the maximum, minimum and range on [0, t] of the integral process where S is a finite state-space Markov chain. Approximate results are derived by establishing weak convergence of a sequence of such processes to a Wiener process. For a particular family of two-state stationary Markov chains we show that the corresponding centered integral processes exhibit the Hurst phenomenon to a remarkable degree in their pre-asymptotic behaviour.
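A quick way to see the pre-asymptotic Hurst effect numerically is the rescaled-range (R/S) statistic. The sketch below uses a discrete-time stand-in for the centered integral process: a slowly switching two-state stationary chain with values ±1 whose partial sums play the role of the integral. The switching probabilities and window sizes are illustrative assumptions; by the Wiener limit, the fitted exponent should eventually drift back toward 1/2 as the windows grow.

```python
import math
import random

def rescaled_range(window):
    """R/S statistic of one window of increments."""
    n = len(window)
    mean = sum(window) / n
    dev = [x - mean for x in window]
    cum, c = [], 0.0
    for d in dev:
        c += d
        cum.append(c)
    spread = max(cum) - min(cum)
    s = math.sqrt(sum(d * d for d in dev) / n)
    return spread / s if s > 0 else float("nan")

def two_state_chain(n, a=0.05, b=0.05, values=(1.0, -1.0), rng=None):
    """Stationary two-state chain: switch probabilities a (out of state 0) and b."""
    rng = rng or random.Random(0)
    state = 0 if rng.random() < b / (a + b) else 1    # start from stationarity
    out = []
    for _ in range(n):
        out.append(values[state])
        if rng.random() < (a if state == 0 else b):
            state = 1 - state
    return out

# crude Hurst estimate: slope of log(mean R/S) against log(window size)
x = two_state_chain(2**14, rng=random.Random(42))
pts = []
for k in range(7, 12):
    w = 2**k
    rs = [rescaled_range(x[i:i + w]) for i in range(0, len(x) - w + 1, w)]
    rs = [r for r in rs if r == r]                    # drop constant (degenerate) windows
    pts.append((math.log(w), math.log(sum(rs) / len(rs))))
m = len(pts)
sx, sy = sum(p[0] for p in pts), sum(p[1] for p in pts)
sxx, sxy = sum(p[0]**2 for p in pts), sum(p[0]*p[1] for p in pts)
print(f"pre-asymptotic Hurst estimate: {(m*sxy - sx*sy) / (m*sxx - sx**2):.2f}")
```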


2019 ◽  
Vol 23 ◽  
pp. 739-769
Author(s):  
Paweł Lorek

For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal in distribution to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on a partial ordering of the state space. We use the theory of strong stationary duality developed recently for Möbius-monotone Markov chains. We give several sharp antidual chains for the Markov chain corresponding to a generalized coupon collector problem. As a consequence, utilizing known results on the limiting distribution of the absorption time, we indicate separation cutoffs (with their window sizes) in several chains. We also present a chain which (under some conditions) has a prescribed stationary distribution and whose FSST is distributed as a prescribed mixture of sums of geometric random variables.
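For context, the separation distance is the quantity behind such cutoff statements: for the fastest strong stationary time T of a chain started at x0, P(T > t) = sep_{x0}(t) (a result of Aldous and Diaconis). The sketch below computes sep(t) by brute force for a small lazy random walk on the complete graph; the chain is only a stand-in and is unrelated to the Möbius-monotone duality construction used in the paper.

```python
def separation(P, pi, x0, t):
    """Separation distance sep_{x0}(t) = max_y (1 - P^t(x0, y) / pi(y))."""
    n = len(P)
    row = [1.0 if y == x0 else 0.0 for y in range(n)]   # row x0 of P^t
    for _ in range(t):
        row = [sum(row[x] * P[x][y] for x in range(n)) for y in range(n)]
    return max(1.0 - row[y] / pi[y] for y in range(n))

# toy chain: lazy random walk on the complete graph with 5 vertices
n = 5
P = [[0.5 if y == x else 0.5 / (n - 1) for y in range(n)] for x in range(n)]
pi = [1.0 / n] * n                                      # uniform (P is doubly stochastic)
for t in (1, 2, 5, 10, 20):
    print(f"sep({t}) = {separation(P, pi, 0, t):.4f}")
```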


2005 ◽  
Vol 42 (4) ◽  
pp. 1003-1014 ◽  
Author(s):  
A. Yu. Mitrophanov

For uniformly ergodic Markov chains, we obtain new perturbation bounds which relate the sensitivity of the chain under perturbation to its rate of convergence to stationarity. In particular, we derive sensitivity bounds in terms of the ergodicity coefficient of the iterated transition kernel, which improve upon the bounds obtained by other authors. We discuss convergence bounds that hold in the case of a finite state space, and consider numerical examples to compare the accuracy of different perturbation bounds.
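To see what an ergodicity-coefficient bound looks like in practice, the sketch below computes the Dobrushin coefficient of a small transition matrix and compares the actual change in the stationary distribution under a perturbation E = P' - P with the classical bound ||pi' - pi||_1 <= ||E||_inf / (1 - tau(P)), valid when tau(P) < 1. This is one of the older bounds of the kind the paper improves upon, not the paper's sharper result, and the matrices are illustrative.

```python
import numpy as np

def ergodicity_coefficient(P):
    """Dobrushin coefficient tau(P) = 1/2 * max_{i,j} ||P[i,:] - P[j,:]||_1."""
    n = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum() for i in range(n) for j in range(n))

def stationary(P):
    """Stationary distribution as the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

# toy three-state chain and a small perturbation (illustrative numbers only)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
eps = 0.02
E = np.array([[ eps, -eps, 0.0],
              [ 0.0,  eps, -eps],
              [-eps,  0.0,  eps]])        # zero row sums keep P + E stochastic
P_pert = P + E

tau = ergodicity_coefficient(P)
actual = np.abs(stationary(P_pert) - stationary(P)).sum()
bound = np.abs(E).sum(axis=1).max() / (1.0 - tau)   # ||E||_inf / (1 - tau(P))
print(f"tau(P) = {tau:.3f}, actual L1 change = {actual:.4f}, bound = {bound:.4f}")
```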


1972 ◽  
Vol 4 (02) ◽  
pp. 318-338 ◽  
Author(s):  
Mats Rudemo

Consider a Poisson point process whose intensity parameter forms a Markov chain with continuous time and finite state space. A system of ordinary differential equations is derived for the conditional distribution of the Markov chain given observations of the point process. An estimate of the current intensity, optimal in the least-squares sense, is computed from this distribution. Applications to reliability and replacement theory are given. A special case with two states, corresponding to a process in control and out of control, is discussed at length. Adjustment rules, based on the conditional probability of the out-of-control state, are studied. Regarded as a function of time, this probability forms a Markov process with the unit interval as state space. For the distribution of this process, integro-differential equations are derived; they are used to compute the average long-run cost of adjustment rules.
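The conditional distribution described here is a filtering recursion, and a discretized version is easy to prototype. The sketch below simulates a two-state Markov-modulated Poisson process and runs an Euler-discretized filter: between observed points the unnormalized conditional law evolves with the generator minus the intensity matrix, and at each observed point it is reweighted by the intensities; the least-squares estimate of the current intensity is the conditional mean. The generator, intensities, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# two-state Markov-modulated Poisson process (all parameters illustrative)
Q = np.array([[-0.2, 0.2],        # generator of the hidden intensity chain
              [ 0.5, -0.5]])
lam = np.array([1.0, 5.0])        # intensities: "in control" / "out of control"
T, dt = 40.0, 0.001

# --- simulate the hidden chain and the observed points ---
times = np.arange(0.0, T, dt)
state = np.zeros(len(times), dtype=int)
events = np.zeros(len(times), dtype=bool)
for k in range(1, len(times)):
    s = state[k - 1]
    state[k] = 1 - s if rng.random() < -Q[s, s] * dt else s
    events[k] = rng.random() < lam[state[k]] * dt

# --- discretized filter for the conditional law of the chain given the points ---
# between points: d(rho)/dt = rho (Q - diag(lam)); at a point: rho <- rho * lam
rho = np.array([0.5, 0.5])
est = np.zeros(len(times))
for k in range(len(times)):
    rho = rho + dt * (rho @ (Q - np.diag(lam)))
    if events[k]:
        rho = rho * lam
    rho = rho / rho.sum()            # renormalize for numerical stability
    est[k] = rho @ lam               # least-squares estimate of the current intensity

print("mean estimated intensity:", est.mean().round(3),
      " mean true intensity:", lam[state].mean().round(3))
```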


Author(s):  
Krzysztof Bartoszek ◽  
Wojciech Bartoszek ◽  
Michał Krzemiński

We consider a random dynamical system, where the deterministic dynamics are driven by a finite-state space Markov chain. We provide a comprehensive introduction to the required mathematical apparatus and then turn to a special focus on the susceptible-infected-recovered epidemiological model with random steering. Through simulations we visualize the behaviour of the system and the effect of the high-frequency limit of the driving Markov chain. We formulate some questions and conjectures of a purely theoretical nature.
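As a concrete instance of such a randomly steered system, the sketch below integrates the SIR equations with a contact rate that is switched by a two-state continuous-time Markov chain; the rates, switching intensities, and Euler step are illustrative choices, not the authors' simulation setup.

```python
import random

def sir_with_markov_switching(beta=(0.15, 0.45), gamma=0.1,
                              switch_rate=(0.05, 0.05),
                              s0=0.99, i0=0.01, T=200.0, dt=0.01, seed=0):
    """Euler integration of S' = -beta(J) S I, I' = beta(J) S I - gamma I,
    R' = gamma I, where J is a two-state continuous-time Markov chain
    switching the contact rate beta (all numbers are illustrative)."""
    rng = random.Random(seed)
    s, i, r, j, t = s0, i0, 0.0, 0, 0.0
    traj = []
    while t < T:
        # switch the environment with probability ~ rate * dt on this step
        if rng.random() < switch_rate[j] * dt:
            j = 1 - j
        ds = -beta[j] * s * i
        di = beta[j] * s * i - gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + gamma * i * dt
        t += dt
        traj.append((t, s, i, r, j))
    return traj

traj = sir_with_markov_switching()
peak = max(traj, key=lambda row: row[2])
print(f"peak infected fraction {peak[2]:.3f} at time {peak[0]:.1f}")
```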


1998 ◽  
Vol 35 (3) ◽  
pp. 557-565
Author(s):  
Alexis Bienvenüe

Let $\zeta$ be a Markov chain on a finite state space $D$, $f$ a function from $D$ to $\mathbb{R}^d$, and $S_n = \sum_{k=1}^{n} f(\zeta_k)$. We prove an invariance theorem for $S$ and derive an explicit expression of the limit covariance matrix. We give its exact value for $p$-reinforced random walks on $\mathbb{Z}^2$ with $p = 1, 2, 3$.
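The limit covariance in such invariance theorems has a standard closed form for a finite ergodic chain via the fundamental matrix Z = (I - P + 1 pi)^{-1}. The sketch below evaluates that general formula for a made-up three-state chain and a vector-valued f into R^2; it is not the paper's specific expression for reinforced random walks.

```python
import numpy as np

def asymptotic_covariance(P, f):
    """Limit covariance in the CLT for n^{-1/2} (S_n - n mu), S_n = sum_k f(zeta_k),
    for an ergodic finite chain with transition matrix P, computed via the
    fundamental matrix Z = (I - P + 1 pi)^{-1}."""
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    fbar = f - pi @ f                      # centre f under pi (rows of f = f(x))
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    D = np.diag(pi)
    lag0 = fbar.T @ D @ fbar               # E_pi[fbar fbar^T]
    cross = fbar.T @ D @ (Z - np.eye(n)) @ fbar   # sum_{k>=1} E_pi[fbar(0) fbar(k)^T]
    return lag0 + cross + cross.T

# toy example: 3-state chain, f mapping states to points in R^2 (illustrative)
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
f = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
print(asymptotic_covariance(P, f))
```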

