Approximations of the generalised Poisson function

1969 ◽  
Vol 5 (2) ◽  
pp. 213-226 ◽  
Author(s):  
Lauri Kauppi ◽  
Pertti Ojantakanen

One of the basic functions of risk theory is the so-called generalised Poisson function F(x), which gives the probability that the total amount of claims ξ does not exceed a given limit x during a year (or during some other fixed time period). For F(x) the well-known expansion

$F(x) = \sum_{k=0}^{\infty} e^{-n} \frac{n^k}{k!} S^{k*}(x)$   (1)

is obtained, where n is the expected number of claims during this time period and $S^{k*}(x)$ is the k-th convolution of the distribution function S(z) of the size of one claim. Formula (1) is, however, much too inconvenient for numerical computation and for most other applications. One of the main problems of risk theory, still partly open, is to find suitable methods to compute, or at least to approximate, the generalised Poisson function.

A frequently used approximation is to replace F(x) by the normal distribution function having the same mean and standard deviation as F:

$F(x) \approx \Phi\!\left(\frac{x - n\alpha_1}{\sqrt{n\alpha_2}}\right)$,   (2)

where $\alpha_1$ and $\alpha_2$ are the first two moments about zero of $S_M(z)$; $S_M(z)$ is here again the distribution function of the size of one claim. To obtain more general results a reinsurance arrangement is assumed under which the maximum net retention is M; hence the portfolio on the company's own retention is considered. If the reinsurance is of Excess of Loss type, then

$S_M(z) = S(z)$ for $z < M$, and $S_M(z) = 1$ for $z \ge M$,

where S(z) is the distribution function of the size of one total claim.
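The compound Poisson model and its normal approximation can be compared numerically. A minimal Monte Carlo sketch, under purely illustrative assumptions (Exponential claim sizes with mean 1 and an expected claim count of 100, so the first two moments about zero are 1 and 2; none of these values come from the paper):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): claim sizes S(z) are
# Exponential with mean 1; expected number of claims n = 100 per period.
n_exp, sims = 100.0, 20_000

# Simulate the total claim amount: N ~ Poisson(n), total = sum of N claims.
counts = rng.poisson(n_exp, size=sims)
totals = np.array([rng.exponential(1.0, k).sum() for k in counts])

def normal_approx(x):
    """Normal approximation Phi((x - n*a1) / sqrt(n*a2)) with a1 = E[Z] = 1
    and a2 = E[Z^2] = 2 for the Exponential(1) claim size distribution."""
    z = (x - n_exp * 1.0) / sqrt(n_exp * 2.0)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

x0 = 120.0
empirical = (totals <= x0).mean()  # Monte Carlo estimate of F(x0)
```

With this volume of business the two values agree to within about a percentage point, consistent with the remark that the normal approximation works when the portfolio is large and the claim distribution is not too dangerous.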

1967 ◽  
Vol 4 (2) ◽  
pp. 170-174 ◽  
Author(s):  
Fredrik Esscher

When experience is insufficient to permit a direct empirical determination of the premium rates of a Stop Loss Cover, we have to fall back upon mathematical models from the theory of probability—especially the collective theory of risk—and upon such assumptions as may be considered reasonable.

The paper deals with some problems connected with such calculations of Stop Loss premiums for a portfolio consisting of non-life insurances. The portfolio was so large that the values of the premium rates and other quantities required could be approximated by their limit values, obtained according to theory when the expected number of claims tends to infinity.

The calculations were based on the following assumptions. Let F(x, t) denote the probability that the total amount of claims paid during a given period of time is ≤ x when the expected number of claims during the same period increases from 0 to t. The net premium Π(x, t) for a Stop Loss reinsurance covering the amount by which the total amount of claims paid during this period may exceed x is defined by the formula

$\Pi(x, t) = \int_x^\infty (z - x)\, dF(z, t)$,

and the variance of the amount (z − x) to be paid on account of the Stop Loss Cover by the formula

$V(x, t) = \int_x^\infty (z - x)^2\, dF(z, t) - \Pi(x, t)^2$.

As to the distribution function F(x, t), it is assumed that

$F(x, t) = \sum_{n=0}^{\infty} P_n(t)\, V^{n*}(x)$,

where $P_n(t)$ is the probability that n claims have occurred during the given period when the expected number of claims increases from 0 to t, V(x) is the distribution function of the claims, giving the conditional probability that the amount of a claim is ≤ x when it is known that a claim has occurred, and $V^{n*}(x)$ is the nth convolution of the function V(x) with itself. V(x) is supposed to be normalized so that the mean = 1.
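The defining integrals of the Stop Loss premium and its variance can be approximated by simulation. A minimal sketch under assumed, illustrative dynamics (Poisson claim counts with t = 50 and Exponential claim sizes normalized to mean 1; the portfolio values are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed portfolio (illustrative only): Poisson(t = 50) claim counts,
# Exponential claim sizes with mean 1 (matching the normalization of V(x)).
t, sims = 50.0, 20_000
counts = rng.poisson(t, size=sims)
totals = np.array([rng.exponential(1.0, k).sum() for k in counts])

def stop_loss_premium(x):
    """Net premium Pi(x) = E[(total - x)_+] of a Stop Loss cover above x."""
    return np.maximum(totals - x, 0.0).mean()

def stop_loss_variance(x):
    """Variance of the amount (total - x)_+ paid under the cover."""
    return np.maximum(totals - x, 0.0).var()
```

At retention x = 0 the premium is simply the expected total claim amount, and the premium decreases as the retention x rises, as it must.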


2020 ◽  
Vol 52 (2) ◽  
pp. 588-616
Author(s):  
Zakhar Kabluchko ◽  
Dmitry Zaporozhets

Abstract: The Gaussian polytope $\mathcal P_{n,d}$ is the convex hull of n independent standard normally distributed points in $\mathbb{R}^d$. We derive explicit expressions for the probability that $\mathcal P_{n,d}$ contains a fixed point $x\in\mathbb{R}^d$ as a function of the Euclidean norm of x, and the probability that $\mathcal P_{n,d}$ contains the point $\sigma X$, where $\sigma\geq 0$ is constant and X is a standard normal vector independent of $\mathcal P_{n,d}$. As a by-product, we also compute the expected number of k-faces and the expected volume of $\mathcal P_{n,d}$, thus recovering the results of Affentranger and Schneider (Discr. and Comput. Geometry, 1992) and Efron (Biometrika, 1965), respectively. All formulas are in terms of the volumes of regular spherical simplices, which, in turn, can be expressed through the standard normal distribution function $\Phi(z)$ and its complex version $\Phi(iz)$. The main tool used in the proofs is the conic version of the Crofton formula.
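For the special case of containing the origin, the probability is classical: by Wendel's theorem, the convex hull of n points drawn from any centrally symmetric distribution (in particular, standard normal) in $\mathbb{R}^d$ contains the origin with probability $1 - 2^{-(n-1)}\sum_{k=0}^{d-1}\binom{n-1}{k}$. A sketch checking this against Monte Carlo for d = 2, n = 4, where the probability is exactly 1/2:

```python
import numpy as np
from math import comb, pi

rng = np.random.default_rng(2)

def contains_origin_2d(pts):
    """The origin lies in the convex hull of 2-D points iff no closed
    half-plane through the origin contains them all, i.e. the largest gap
    between consecutive sorted angular positions is < pi."""
    ang = np.sort(np.arctan2(pts[:, 1], pts[:, 0]))
    gaps = np.diff(ang, append=ang[0] + 2 * pi)  # includes wrap-around gap
    return gaps.max() < pi

def wendel(n, d):
    """Wendel's formula: P(origin in hull of n symmetric points in R^d)."""
    return 1.0 - sum(comb(n - 1, k) for k in range(d)) / 2 ** (n - 1)

n, trials = 4, 20_000
hits = sum(contains_origin_2d(rng.standard_normal((n, 2))) for _ in range(trials))
estimate = hits / trials
```

The angular-gap test is specific to the plane; in higher dimensions one would instead solve a small linear feasibility problem.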


1970 ◽  
Vol 68 (2) ◽  
pp. 455-458
Author(s):  
J. E. A. Dunnage

Our object here is to refine the theorem proved in (3), and we use the notation of that paper. Let Z1, Z2, …, Zn, where Zr = (Xr, Yr), be independent random variables in two dimensions with zero first-order moments and finite third-order moments, and let the covariance matrix of Zr be given. Let F(x, y) be the distribution function of the sum Z1 + Z2 + … + Zn, and let G(x, y) be the normal distribution function having the same first- and second-order moments as F(x, y).


1977 ◽  
Vol 9 (3) ◽  
pp. 281-289 ◽  
Author(s):  
T. Pentikäinen

Several “short cut” methods exist to approximate the total amount of claims (= χ) of an insurance collective. The classical one is the normal approximation

$F(\chi) \approx \Phi\!\left(\frac{\chi - \mu_x}{\sigma_x}\right)$,   (1)

where $\mu_x$ and $\sigma_x$ are the mean value and standard deviation of χ, and Φ is the normal distribution function.

It is well known that the normal approximation gives acceptable accuracy only when the volume of risk business is fairly large and the distribution of the amounts of the individual claims is not “too dangerous”, i.e. not too heterogeneous (cf. fig. 2).

One way to improve the normal approximation is the so-called NP-method, which provides for the standardized variable $z = (\chi - \mu_x)/\sigma_x$ a correction Δz:

$F(\chi) \approx \Phi(y)$, where $z = y + \Delta z = y + \frac{\gamma}{6}(y^2 - 1)$,   (2)

and where

$\gamma = \mu_3 / \sigma_x^3$

is the skewness of the distribution F(χ). Another variant (NP3) of the NP-method also makes use of the moment $\mu_4$, but in the following we limit our discussion mainly to the variant (2) (= NP2).

If Δz is small, a simpler formula

$F(\chi) \approx \Phi\!\left(z - \frac{\gamma}{6}(z^2 - 1)\right)$   (3)

is available (cf. fig. 2).

Another approximation was introduced by Bohman and Esscher (1963). It is based on the incomplete gamma function

$G(x; a) = \frac{1}{\Gamma(a)} \int_0^x t^{a-1} e^{-t}\, dt$, used as $F(\chi) \approx G(a + z\sqrt{a};\, a)$,   (4)

where $a = 4/\gamma^2$. Experiments have been made with both formulae (2) and (4); they have been applied to various F functions for which the exact (or at least controlled) values are otherwise known. It has been shown that the accuracy is satisfactory provided that the distribution F is not very “dangerous”.
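The NP2 correction in its simple form can be written as a short function. A minimal sketch; the numeric values used below are illustrative, not from the paper, and γ = 0 collapses to the plain normal approximation:

```python
from math import erf, sqrt

def phi_cdf(z):
    """Standard normal distribution function Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def np2_cdf(x, mean, sd, gamma):
    """NP2 approximation of F(x): apply the correction
    Delta z = (gamma/6)(z^2 - 1) to the standardized variable before
    evaluating Phi (simple form, adequate when Delta z is small)."""
    z = (x - mean) / sd
    return phi_cdf(z - gamma / 6.0 * (z * z - 1.0))
```

For γ > 0 and z > 1 the corrected value lies below the plain normal one, which is the intended effect: a positively skewed claim distribution puts more probability in the far right tail than the normal curve does.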


1975 ◽  
Vol 8 (3) ◽  
pp. 359-363 ◽  
Author(s):  
Erkki Pesonen

It is likely that future applications of actuarial methods to decision making in non-life companies will relate more and more to the utility concept, as was proposed by K. Borch [1] about fifteen years ago. In this connection it will be important to have workable numerical methods. The calculation of the distribution function of the profit is an unavoidable problem from a practical point of view. Even if it is possible today to compute this function accurately with computers by using the ingenious technique developed by H. Bohman [2], the integrals become very laborious when applied to a decision-making procedure based on utility concepts. This paper intends to show that the NP-technique, proposed for the first time in actuarial science by L. Kauppi and P. Ojantakanen [3], is particularly suitable for the integrals needed in utility calculations.

Let F(x) be the distribution function of the total amount of claims and let its mean, standard deviation, skewness and kurtosis be respectively m, σ, $\gamma_1$ and $\gamma_2$. The NP-technique uses the system of equations

$F(x) = \Phi(y)$,  $x = m + \sigma\left(y + \frac{\gamma_1}{6}(y^2 - 1) + [\,\text{terms in } \gamma_2 \text{ and } \gamma_1^2\,]\right)$,   (1)

where Φ(y) is the standardized normal distribution function. If the parameters m, σ, $\gamma_1$ and $\gamma_2$ and F(x) are known, y is found directly from the tables of the normal distribution function, and thereafter the second equation directly gives the value of x. If, vice versa, x and the above parameters are known, F(x) is obtained by solving y from the second equation of (1), or, more practically, by using the converted NP-expansion instead of (1), i.e. [4]:

$y = z - \frac{\gamma_1}{6}(z^2 - 1) + [\,\text{terms in } \gamma_2 \text{ and } \gamma_1^2\,]$,   (2)

where $z = (x - m)/\sigma$. Sometimes it is sufficient to use the short forms of formulae (1) and (2), obtained by omitting the terms in the brackets. If these rougher approximations are used, estimation of the kurtosis $\gamma_2$ is unnecessary.
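The point of the paper, that the NP change of variable makes utility integrals tractable, can be sketched numerically: E[u(X)] becomes an integral of u(x(y)) against the standard normal density. The sketch below uses only the short form of the expansion, and the claim parameters are illustrative assumptions:

```python
import numpy as np

def np_x_of_y(y, m, sd, g1):
    """Short form of the NP system: x = m + sd*(y + (g1/6)*(y^2 - 1))."""
    return m + sd * (y + g1 / 6.0 * (y * y - 1.0))

def expected_utility(u, m, sd, g1):
    """E[u(X)] ~ integral of u(x(y)) * phi(y) dy under the NP substitution
    x = x(y); plain Riemann sum over a wide y-grid."""
    y = np.linspace(-8.0, 8.0, 8001)
    dy = y[1] - y[0]
    dens = np.exp(-0.5 * y * y) / np.sqrt(2.0 * np.pi)
    return float(np.sum(u(np_x_of_y(y, m, sd, g1)) * dens) * dy)

# With linear utility u(x) = x, E[u(X)] must equal m for any skewness,
# since E[y] = 0 and E[y^2 - 1] = 0 under the normal weight.
mean_check = expected_utility(lambda x: x, 10.0, 2.0, 0.3)
```

The same routine accepts any utility function, e.g. an exponential one, which is where the saving over direct integration against F(x) appears.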


1980 ◽  
Vol 11 (1) ◽  
pp. 52-60 ◽  
Author(s):  
Hans Bühlmann

(a) The notion of premium calculation principle has become fairly generally accepted in the risk theory literature. For completeness we repeat its definition: a premium calculation principle is a functional assigning to a random variable X (or its distribution function $F_X(x)$) a real number P. In symbols:

$P = H(X)$.

The interpretation is rather obvious. The random variable X stands for the possible claims of a risk, whereas P is the premium charged for assuming this risk. This is of course formalizing the way actuaries think about premiums. In actuarial terms, the premium is a property of the risk (and nothing else), e.g. the variance principle $P = E[X] + \lambda\,\mathrm{Var}[X]$.

(b) Of course, in economics premiums depend not only on the risk but also on market conditions. Let us assume for a moment that we can describe the risk by a random variable X (as under (a)) and describe the market conditions by a random variable Z. Then we want to show how an economic premium principle

$P = H(X, Z)$

can be constructed. During the development of the paper we will also give a clear meaning to the random variable Z. In the market we are considering agents i = 1, 2, …, n. They constitute buyers of insurance, insurance companies, and reinsurance companies. Each agent i is characterized by his

utility function $u_i(x)$ [as usual: $u_i'(x) > 0$, $u_i''(x) \le 0$]
initial wealth $w_i$.

In this section, the risk aspect is modelled by a finite (for simplicity) probability space with states s = 1, 2, …, S and probabilities $\pi_s$ of state s happening.
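A few classical premium calculation principles can be sketched as functionals H(X) estimated from a sample. The Gamma claim distribution and the loading values below are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative claim distribution (an assumption): X ~ Gamma(shape=2, scale=500),
# so E[X] = 1000.
x = rng.gamma(2.0, 500.0, size=100_000)

def expected_value_principle(x, loading=0.1):
    """P = (1 + lambda) * E[X]."""
    return (1.0 + loading) * x.mean()

def standard_deviation_principle(x, alpha=0.5):
    """P = E[X] + alpha * sd(X)."""
    return x.mean() + alpha * x.std()

def exponential_principle(x, a=1e-4):
    """P = (1/a) * log E[exp(a*X)] -- the premium implied by an exponential
    utility function with risk aversion a."""
    return np.log(np.mean(np.exp(a * x))) / a
```

All three charge more than the pure net premium E[X]; for the exponential principle this is Jensen's inequality at work, and the loading grows with the risk aversion a.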


Author(s):  
Robert E. Ogilvie

The search for an empirical absorption equation begins with the work of Siegbahn (1) in 1914. At that time Siegbahn showed that the value of (μ/ρ) for a given element could be expressed as a function of the wavelength (λ) of the x-ray photon by the equation

$(\mu/\rho) = C \lambda^n$,   (1)

where C is a constant for a given material, which has sudden jumps in value at critical absorption limits. Siegbahn found that n varied from 2.66 to 2.71 for various solids, and from 2.66 to 2.94 for various gases.

Bragg and Pierce (2), in the same time period, showed that their results on materials ranging from Al (Z = 13) to Au (Z = 79) could be represented by

$\mu_a = C Z^4 \lambda^{5/2}$,   (2)

where $\mu_a$ is the atomic absorption coefficient and Z the atomic number. Today equation (2) is known as the “Bragg-Pierce” Law. The exponent n = 5/2 was questioned by many investigators, who argued that n should be closer to 3. The work of Wingardh (3) indicated that the exponent of Z should be much lower, p = 2.95, than that found by most investigators.
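The Bragg-Pierce scaling can be sketched in one function. C is a placeholder constant here (it is material-dependent and jumps at absorption edges, so the absolute values are meaningless; only the scaling in λ and Z is illustrated):

```python
def bragg_pierce_mu_a(wavelength, Z, C=1.0):
    """Bragg-Pierce law: mu_a = C * Z**4 * lambda**(5/2). C is constant for
    a given material between critical absorption limits; the default here
    is a placeholder, not a fitted constant."""
    return C * Z ** 4 * wavelength ** 2.5
```

Doubling the wavelength multiplies the atomic absorption coefficient by 2^{5/2} ≈ 5.66, and going from Al (Z = 13) to Au (Z = 79) at fixed wavelength multiplies it by (79/13)^4, about 1.4 × 10^3.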


2017 ◽  
Vol 920 (2) ◽  
pp. 57-60
Author(s):  
F.E. Guliyeva

A review of relevant work on remote sensing of forests shows that the known methods of remote estimation of forest cutting and growth do not allow one to calculate an objective average value of the volume of forest cut over a fixed time period. The existing mathematical estimates are not monotonic and permit only a primitive estimate of the scale of cutting, obtained by computing the ratio of data at two fixed time points. In this article the extreme properties of the considered estimates for deforestation and reforestation models are investigated. The extreme features of the integrated averaged values of these estimates, under constraints on the variables characterizing the deforestation and reforestation processes, are studied. An integrated parameter is suggested that makes it possible to calculate the averaged value of the forest-cutting estimates over the whole fixed time period with a fixed step. It is shown mathematically that this estimate is monotonic with respect to the length of the given time interval and makes it possible to evaluate the scale of forest cutting objectively.


Symmetry ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 815
Author(s):  
Christopher Adcock

A recent paper presents an extension of the skew-normal distribution which is a copula. Under this model, the standardized marginal distributions are standard normal. The copula itself depends on the familiar skewing construction based on the normal distribution function. This paper is concerned with two topics. First, it presents a number of extensions of the skew-normal copula. Notably these include a case in which the standardized marginal distributions are Student's t, with different degrees of freedom allowed for each margin. In this case the skewing function need not be the distribution function of Student's t, but can depend on certain special functions. Secondly, several multivariate versions of the skew-normal copula model are presented. The paper contains several illustrative examples.
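The skewing construction referred to above can be sketched in a few lines (univariate case; α denotes the skewness parameter, and the numerical normalization check is illustrative):

```python
from math import erf, exp, pi, sqrt

def phi_pdf(x):
    """Standard normal density."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def skew_normal_pdf(x, alpha):
    """Skewing construction: f(x) = 2 * phi(x) * Phi(alpha * x).
    alpha = 0 recovers the standard normal density."""
    return 2.0 * phi_pdf(x) * Phi(alpha * x)

# Numerical check that the skewed density still integrates to 1.
step = 0.001
total = sum(skew_normal_pdf(-10.0 + i * step, 3.0) for i in range(20001)) * step
```

Replacing Φ here by another distribution function (or, per the paper, certain special functions) yields the other skewing variants while preserving the normalization.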


2014 ◽  
Vol 2014 ◽  
pp. 1-6
Author(s):  
Meng Fei ◽  
Wu Li-chun ◽  
Zhang Jia-sheng ◽  
Deng Guo-dong ◽  
Ni Zhi-hui

In order to calculate the ground movement induced by displacement piles driven into horizontally layered strata, an axisymmetric model was built and the vertical and horizontal ground movement functions were then deduced using stochastic medium theory. Results show that the vertical ground movement follows a normal distribution function, while the horizontal ground movement follows an exponential function. Using field-measured data, the parameters of these functions can be obtained by back analysis, and an example is employed to verify the model. The results show that stochastic medium theory is suitable for calculating the ground movement in pile driving, and that there is no need to consider the constitutive model of the soil or the contact between pile and soil. The method is applicable in practice.
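The back-analysis step can be sketched as a least-squares fit. Here the "measured" settlements are synthetic, generated from an assumed normal-distribution-shaped curve w(x) = A·exp(−x²/2b²) with illustrative parameter values, so the fit should recover them exactly:

```python
import numpy as np

# Hypothetical back analysis: the functional form follows the paper's
# finding (vertical movement ~ normal-distribution curve), but A and b
# are illustrative values, not field data.
A_true, b_true = 30.0, 5.0                     # mm, m
x = np.linspace(0.5, 15.0, 30)                 # distance from the pile (m)
w = A_true * np.exp(-x**2 / (2 * b_true**2))   # "measured" settlements

# Log-linearize: ln w = ln A - x^2 / (2 b^2), then ordinary least squares
# of ln w against x^2.
coef = np.polyfit(x**2, np.log(w), 1)
b_fit = np.sqrt(-1.0 / (2.0 * coef[0]))
A_fit = np.exp(coef[1])
```

With real, noisy field data the same log-linearized regression (or a nonlinear fit) gives the back-analysed parameters; the horizontal exponential function can be fitted the same way with ln u regressed against x.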

