Computing Risk Measures of Life Insurance Policies through the Cox–Ross–Rubinstein Model

The problem of computing risk measures of life insurance policies is complicated by the fact that two different probability measures must be used: the real-world measure over the risk horizon and the risk-neutral measure over the remaining time interval. This rules out a straightforward application of the Monte Carlo method and forces recourse to time-consuming nested simulations or to the least squares Monte Carlo approach. We propose to compute common risk measures using the celebrated binomial model of Cox, Ross, and Rubinstein (1979) (CRR). The main advantage of this approach is that the usual construction of the CRR lattice is not affected by the change of measure, so a single lattice can be used over the whole policy duration. Numerical results highlight that the proposed algorithm computes highly accurate values.
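To make the measure-change argument concrete, the following minimal Python sketch (an illustration, not the authors' exact algorithm) builds a single CRR lattice for a hypothetical guaranteed equity-linked benefit max(S_T, K), rolls the payoff back under the risk-neutral measure to the risk horizon, and reads off a value-at-risk from the real-world binomial distribution over the horizon nodes. All parameters (mu, sigma, r, the payoff) are illustrative assumptions; the point demonstrated is that u and d depend only on sigma and the time step, so the same lattice serves under both measures.

```python
# Hedged sketch: one CRR lattice reused under the real-world measure P (up to
# the risk horizon) and the risk-neutral measure Q (beyond it).  Payoff and
# parameters are illustrative, not the paper's.
import numpy as np
from scipy.stats import binom

def crr_var(S0=100.0, K=100.0, r=0.02, mu=0.06, sigma=0.20,
            horizon=1.0, maturity=10.0, n=500, alpha=0.99):
    dt = maturity / n
    u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))
    q = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    p = (np.exp(mu * dt) - d) / (u - d)     # real-world up probability
    m = int(round(horizon / maturity * n))  # lattice step at the risk horizon

    # terminal payoff of a guaranteed equity-linked benefit max(S_T, K)
    ST = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(ST, K)

    # backward induction under Q from maturity down to the horizon step m
    for _ in range(n - m):
        V = np.exp(-r * dt) * (q * V[:-1] + (1 - q) * V[1:])
    V_h = V.copy()                          # policy values on the m+1 horizon nodes

    # continue under Q down to time 0 to obtain today's value
    for _ in range(m):
        V = np.exp(-r * dt) * (q * V[:-1] + (1 - q) * V[1:])
    V0 = V[0]

    # real-world probabilities of reaching each horizon node (binomial weights)
    ups = np.arange(m, -1, -1)              # number of up moves to reach node i
    probs = binom.pmf(ups, m, p)

    # value-at-risk of the discounted loss over the risk horizon
    losses = V0 - np.exp(-r * horizon) * V_h
    order = np.argsort(losses)
    cdf = np.cumsum(probs[order])
    return losses[order][np.searchsorted(cdf, alpha)]

print(crr_var())
```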

2002, Vol. 4 (3), pp. 183-190
Author(s): W. Hitzl, G. Grabner

Comparing different methods of keratoprosthesis (KP) with respect to their long-term success in terms of visual acuity is difficult, both because no standardized reporting method has been agreed upon, let alone accepted, by all research groups, and because the patient's quality of life depends not only on the level of visual acuity but also, quite significantly, on the "survival time" of the implant. Therefore, an analysis of a single series of patients with osteo-odonto-keratoprosthesis (OOKP) was performed. Statistical methods used by others for similar groups of surgical procedures have included descriptive statistics, survival analysis, and ANOVA, comprising comparisons of empirical densities or distribution functions and empirical survival curves. The objective of this paper is to provide an inductive statistical method that avoids the problems of descriptive techniques and survival analysis. The statistical model meets four important standards: (1) the efficiency of a surgical technique can be assessed within an arbitrary time interval by a new index (the VAT index); (2) possible autocorrelations in the data are taken into consideration; (3) the efficiency is not only stated as a point estimate, but 95% point-wise confidence limits are also computed based on the Monte Carlo method; and (4) the efficiency of a specific method is illustrated by line and range plots for quick inspection and can also be used for comparison with other surgical techniques such as refractive, glaucoma, and retinal surgery.
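As an illustration of standard (3), the sketch below computes Monte Carlo point-wise 95% limits by resampling whole patients with replacement, which also respects within-patient autocorrelation as required by standard (2). The abstract does not define the VAT index, so a simple "proportion of eyes at or above an acuity threshold" is used as a hypothetical stand-in, on synthetic data.

```python
# Hedged sketch: Monte Carlo (resampling) point-wise 95% limits for a
# time-dependent efficiency index.  The real VAT-index definition is not given
# in the abstract; a proportion-above-threshold statistic stands in for it.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_times, n_mc = 40, 24, 2000                # patients, months, MC runs
acuity = rng.uniform(0.0, 1.0, (n_patients, n_times))   # synthetic decimal acuity

def index(sample):
    # efficiency at each time point: share of eyes with acuity >= 0.3 (assumed cutoff)
    return (sample >= 0.3).mean(axis=0)

point = index(acuity)
draws = np.empty((n_mc, n_times))
for k in range(n_mc):
    # resample whole patients with replacement (keeps within-patient autocorrelation)
    draws[k] = index(acuity[rng.integers(0, n_patients, n_patients)])
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)       # point-wise 95% limits
print(point[:5], lo[:5], hi[:5])
```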


Energies, 2020, Vol. 13 (7), pp. 1552
Author(s): Krzysztof Tomczyk

The solutions presented in this paper can serve as a basis for comparing different types of accelerometers produced by competing manufacturers. A procedure based on the Monte Carlo method for determining the maximum energy at the output of accelerometers is discussed. This energy is determined by a fixed-point algorithm controlled by the Monte Carlo method. The algorithm applies only to linear, time-invariant measurement systems; accelerometer nonlinearities are therefore not considered here. Mathematical models of the accelerometer and a special filter, represented by the relevant transfer functions, form the basis of the procedure. Test results for the DJB A/1800/V voltage-mode accelerometer are presented as an example implementation of the proposed solutions. The energy was calculated in Mathcad 14 using the built-in Programming Toolbar. The maximum output energy determined over a specified time interval corresponds to the maximum integral-square error of the accelerometer. This maximum energy can serve as a comparative figure of merit, much like the accuracy class of instruments used for static measurements. The main analytical and technical contributions of this paper are therefore the development of the theoretical procedures and the demonstration of their application to a real accelerometer.
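The sketch below is a crude Monte Carlo stand-in for the procedure described above (it is not the paper's fixed-point algorithm, and it is written in Python rather than Mathcad): random amplitude-bounded bang-bang inputs are applied to an assumed second-order accelerometer model, and the largest integral-square error found over the trials, i.e. the estimated maximum output energy, is reported. The transfer-function parameters are illustrative, not those of the DJB A/1800/V.

```python
# Hedged sketch: Monte Carlo search for the maximum output energy
# (integral-square error) of a linear time-invariant accelerometer model
# over amplitude-bounded inputs.  All parameters are illustrative.
import numpy as np
from scipy import signal

w0, zeta = 2 * np.pi * 5e3, 0.02              # natural frequency, damping (assumed)
acc = signal.TransferFunction([w0 ** 2], [1, 2 * zeta * w0, w0 ** 2])
T, a = 2e-3, 1.0                              # observation time [s], input bound
t = np.linspace(0, T, 4000)

rng = np.random.default_rng(1)
best = 0.0
for _ in range(500):
    # random bang-bang input: +/- a with random switching instants
    switches = np.sort(rng.uniform(0, T, rng.integers(1, 20)))
    x = a * (-1.0) ** np.searchsorted(switches, t)
    _, y, _ = signal.lsim(acc, x, t)
    e = y - x                                 # error relative to the ideal (unity-gain) response
    best = max(best, float(np.sum(e ** 2)) * (t[1] - t[0]))   # integral-square error
print("estimated maximum output energy:", best)
```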


2015, Vol. 22 (4), pp. 601-619
Author(s): Mohammad Arkani

Abstract: In this work, a fast 32-bit one-million-channel time interval spectrometer based on field-programmable gate arrays (FPGAs) is proposed. The time resolution is adjustable down to 3.33 ns (= T, the digitization/discretization period) on the prototype system hardware. The system can collect billions of time-interval events arranged in one million timing channels. This huge number of channels makes it an ideal measuring tool for very short to very long time intervals in nuclear particle detection systems. The data are stored and updated in a built-in SRAM memory during the measurement and then transferred to the computer. Two time-to-digital converters (TDCs) working in parallel are implemented in the design to make the system immune to loss of the first short time-interval events (namely, below 10 ns in the tests performed on the prototype hardware platform). Additionally, the theory of multiple count loss is investigated analytically. Using the Monte Carlo method, count losses at rates of up to 100 million events per second (Meps) are calculated, and the effective system dead time is estimated by fitting a non-extendable dead-time model to the results (τ_NE = 2.26 ns). An important dead-time effect on a measured random process is the distortion of the time spectrum; this effect is also studied using the Monte Carlo method. The uncertainty of the system is analyzed experimentally. The standard deviation of the system is estimated as ±36.6 × T (T = 3.33 ns) for a one-second time-interval test signal (300 million T in the time interval).
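A minimal Monte Carlo sketch of the non-extendable dead-time count-loss analysis mentioned above: Poisson arrivals are generated, events falling within tau of the previously accepted event are discarded, and the simulated recorded rate is compared with the standard non-extendable model m = n / (1 + n·tau). The one-millisecond observation window is an illustrative choice; the rate and tau follow the figures quoted in the abstract.

```python
# Hedged sketch: Monte Carlo count-loss estimate for a non-extendable
# (non-paralyzable) dead time, compared with the analytical model.
import numpy as np

def recorded_rate(true_rate, tau, t_total, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.poisson(true_rate * t_total)
    arrivals = np.sort(rng.uniform(0.0, t_total, n))
    recorded, last = 0, -np.inf
    for t in arrivals:
        if t - last >= tau:               # event accepted, dead time restarts
            recorded += 1
            last = t
    return recorded / t_total

rate, tau = 100e6, 2.26e-9                # 100 Meps input, tau_NE = 2.26 ns
mc = recorded_rate(rate, tau, t_total=1e-3)   # simulate 1 ms of data (assumed window)
theory = rate / (1.0 + rate * tau)        # non-extendable dead-time model
print(f"simulated {mc:.3e} cps, model {theory:.3e} cps")
```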


Crisis, 2010, Vol. 31 (4), pp. 217-223
Author(s): Paul Yip, David Pitt, Yan Wang, Xueyuan Wu, Ray Watson, ...

Background: We study the impact of suicide-exclusion periods, common in life insurance policies in Australia, on suicide and accidental death rates among life-insured individuals. If a life-insured individual dies by suicide during the suicide-exclusion period, commonly 13 months, the sum insured is not paid. Aims: We examine whether a suicide-exclusion period affects the timing of suicides. We also analyze whether accidental deaths are more prevalent during the suicide-exclusion period, as life-insured individuals may disguise suicides as accidents. We assess the relationship between the sum insured and suicide rates. Methods: Crude and age-standardized rates of suicide, accidental death, and overall death, split by duration since the insured first bought their policy, were computed. Results: In the Australian life insurance data there were significantly fewer suicides, and no significant spike in accidental deaths, during the exclusion period. More suicides, however, were detected in the first 2 years after the exclusion period. Higher sums insured are associated with higher suicide rates. Conclusions: Adverse selection in Australian life insurance is exacerbated by the inclusion of a suicide-exclusion period. Extending the suicide-exclusion period to 3 years may prevent some "insurance-induced" suicides; a rationale for this conclusion is given.


2020, Vol. 2020 (4), pp. 25-32
Author(s): Viktor Zheltov, Viktor Chembaev

The article considers the calculation of the unified glare rating (UGR) based on the luminance spatial-angular distribution (LSAD). The local-estimation variant of the Monte Carlo method is proposed for modeling the LSAD. On the basis of the LSAD, the quality of lighting can be evaluated against many criteria, including the generally accepted UGR. The UGR allows a preliminary assessment of the level of comfort for performing a visual task in a lighting system. A new method of "pixel-by-pixel" calculation of the UGR based on the LSAD is proposed.
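The abstract does not give the computational details of the pixel-by-pixel calculation, so the following sketch only illustrates the general idea under stated assumptions: each pixel of a luminance map brighter than a threshold is treated as a small glare source with its own solid angle and Guth position index, and the standard formula UGR = 8 log10((0.25 / L_b) Σ L² ω / p²) is evaluated over those pixels. The luminance map, the threshold, and the unit position index are illustrative.

```python
# Hedged sketch of a "pixel-by-pixel" UGR evaluation from a luminance map,
# using the standard UGR formula; data and thresholds are illustrative.
import numpy as np

def ugr_pixelwise(luminance, omega_pixel, position_index, glare_threshold=500.0):
    """luminance: cd/m^2 map; omega_pixel: solid angle per pixel [sr];
    position_index: Guth index per pixel; returns the UGR value."""
    glare = luminance > glare_threshold              # pixels counted as glare sources
    background = luminance[~glare].mean()            # crude background luminance L_b
    term = (luminance[glare] ** 2) * omega_pixel / (position_index[glare] ** 2)
    return 8.0 * np.log10(0.25 / background * term.sum())

# synthetic 200x200 luminance map with one bright luminaire patch
L = np.full((200, 200), 40.0)
L[20:40, 90:110] = 2.0e4
p = np.ones_like(L)                                  # crude position index = 1 everywhere
print(ugr_pixelwise(L, omega_pixel=1e-5, position_index=p))
```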


Author(s): V.A. Mironov, S.A. Peretokin, K.V. Simonov

This article continues earlier research on software for probabilistic seismic hazard analysis (PSHA), one of the main stages of engineering seismic surveys. It provides an overview of modern PSHA software based on the Monte Carlo method and describes in detail the operation of the foreign programs OpenQuake Engine and EqHaz. A test seismic hazard calculation was carried out to compare the functionality of domestic and foreign software.
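For orientation, a toy Monte Carlo PSHA sketch is given below; it is not the workflow of OpenQuake Engine or EqHaz. Synthetic earthquake catalogues are drawn from a truncated Gutenberg-Richter law, site ground motion comes from an invented attenuation relation with lognormal scatter, and the annual rate of exceeding a PGA level is counted. All parameters are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo-based PSHA with a synthetic catalogue and a toy
# attenuation relation.  Not the methodology of any specific software package.
import numpy as np

rng = np.random.default_rng(42)
years, rate = 100_000, 0.2                    # catalogue length [yr], events/yr with M >= 5
b, m_min, m_max = 1.0, 5.0, 7.5               # Gutenberg-Richter parameters (assumed)

n = rng.poisson(rate * years)
u = rng.uniform(size=n)
beta = b * np.log(10)
# truncated exponential (Gutenberg-Richter) magnitudes via inverse-CDF sampling
mags = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
dist = rng.uniform(10.0, 100.0, n)            # epicentral distance [km], assumed uniform
# toy attenuation: ln(PGA[g]) = -3.5 + 0.8*M - 1.1*ln(R) + lognormal scatter
ln_pga = -3.5 + 0.8 * mags - 1.1 * np.log(dist) + rng.normal(0, 0.6, n)
pga = np.exp(ln_pga)

target = 0.1                                  # 0.1 g exceedance level
annual_rate = (pga > target).sum() / years
print(f"annual exceedance rate of {target} g: {annual_rate:.2e}")
```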


2019, Vol. 20 (12), pp. 1151-1157
Author(s): Alla P. Toropova, Andrey A. Toropov

Predicting the physicochemical and biochemical behavior of peptides is an important and attractive task of the modern natural sciences, since these substances play a key role in life processes. The Monte Carlo technique is one possible way to address this task. The Monte Carlo method has several applications in the study of peptides: (i) analysis of 3D configurations (conformers); (ii) establishment of quantitative structure-property/activity relationships (QSPRs/QSARs); and (iii) development of databases on biopolymers. This review discusses current ideas on the application of the Monte Carlo technique to the study of peptides and biopolymers.
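As a minimal illustration of application (i), the sketch below performs Metropolis Monte Carlo sampling of backbone dihedral angles under a toy energy function; the energy model, move size, and temperature are illustrative assumptions, whereas real conformational studies rely on molecular force fields.

```python
# Hedged sketch: Metropolis Monte Carlo over (phi, psi) backbone dihedrals of a
# short peptide with a toy periodic energy function (not a real force field).
import numpy as np

rng = np.random.default_rng(7)

def energy(phi_psi):
    # toy periodic potential over the dihedral angles (illustrative only)
    phi, psi = phi_psi.T
    return np.sum(np.cos(np.radians(phi + 60)) + np.cos(np.radians(psi + 45)))

n_res, kT, steps = 10, 1.0, 20_000
state = rng.uniform(-180, 180, (n_res, 2))          # (phi, psi) per residue
e = energy(state)
for _ in range(steps):
    trial = state + rng.normal(0, 10, state.shape)  # small random dihedral move
    trial = (trial + 180) % 360 - 180               # wrap back to [-180, 180)
    e_trial = energy(trial)
    # Metropolis acceptance criterion
    if e_trial < e or rng.uniform() < np.exp(-(e_trial - e) / kT):
        state, e = trial, e_trial
print("final toy energy:", e)
```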

