ASTIN Bulletin
Latest Publications


TOTAL DOCUMENTS: 1995 (last five years: 123)

H-INDEX: 58 (last five years: 4)

Published by Cambridge University Press

ISSN: 0515-0361

2022, pp. 1-24
Author(s): Pengcheng Zhang, David Pitt, Xueyuan Wu

Abstract: The fact that a large proportion of insurance policyholders make no claims during a one-year period highlights the importance of zero-inflated count models when analyzing the frequency of insurance claims. There is a vast literature focused on the univariate case of zero-inflated count models, while work in the area of multivariate models is considerably less advanced. Given that insurance companies write multiple lines of insurance business, where the claim counts on these lines of business are often correlated, there is a strong incentive to analyze multivariate claim count models. Motivated by the idea of Liu and Tian (Computational Statistics and Data Analysis, 83, 200–222; 2015), we develop a multivariate zero-inflated hurdle model to describe multivariate count data with extra zeros. This generalization offers more flexibility in modeling the behavior of individual claim counts while also incorporating a correlation structure between claim counts for different lines of insurance business. We develop an application of the expectation–maximization (EM) algorithm to enable the statistical inference necessary to estimate the parameters associated with our model. Our model is then applied to an automobile insurance portfolio from a major insurance company in Spain. We demonstrate that the multivariate zero-inflated hurdle model outperforms several alternatives.
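As a rough illustration of the hurdle mechanism the abstract builds on, here is a minimal univariate sketch in Python: positive counts follow a zero-truncated Poisson, and zeros come from a separate binary component. The paper's model is multivariate with a correlation structure and EM-based inference; the parameterisation and toy data below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def hurdle_poisson_negloglik(params, y):
    """Hurdle model: P(Y=0) = 1 - p; positive counts are zero-truncated Poisson(lam)."""
    logit_p, log_lam = params
    p = 1.0 / (1.0 + np.exp(-logit_p))   # probability of a positive count
    lam = np.exp(log_lam)
    ll = np.where(
        y == 0,
        np.log(1.0 - p),
        np.log(p) - lam + y * np.log(lam) - gammaln(y + 1.0)
        - np.log(1.0 - np.exp(-lam)),    # zero-truncation normaliser
    )
    return -ll.sum()

rng = np.random.default_rng(0)
y = rng.poisson(1.5, 5000) * rng.binomial(1, 0.3, 5000)  # toy counts with extra zeros
fit = minimize(hurdle_poisson_negloglik, x0=np.zeros(2), args=(y,))
print(fit.x)  # fitted (logit p, log lambda)
```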


2022, pp. 1-32
Author(s): Martin Bladt

Abstract: This paper addresses the task of modeling severity losses using segmentation when the data distribution does not fall into the usual regression frameworks. This situation is not uncommon in lines of business such as third-party liability insurance, where heavy tails and multimodality often hamper a direct statistical analysis. We propose to use regression models based on phase-type distributions, regressing on their underlying inhomogeneous Markov intensity and using an extension of the expectation–maximization algorithm. These models are interpretable and tractable in terms of multistate processes and generalize the proportional hazards specification when the dimension of the state space is larger than 1. We show that the combination of matrix parameters, inhomogeneity transforms, and covariate information provides flexible regression models that effectively capture the entire distribution of loss severities.
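For readers unfamiliar with the building block, here is a hedged sketch of evaluating a time-homogeneous phase-type density, f(x) = α exp(Sx)s with exit vector s = -S1. The paper's models additionally make the intensity inhomogeneous and covariate-dependent; the parameters below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def phase_type_pdf(x, alpha, S):
    """Density of PH(alpha, S): f(x) = alpha @ expm(S x) @ s, with s = -S @ 1."""
    s = -S @ np.ones(S.shape[0])   # exit-rate vector
    return float(alpha @ expm(S * x) @ s)

alpha = np.array([1.0, 0.0])       # start in state 1
S = np.array([[-2.0, 1.5],         # sub-intensity matrix of the Markov jump process
              [0.0, -0.5]])
print(phase_type_pdf(1.0, alpha, S))
```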


2021, pp. 1-29
Author(s): Shengwang Meng, He Wang, Yanlin Shi, Guangyuan Gao

Abstract: Novel navigation applications provide a driving behavior score, mainly based on experts' domain knowledge, for each finished trip to promote safe driving. In this paper, with automobile insurance claims data and associated telematics car driving data, we propose a supervised driving risk scoring neural network model. This one-dimensional convolutional neural network takes time series of individual car driving trips as input and returns a risk score in the unit interval (0, 1). By incorporating the credibility average risk score of each driver, the classical Poisson generalized linear model for automobile insurance claims frequency prediction can be improved significantly. Hence, compared with non-telematics-based insurers, telematics-based insurers can discover more heterogeneity in their portfolios and attract safer drivers with premium discounts.
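A schematic of what such a one-dimensional convolutional scorer could look like in Keras: convolutions over the per-second trip channels, global pooling, and a sigmoid output in (0, 1). The trip length, channels, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
import tensorflow as tf

trip_len, n_channels = 180, 3   # e.g. speed, acceleration, heading change per second (assumed)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(trip_len, n_channels)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # trip risk score in (0, 1)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```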


2021, pp. 1-41
Author(s): Jamaal Ahmad, Kristian Buchardt, Christian Furrer

Abstract: We consider computation of market values of bonus payments in multi-state with-profit life insurance. The bonus scheme consists of additional benefits bought according to a dividend strategy that depends on the past realization of financial risk, the current individual insurance risk, the number of additional benefits currently held, and so-called portfolio-wide means describing the shape of the insurance business. We formulate numerical procedures that efficiently combine simulation of financial risk with classic methods for the outstanding insurance risk. Special attention is given to the case where the number of additional benefits bought only depends on the financial risk. Methods and results are illustrated via a numerical example.
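A structural sketch of the nested procedure described above: simulate financial scenarios in an outer Monte Carlo loop and, per scenario, run a deterministic backward recursion for the insurance part. The short-rate model and the recursion below are placeholder assumptions and do not reproduce the paper's dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_years, r0 = 10_000, 30, 0.02

def insurance_value_along_path(rates):
    """Placeholder for a classic (e.g. Thiele-type) backward recursion for the
    outstanding insurance risk, given one simulated financial scenario."""
    v = 0.0
    for r in rates[::-1]:
        v = (v + 1.0) / (1.0 + r)   # toy: discount a unit bonus cash flow per year
    return v

rates = r0 + 0.01 * rng.standard_normal((n_paths, n_years)).cumsum(axis=1)  # toy rate paths
market_value = np.mean([insurance_value_along_path(p) for p in rates])
print(f"Monte Carlo market value of bonus payments: {market_value:.3f}")
```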


2021, pp. 1-23
Author(s): Tim J. Boonen, Wenjun Jiang

Abstract: This paper studies optimal insurance design from the perspective of an insured when the insurer may default on its promised indemnity. Default of the insurer leads to limited liability, and the promised indemnity is only partially recovered in the case of a default. To alleviate potential ex post moral hazard, an incentive compatibility condition is added to restrict the permissible indemnity functions. Assuming that the premium is determined as a function of the expected coverage, and under the mean–variance preference of the insured, we derive the explicit structure of the optimal indemnity function through the marginal indemnity function formulation of the problem. It is shown that the optimal indemnity function depends on the first- and second-order expectations of the random recovery rate conditional on the realized insurable loss. The methodology and results in this article complement the literature on optimal insurance subject to default risk and provide new insights into problems of similar type.
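The key inputs identified in the abstract are E[R | X] and E[R² | X], the conditional moments of the recovery rate R given the loss X. A small simulation sketch of estimating these by quantile binning; the joint distribution of (X, R) below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)        # insurable loss (toy)
R = np.clip(rng.beta(8, 2, size=X.size) - 0.002 * X, 0, 1)  # toy recovery rate, lower for large losses

bins = np.quantile(X, np.linspace(0, 1, 21))                # 20 equal-probability loss bins
idx = np.digitize(X, bins[1:-1])
m1 = np.array([R[idx == k].mean() for k in range(20)])          # estimate of E[R | X in bin k]
m2 = np.array([(R[idx == k] ** 2).mean() for k in range(20)])   # estimate of E[R^2 | X in bin k]
print(np.round(m1, 3))
print(np.round(m2, 3))
```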


2021, pp. 1-35
Author(s): Karim Barigou, Valeria Bignozzi, Andreas Tsanakas

Abstract: Current approaches to fair valuation in insurance often follow a two-step approach, combining quadratic hedging with the application of a risk measure on the residual liability, to obtain a cost-of-capital margin. In such approaches, the preferences represented by the regulatory risk measure are not reflected in the hedging process. We address this issue by an alternative two-step hedging procedure, based on generalised regression arguments, which leads to portfolios that are neutral with respect to a risk measure, such as Value-at-Risk or the expectile. First, a portfolio of traded assets aimed at replicating the liability is determined by local quadratic hedging. Second, the residual liability is hedged using an alternative objective function. The risk margin is then defined as the cost of the capital required to hedge the residual liability. When quantile regression is used in the second step, yearly solvency constraints are naturally satisfied; furthermore, the portfolio is a risk minimiser among all hedging portfolios that satisfy such constraints. We present a neural network algorithm for the valuation and hedging of insurance liabilities based on a backward iterations scheme. The algorithm is fairly general and easily applicable, as it only requires simulated paths of risk drivers.
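A hedged sketch of the second step when quantile regression is used: regress the residual liability on a traded hedging asset at a high quantile level, so the resulting position is neutral at that level. The data-generating process and the 99.5% level below are assumptions, not the paper's setup.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
asset = rng.standard_normal(10_000)                          # return of a traded hedging asset (toy)
residual = 0.6 * asset + 0.8 * rng.standard_normal(10_000)   # residual liability after step 1 (toy)

X = sm.add_constant(asset)
qr = sm.QuantReg(residual, X).fit(q=0.995)   # VaR-type solvency level (assumed)
print(qr.params)  # intercept = capital buffer, slope = hedge position in the asset
```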


2021, pp. 1-27
Author(s): Mathias Lindholm, Henning Zakrisson

Abstract: The present paper introduces a simple aggregated reserving model based on claim count and payment dynamics, which allows for claim closings and re-openings. The modelling starts off from individual Poisson process claim dynamics in discrete time, keeping track of accident year, reporting year and payment delay. This modelling approach is closely related to the one underpinning the so-called double chain-ladder model, and it allows for producing separate reported-but-not-settled and incurred-but-not-reported reserves. Even though the introduction of claim closings and re-openings will produce new types of dependencies, it is possible to use flexible parametrisations in terms of, for example, generalised linear models (GLMs) whose parameters can be estimated based on aggregated data using quasi-likelihood theory. Moreover, it is possible to obtain interpretable and explicit moment calculations, as well as consistency of normalised reserves as the number of contracts tends to infinity. Further, by having access to simple analytic expressions for moments, it is computationally cheap to bootstrap the mean squared error of prediction for reserves. The performance of the model is illustrated using a flexible GLM parametrisation evaluated on non-trivial simulated claims data. This numerical illustration indicates a clear improvement compared with models not taking claim closings and re-openings into account. The results are also seen to be of comparable quality with machine learning models for aggregated data not taking claim openness into account.
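As an indication of the estimation strategy, a minimal over-dispersed Poisson GLM on an aggregated run-off triangle, fitted with a Pearson-X² dispersion estimate in the quasi-likelihood spirit. The toy triangle below is illustrative and ignores the paper's closing/re-opening dynamics.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
# Toy incremental counts indexed by accident year i and delay j (upper triangle only)
df = pd.DataFrame([(i, j, rng.poisson(100 * np.exp(-0.7 * j)))
                   for i in range(8) for j in range(8 - i)],
                  columns=["acc_year", "delay", "count"])

glm = smf.glm("count ~ C(acc_year) + C(delay)", data=df,
              family=sm.families.Poisson()).fit(scale="X2")  # quasi-Poisson dispersion
print(f"estimated dispersion: {glm.scale:.2f}")
print(glm.params)
```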


2021, pp. 1-28
Author(s): Simon Schnürch, Ralf Korn

Abstract: The Lee–Carter model has become a benchmark in stochastic mortality modeling. However, its forecasting performance can be significantly improved upon by modern machine learning techniques. We propose a convolutional neural network (NN) architecture for mortality rate forecasting, empirically compare this model as well as other NN models to the Lee–Carter model and find that lower forecast errors are achievable for many countries in the Human Mortality Database. We provide details on the errors and forecasts of our model to make it more understandable and, thus, more trustworthy. As NNs by default only yield point estimates, previous works applying them to mortality modeling have not investigated prediction uncertainty. We address this gap in the literature by implementing a bootstrapping-based technique and demonstrate that it yields highly reliable prediction intervals for our NN model.
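A compact sketch of the residual-bootstrap idea behind such prediction intervals: refit on resampled data B times and take empirical quantiles of the forecasts, adding resampled residuals for process noise. A linear trend model stands in for the NN to keep the example short; the data and model are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1990, 2020)
log_mx = -4.0 - 0.02 * (years - 1990) + 0.05 * rng.standard_normal(years.size)  # toy log death rates

coef = np.polyfit(years, log_mx, 1)
resid = log_mx - np.polyval(coef, years)

B, future = 500, np.arange(2020, 2030)
fcsts = np.empty((B, future.size))
for b in range(B):
    boot = np.polyval(coef, years) + rng.choice(resid, years.size)     # resampled training data
    fcsts[b] = (np.polyval(np.polyfit(years, boot, 1), future)
                + rng.choice(resid, future.size))                      # add process noise

lo, hi = np.percentile(fcsts, [2.5, 97.5], axis=0)   # 95% prediction band per horizon
print(lo[0], hi[0])
```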


2021, pp. 1-43
Author(s): Dilan SriDaran, Michael Sherris, Andrés M. Villegas, Jonathan Ziveyi

Abstract: Given the rapid reductions in human mortality observed over recent decades and the uncertainty associated with their future evolution, a large number of mortality projection models have been proposed by actuaries and demographers in recent years. Many of these, however, suffer from being overly complex, thereby producing spurious forecasts, particularly over long horizons and for small, noisy data sets. In this paper, we exploit statistical learning tools, namely group regularisation and cross-validation, to provide a robust framework for constructing discrete-time mortality models by automatically selecting the most appropriate functions to best describe and forecast particular data sets. Most importantly, this approach produces bespoke models using a trade-off between complexity (to draw as much insight as possible from limited data sets) and parsimony (to prevent over-fitting to noise), with this trade-off designed to have specific regard to the forecasting horizon of interest. This is illustrated using both empirical data from the Human Mortality Database and simulated data, using code that has been made available within the user-friendly open-source R package StMoMo.
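A schematic of the selection mechanism: cross-validated regularisation over a dictionary of candidate functions, where zeroed coefficients correspond to terms dropped from the model. Plain lasso from scikit-learn stands in here for the paper's group regularisation, and the basis and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
ages = np.arange(20, 100)
basis = np.column_stack([ages, ages**2, np.log(ages),          # candidate functions of age
                         np.sin(ages / 10), np.cos(ages / 10)])
y = -9.0 + 0.09 * ages + 0.1 * rng.standard_normal(ages.size)  # toy log death rates

Z = (basis - basis.mean(0)) / basis.std(0)   # standardise so the penalty is comparable
model = LassoCV(cv=5).fit(Z, y)              # penalty strength chosen by cross-validation
print(model.coef_)   # (near-)zero coefficients = candidate functions discarded
```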


2021, pp. 1-26
Author(s): A. Nii-Armah Okine, Edward W. Frees, Peng Shi

Abstract: In non-life insurance, the payment history can be predictive of the timing of a settlement for individual claims. Ignoring the association between the payment process and the settlement process could bias the prediction of outstanding payments. To address this issue, we introduce into the literature of micro-level loss reserving a joint modeling framework that incorporates the longitudinal payments of a claim into the intensity process of claim settlement. We discuss statistical inference and focus on the prediction aspects of the model. We demonstrate applications of the proposed model in reserving practice with a detailed empirical analysis using data from a property insurance provider. The prediction results from an out-of-sample validation show that the joint model framework outperforms existing reserving models that ignore the payment–settlement association.
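A two-stage approximation of the joint-modelling idea, for illustration only: a per-claim payment summary enters a Cox model for the settlement intensity. The paper estimates the payment and settlement processes jointly; the toy data, the cumulative-payment feature, and the censoring horizon below are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000
cum_paid = rng.gamma(2.0, 5.0, n)                            # stage 1: payment-history summary (toy)
settle_time = rng.exponential(6.0, n) * (1 + 0.05 * cum_paid)  # toy: larger claims settle later
df = pd.DataFrame({
    "duration": np.minimum(settle_time, 24.0),               # censor open claims at 24 months
    "settled": (settle_time <= 24.0).astype(int),
    "cum_paid": cum_paid,
})

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="settled")
cph.print_summary()  # sign of the cum_paid coefficient = payment-settlement association
```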

