Annals of Actuarial Science
Latest Publications

TOTAL DOCUMENTS: 365 (five years: 68)
H-INDEX: 16 (five years: 3)
Published by: Cambridge University Press
ISSN: 1748-5002, 1748-4995

2021, pp. 1-30
Author(s): Yu Fu, Michael Sherris, Mengyi Xu

Abstract China and the US are two contrasting countries in terms of functional disability and long-term care. China is experiencing declining family support for long-term care and developing private long-term care insurance. The US has a more developed public aged care system and private long-term care insurance market than China. Changes in the demand for long-term care are driven by the levels, trends and uncertainty in mortality and functional disability. To understand the future potential demand for long-term care, we compare mortality and functional disability experiences in China and the US, using a multi-state latent factor intensity model with time trends and systematic uncertainty in transition rates. We estimate the model with the Chinese Longitudinal Healthy Longevity Survey (CLHLS) and the US Health and Retirement Study (HRS) data. The estimation results show that if trends continue, both countries will experience longevity improvement with morbidity compression and a declining proportion of the older population with functional disability. Although the elderly Chinese have a shorter estimated life expectancy, they are expected to spend a smaller proportion of their future lifetime functionally disabled than the elderly Americans. Systematic uncertainty is shown to be significant in future trends in disability rates and our model estimates higher uncertainty in trends for the Chinese elderly, especially for urban residents.
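
To make the structure of such a multi-state disability model concrete, the following is a minimal simulation sketch of a three-state healthy/disabled/dead process with a calendar-time trend in the transition intensities. It is not the authors' latent factor intensity model: the systematic latent factor is omitted and all parameter values are illustrative assumptions, not CLHLS or HRS estimates.

```r
# Illustrative three-state model: 1 = healthy, 2 = functionally disabled, 3 = dead.
# Gompertz-type intensities in age with a simple calendar-time trend (assumed values).
intensity <- function(age, year, a, b, trend) {
  exp(a + b * age + trend * (year - 2020))
}

simulate_path <- function(age0 = 65, year0 = 2020, horizon = 40) {
  state <- 1
  years_disabled <- 0
  for (t in seq_len(horizon)) {
    age  <- age0 + t - 1
    year <- year0 + t - 1
    # Annual transition probabilities derived from the (assumed) intensities
    q_hd <- 1 - exp(-intensity(age, year, -9.0, 0.09, -0.01))  # healthy -> disabled
    q_hx <- 1 - exp(-intensity(age, year, -9.5, 0.10, -0.02))  # healthy -> dead
    q_dx <- 1 - exp(-intensity(age, year, -8.0, 0.10, -0.02))  # disabled -> dead
    u <- runif(1)
    if (state == 1) {
      if (u < q_hx) state <- 3 else if (u < q_hx + q_hd) state <- 2
    } else if (state == 2) {
      if (u < q_dx) state <- 3
    }
    if (state == 3) break
    if (state == 2) years_disabled <- years_disabled + 1
  }
  c(lifetime = t, disabled = years_disabled)
}

set.seed(1)
paths <- t(replicate(5000, simulate_path()))
colMeans(paths)                                   # expected future lifetime and years disabled
mean(paths[, "disabled"] / paths[, "lifetime"])   # share of remaining lifetime spent disabled
```

The last quantity corresponds to the abstract's comparison of interest: the proportion of future lifetime spent functionally disabled.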


2021, Vol. 15 (3), pp. 485-487
Author(s): Chris Dolman, Edward (Jed) Frees, Fei Huang

2021, pp. 1-27
Author(s): Hamza Hanbali

Abstract This paper investigates the benefits of incorporating diversification effects into the pricing process of insurance policies from two different business lines. The paper shows that, for the same risk reduction, insurers pricing policies jointly can have a competitive advantage over those pricing them separately. However, the choice of competitiveness constrains the underwriting flexibility of joint pricers. The paper goes a step further by modelling explicitly the relationship between premiums and the number of customers in each line. Using the total collected premiums as a criterion to compare the competing strategies, the paper provides conditions for the optimal pricing decision based on policyholders’ sensitivity to price discounts. The results are illustrated for a portfolio of annuities and assurances. Further, using non-life data from the Brazilian insurance market, an empirical exploration shows that most pairs satisfy the condition for being priced jointly, even when pairwise correlations are high.
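
As a toy numerical illustration of the diversification benefit from joint pricing, the sketch below loads premiums on the standard deviation of losses (an assumed standard-deviation premium principle, not the paper's pricing model); all means, standard deviations and the correlation are made-up figures.

```r
# Two business lines with assumed means, standard deviations and correlation
mu    <- c(100, 80)     # expected losses for line 1 and line 2 (assumed)
sigma <- c(30, 25)      # stand-alone standard deviations (assumed)
rho   <- 0.3            # assumed dependence between the two lines
theta <- 0.5            # loading applied to the standard deviation

# Separate pricing: each line is loaded on its own standard deviation
prem_separate <- mu + theta * sigma

# Joint pricing: the loading is applied to the standard deviation of the aggregate,
# which is smaller than the sum of stand-alone standard deviations whenever rho < 1
sd_sum     <- sqrt(sigma[1]^2 + sigma[2]^2 + 2 * rho * sigma[1] * sigma[2])
prem_joint <- sum(mu) + theta * sd_sum

sum(prem_separate)                 # total premium required by separate pricers
prem_joint                         # total premium required by the joint pricer
sum(prem_separate) - prem_joint    # diversification margin available for discounts
```

The gap between the two totals is the margin a joint pricer can spend on discounts, which is where the trade-off between competitiveness and underwriting flexibility discussed in the paper arises.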


2021, pp. 1-27
Author(s): Michel Denuit, Christian Y. Robert

Abstract Conditional mean risk sharing appears to be an effective way to distribute total losses amongst participants within an insurance pool. This paper develops analytical results for this allocation rule in the individual risk model with dependence induced by the respective positions within a graph. More precisely, losses are modelled by zero-augmented random variables whose joint occurrence distribution and individual claim amount distributions are based on network structures and can be characterised by graphical models. The Ising model is adopted for occurrences, and loss amounts obey decomposable graphical models that are specific to each participant. Two graphical structures are thus used: the first describes the contagion amongst member units within the insurance pool, and the second models the spread of losses inside each participating unit. The proposed individual risk model is typically useful for modelling operational risks, catastrophic risks or cybersecurity risks.
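
The allocation rule itself is easy to illustrate by simulation: each participant pays the conditional expectation of its own loss given the pool total, E[X_i | S]. The sketch below uses a simple common-shock occurrence model as a stand-in for the paper's Ising/graphical structure, with all parameters assumed for illustration.

```r
set.seed(42)
n_pool <- 5        # participants
n_sim  <- 1e5      # simulated scenarios

# Assumed occurrence model: a common shock raises every participant's claim probability
shock <- rbinom(n_sim, 1, 0.2)
p_ind <- 0.05 + 0.25 * shock
occ   <- matrix(rbinom(n_sim * n_pool, 1, rep(p_ind, n_pool)), nrow = n_sim)

# Assumed severities: lognormal claim amounts given occurrence
sev <- matrix(rlnorm(n_sim * n_pool, meanlog = 8, sdlog = 1), nrow = n_sim)
X   <- occ * sev
S   <- rowSums(X)

# Conditional mean risk sharing: approximate E[X_i | S] by local averaging over bins of S
bins  <- cut(S, breaks = unique(quantile(S, probs = seq(0, 1, 0.05))),
             include.lowest = TRUE)
alloc <- apply(X, 2, function(x) ave(x, bins, FUN = mean))

# Sanity check: allocations add back to the (bin-averaged) total loss
summary(rowSums(alloc) - ave(S, bins, FUN = mean))
```

The binning is a crude nonparametric stand-in for the closed-form conditional expectations that the paper derives analytically under its graphical models.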


2021, pp. 1-32
Author(s): Ioannis Badounas, Apostolos Bozikas, Georgios Pitselis

Abstract It is well known that the presence of outliers can lead to mis-estimation (under- or overestimation) of the overall reserve in the chain-ladder method when it is formulated as a linear regression model under the assumption that the coefficients are fixed and identical from one observation to another. Relaxing the usual regression assumptions and applying a regression with randomly varying coefficients produces a similar phenomenon, i.e., mis-estimation of the overall reserves. The lack of robustness of the incremental payment estimators in loss reserving regression with random coefficients motivates this paper, which applies robust statistical procedures to loss reserving estimation when the regression coefficients are random. Numerical results of the proposed method are illustrated and compared with the results obtained by linear regression with fixed coefficients.
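
The effect of an outlier on a reserving regression, and the benefit of a robust fit, can be sketched in a few lines. The example below is a deliberate simplification of the paper's setting: coefficients are fixed rather than random, and robustness comes from Huber M-estimation via MASS::rlm; the data-generating parameters and the outlier are assumptions for illustration.

```r
# Illustrative sketch: one contaminated cell in a log-linear incremental-payments
# regression, fitted by OLS and by a robust M-estimator.
library(MASS)   # recommended package shipped with R; provides rlm()

set.seed(7)
dev   <- rep(1:6, times = 6)                 # development period
acc   <- rep(1:6, each = 6)                  # accident year (illustrative layout)
mu    <- 10 - 0.4 * dev + 0.1 * acc          # assumed log-scale mean
y     <- rnorm(length(mu), mean = mu, sd = 0.15)
y[14] <- y[14] + 3                           # contaminate one cell with an outlier

fit_ols    <- lm(y ~ dev + acc)
fit_robust <- rlm(y ~ dev + acc)             # Huber M-estimation downweights the outlier

# Compare fitted totals back on the original scale, and the estimated coefficients
sum(exp(fitted(fit_ols)))
sum(exp(fitted(fit_robust)))
coef(fit_ols); coef(fit_robust)
```

The OLS fit is pulled towards the contaminated cell, inflating the fitted totals; the robust fit stays close to the uncontaminated structure, which is the behaviour the paper seeks for its random-coefficient reserving estimators.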


2021, pp. 1-29
Author(s): Douglas Andrews, Stephen Bonnar, Lori J. Curtis, Jaideep S. Oberoi, Aniketh Pittea, ...

Abstract We examine the impact of asset allocation and contribution rates on the risk of defined benefit (DB) pension schemes, using both a run-off horizon and a shorter 3-year time horizon. Over the 3-year horizon, which is typically preferred by regulators, a high bond allocation reduces the spread of the distribution of surplus. However, this result is reversed when examined on a run-off basis. Furthermore, under both the 3-year horizon and the run-off, the higher bond allocation reduces the median level of surplus. Pressure on the affordability of DB schemes has led to widespread implementation of so-called de-risking strategies, such as moving away from predominantly equity investments towards greater bond investments. If the incentives produced by shorter-term risk assessments are contributing to this shift, they might be harming the long-term financial health of the schemes. Contribution rates have a comparatively smaller impact on risk than asset allocation.
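
The horizon effect described here can be reproduced with a very simple Monte Carlo sketch: compare the surplus distribution of a high-equity and a high-bond allocation over 3 years and over a long run-off. The return, liability-growth and allocation parameters below are assumptions for illustration, not the paper's stochastic model.

```r
set.seed(123)
n_sim    <- 10000
horizons <- c(short = 3, runoff = 40)
# Assumed annual real returns and liability growth
mu_eq <- 0.05; sd_eq <- 0.18
mu_bd <- 0.01; sd_bd <- 0.06
liab_growth <- 0.02
allocations <- c(high_equity = 0.8, high_bond = 0.2)   # equity weights (assumed)

surplus <- function(w_eq, horizon) {
  eq <- matrix(rnorm(n_sim * horizon, mu_eq, sd_eq), nrow = n_sim)
  bd <- matrix(rnorm(n_sim * horizon, mu_bd, sd_bd), nrow = n_sim)
  assets <- apply(1 + w_eq * eq + (1 - w_eq) * bd, 1, prod)   # terminal asset index
  liab   <- (1 + liab_growth)^horizon                         # deterministic liability index
  assets - liab
}

for (h in names(horizons)) {
  for (a in names(allocations)) {
    s <- surplus(allocations[a], horizons[h])
    cat(h, a, "median:", round(median(s), 3),
        "5%-95% spread:", round(diff(quantile(s, c(0.05, 0.95)))), "\n")
  }
}
```

Under these assumptions the bond-heavy mix narrows the 3-year spread but delivers a lower median surplus, and over the run-off horizon the equity-heavy mix no longer looks riskier by the same spread measure, which is the qualitative pattern the abstract reports.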


2021, pp. 1-28
Author(s): Nhan Huynh, Mike Ludkovski

Abstract We investigate joint modelling of longevity trends using the spatial statistical framework of Gaussian process (GP) regression. Our analysis is motivated by the Human Mortality Database (HMD) that provides unified raw mortality tables for nearly 40 countries. Yet few stochastic models exist for handling more than two populations at a time. To bridge this gap, we leverage a spatial covariance framework from machine learning that treats populations as distinct levels of a factor covariate, explicitly capturing the cross-population dependence. The proposed multi-output GP models straightforwardly scale up to a dozen populations and moreover intrinsically generate coherent joint longevity scenarios. In our numerous case studies, we investigate predictive gains from aggregating mortality experience across nations and genders, including by borrowing the most recently available “foreign” data. We show that in our approach, information fusion leads to more precise (and statistically more credible) forecasts. We implement our models in R, as well as a Bayesian version in Stan that provides further uncertainty quantification regarding the estimated mortality covariance structure. All examples utilise public HMD datasets.
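
A minimal sketch of the key modelling idea, treating population as a factor covariate with its own cross-population covariance matrix multiplied by a squared-exponential kernel in calendar year. This is not the authors' R/Stan code or HMD data: the kernel, hyperparameters, noise level and toy observations are all assumptions for illustration.

```r
# Factor (coregionalisation-style) covariance for a two-population GP:
# k((t, p), (t', p')) = B[p, p'] * exp(-(t - t')^2 / (2 * l^2))
sq_exp <- function(t1, t2, l = 10) exp(-outer(t1, t2, "-")^2 / (2 * l^2))

years <- 1990:2015
pops  <- c(1, 2)                               # two populations as factor levels
X     <- expand.grid(year = years, pop = pops)
B     <- matrix(c(1.0, 0.8, 0.8, 1.0), 2, 2)   # assumed cross-population covariance

k_joint <- function(Xa, Xb) {
  B[Xa$pop, Xb$pop] * sq_exp(Xa$year, Xb$year)
}

# Toy log-mortality index: shared downward trend plus a population offset and noise
set.seed(9)
y <- -0.02 * (X$year - 1990) + 0.1 * (X$pop == 2) + rnorm(nrow(X), sd = 0.05)

X_new <- expand.grid(year = 2016:2030, pop = pops)
K     <- k_joint(X, X) + diag(0.05^2, nrow(X))   # observation noise variance (assumed)
K_s   <- k_joint(X_new, X)
post_mean <- K_s %*% solve(K, y)                 # joint forecasts, coherent across populations
head(cbind(X_new, forecast = as.vector(post_mean)))
```

Because both populations share the year kernel and are linked through B, the forecasts borrow strength across populations and cannot drift apart arbitrarily, which is the coherence property highlighted in the abstract.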


2021, pp. 1-26
Author(s): Silvana M. Pesenti, Alberto Bettini, Pietro Millossovich, Andreas Tsanakas

Abstract The Scenario Weights for Importance Measurement (SWIM) package implements a flexible sensitivity analysis framework, based primarily on results and tools developed by Pesenti et al. (2019). SWIM provides a stressed version of a stochastic model, subject to model components (random variables) fulfilling given probabilistic constraints (stresses). Possible stresses can be applied on moments, probabilities of given events, and risk measures such as Value-At-Risk and Expected Shortfall. SWIM operates upon a single set of simulated scenarios from a stochastic model, returning scenario weights, which encode the required stress and allow monitoring the impact of the stress on all model components. The scenario weights are calculated to minimise the relative entropy with respect to the baseline model, subject to the stress applied. As well as calculating scenario weights, the package provides tools for the analysis of stressed models, including plotting facilities and evaluation of sensitivity measures. SWIM does not require additional evaluations of the simulation model or explicit knowledge of its underlying statistical and functional relations; hence, it is suitable for the analysis of black box models. The capabilities of SWIM are demonstrated through a case study of a credit portfolio model.
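
The underlying idea of scenario weights can be illustrated in base R without the package: for a Value-at-Risk stress, the weights that minimise relative entropy are piecewise constant below and above the stressed quantile. The sketch below uses an assumed lognormal baseline and does not call SWIM's own functions.

```r
# Reweight baseline simulations so that the stressed VaR of one component hits a target.
set.seed(11)
n      <- 1e5
x      <- rlnorm(n, meanlog = 0, sdlog = 0.5)    # assumed baseline model component
alpha  <- 0.9
q_base <- quantile(x, alpha)
q_new  <- 1.1 * q_base                           # stress: push VaR_90% up by 10%

# Minimum-entropy weights for a VaR stress: constant below and above the new quantile
p_below <- mean(x <= q_new)
w <- ifelse(x <= q_new, alpha / p_below, (1 - alpha) / (1 - p_below))

# Checks: weights average to one and the weighted VaR matches the target
mean(w)
ord <- order(x)
cw  <- cumsum(w[ord]) / sum(w)
x[ord][which(cw >= alpha)[1]]   # weighted alpha-quantile, close to q_new
q_new
```

The same weight vector can then be used to recompute means, quantiles or sensitivity measures of every other model component, which is how a single simulated scenario set supports the whole stressed analysis.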


2021, pp. 1-27
Author(s): Gareth W. Peters, Hongxuan Yan, Jennifer Chan

Abstract Understanding the core statistical properties and data features of mortality data is fundamental to the development of machine learning methods for demographic and actuarial applications of mortality projection. The study of statistical features in such data forms the basis for classification, regression and forecasting tasks. In particular, an understanding of the key statistical structure in such data can improve accuracy in mortality projection and forecasting when constructing life tables. The ability to accurately forecast mortality is critical for demography, life insurance product design and pricing, pension planning and insurance-based decision risk management. Though many stylised facts of mortality data have been discussed in the literature, we provide evidence for a novel, as yet unexplored statistical feature that is pervasive in mortality data at a national level. Specifically, we demonstrate, first, strong evidence for the existence of long memory features in mortality data and, second, that such long memory structures display multifractality, a statistical feature that can act as a discriminator of mortality dynamics by age, gender and country. To achieve this, we first outline how we choose to represent the persistence of long memory from an estimator perspective. We make a natural link between a class of long memory features and an attribute of stochastic processes based on fractional Brownian motion. This allows us to use well-established estimators of the Hurst exponent to study the long memory features of mortality data robustly and accurately. We then introduce to mortality analysis the notion, from data science, of multifractality, which allows us to study the long memory persistence of mortality data on different timescales, and we demonstrate its accuracy for sample sizes commensurate with national-level age-term-structure historical mortality records. A series of synthetic studies, as well as a comprehensive analysis of real mortality death count data, demonstrates the pervasiveness of long memory structures in mortality data; both mono-fractal and multifractal functional features are verified to be present as stylised facts of national-level mortality data for most countries and most age groups by gender. We conclude by demonstrating how such features can be used in kernel clustering and mortality model forecasting to improve these actuarial applications.
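
As a generic illustration of the kind of long memory check involved (not the authors' estimators or HMD data), the sketch below simulates fractional Gaussian noise with an assumed Hurst exponent and recovers it with the aggregated-variance estimator, in which the variance of block means scales like m^(2H - 2).

```r
# Exact fGn simulation via the Cholesky factor of its autocovariance matrix
fgn_sim <- function(n, H) {
  k    <- 0:(n - 1)
  acov <- 0.5 * (abs(k + 1)^(2 * H) - 2 * abs(k)^(2 * H) + abs(k - 1)^(2 * H))
  L    <- chol(toeplitz(acov))
  as.vector(t(L) %*% rnorm(n))
}

# Aggregated-variance estimator of H
hurst_aggvar <- function(x, block_sizes = 2^(2:7)) {
  v <- sapply(block_sizes, function(m) {
    nb <- floor(length(x) / m)
    var(colMeans(matrix(x[1:(nb * m)], nrow = m)))
  })
  fit <- lm(log(v) ~ log(block_sizes))
  unname(1 + coef(fit)[2] / 2)     # slope = 2H - 2  =>  H = 1 + slope/2
}

set.seed(2021)
hurst_aggvar(fgn_sim(2000, H = 0.8))   # long memory: estimate well above 0.5, near 0.8
hurst_aggvar(rnorm(2000))              # no memory: estimate near 0.5
```

Applying such estimators over different timescales and age groups is, loosely, the step that the multifractal analysis in the paper generalises.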

