COMPOUND POISSON CLAIMS RESERVING MODELS: EXTENSIONS AND INFERENCE

2018 ◽  
Vol 48 (3) ◽  
pp. 1137-1156 ◽  
Author(s):  
Shengwang Meng ◽  
Guangyuan Gao

Abstract
We consider compound Poisson claims reserving models applied to the paid claims and to the number of payments run-off triangles. We extend the standard Poisson-gamma assumption to account for over-dispersion in the payment counts and to account for various mean and variance structures in the individual payments. Two generalized linear models are applied consecutively to predict the unpaid claims. A bootstrap is used to estimate the mean squared error of prediction and to simulate the predictive distribution of the unpaid claims. We show that the extended compound Poisson models make reasonable predictions of the unpaid claims.
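As an illustration of the general approach (not the authors' code), the following Python sketch fits an over-dispersed Poisson GLM to a made-up incremental run-off triangle using statsmodels and sums the predicted lower triangle into a reserve. The triangle values and the quasi-likelihood dispersion choice are assumptions for illustration only.

```python
# Minimal sketch: over-dispersed Poisson GLM on an incremental run-off
# triangle. The data and parametrization below are invented.
import numpy as np
import statsmodels.api as sm

# Incremental paid claims, origin year i (rows) x development year j (cols).
tri = np.array([
    [451, 339, 161,  85],
    [512, 401, 190, np.nan],
    [556, 430, np.nan, np.nan],
    [601, np.nan, np.nan, np.nan],
])
n = tri.shape[0]
ii, jj = np.indices(tri.shape)
obs = ~np.isnan(tri)

# Design matrix: intercept + origin-year and development-year factors.
X = np.column_stack([np.ones(obs.sum())] +
                    [(ii[obs] == k).astype(float) for k in range(1, n)] +
                    [(jj[obs] == k).astype(float) for k in range(1, n)])
y = tri[obs]

# Quasi-likelihood fit: Poisson mean structure with the dispersion
# estimated from Pearson residuals (scale="X2") to allow over-dispersion.
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")

# Predict the unobserved lower triangle and sum to get the reserve.
Xl = np.column_stack([np.ones((~obs).sum())] +
                     [(ii[~obs] == k).astype(float) for k in range(1, n)] +
                     [(jj[~obs] == k).astype(float) for k in range(1, n)])
reserve = fit.predict(Xl).sum()
print(f"estimated dispersion: {fit.scale:.2f}, reserve: {reserve:.0f}")

# A residual bootstrap (one common scheme) would resample rescaled Pearson
# residuals, rebuild pseudo-triangles, refit, and collect simulated reserves
# to approximate the predictive distribution of the unpaid claims.
```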

2019 ◽  
Vol 14 (1) ◽  
pp. 93-128 ◽  
Author(s):  
Mathias Lindholm ◽  
Filip Lindskog ◽  
Felix Wahl

Abstract
This paper studies estimation of the conditional mean squared error of prediction, conditional on what is known at the time of prediction. The particular problem considered is the assessment of actuarial reserving methods given data in the form of run-off triangles (trapezoids), where the use of prediction assessment based on out-of-sample performance is not an option. The prediction assessment principle advocated here can be viewed as a generalisation of Akaike’s final prediction error. A direct application of this simple principle in the setting of a data-generating process given in terms of a sequence of general linear models yields an estimator of the conditional mean squared error of prediction that can be computed explicitly for a wide range of models within this model class. Mack’s distribution-free chain ladder model and the corresponding estimator of the prediction error for the ultimate claim amount are shown to be a special case. It is demonstrated that the prediction assessment principle easily applies to quite different data-generating processes and results in estimators that have been studied in the literature.
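Since Mack's distribution-free chain ladder arises as a special case, a self-contained numpy sketch of Mack's estimator of the conditional prediction error may help fix ideas. The cumulative triangle below is invented, and the tail-variance rule is Mack's usual extrapolation convention.

```python
import numpy as np

# Cumulative paid claims; rows = origin years, cols = development years.
# Values are illustrative only.
C = np.array([
    [1001, 1855, 2423, 2988],
    [1113, 2103, 2774, np.nan],
    [1265, 2433, np.nan, np.nan],
    [1490, np.nan, np.nan, np.nan],
])
n = C.shape[0]

# Chain-ladder factors and Mack's variance estimates per development step.
f = np.zeros(n - 1)
sigma2 = np.zeros(n - 1)
for k in range(n - 1):
    rows = n - 1 - k            # rows with both columns k and k+1 observed
    f[k] = C[:rows, k + 1].sum() / C[:rows, k].sum()
    if rows > 1:
        dev = C[:rows, k + 1] / C[:rows, k] - f[k]
        sigma2[k] = (C[:rows, k] * dev**2).sum() / (rows - 1)
# Mack's convention for the last step, where sigma2 cannot be estimated:
sigma2[-1] = min(sigma2[-2]**2 / sigma2[-3], sigma2[-2], sigma2[-3])

# Complete the triangle with the chain-ladder predictions.
Chat = C.copy()
for i in range(1, n):
    for k in range(n - i, n):
        Chat[i, k] = Chat[i, k - 1] * f[k - 1]

# Mack's conditional MSEP for the ultimate claim of each origin year.
for i in range(1, n):
    terms = 0.0
    for k in range(n - 1 - i, n - 1):
        terms += sigma2[k] / f[k]**2 * (1 / Chat[i, k] + 1 / C[:n - 1 - k, k].sum())
    mse = Chat[i, -1]**2 * terms
    print(f"origin {i}: ultimate {Chat[i, -1]:.0f}, prediction se {np.sqrt(mse):.0f}")
```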


2019 ◽  
Vol 49 (3) ◽  
pp. 763-786 ◽  
Author(s):  
Patrizia Gigante ◽  
Liviana Picech ◽  
Luciano Sigalotti

Abstract
Claims reserving models are usually based on data recorded in run-off tables, according to the origin and the development years of the payments. The amounts on the same diagonal are paid in the same calendar year and are influenced by some common effects, for example claims inflation, that can induce dependence among payments. We introduce hierarchical generalized linear models (HGLM) with risk parameters related to the origin and the calendar years, in order to model the dependence among payments of both the same origin year and the same calendar year. Besides the random effects, the linear predictor also includes fixed effects. All the parameters are estimated within the model by the h-likelihood approach. The prediction for the outstanding claims and an approximate formula to evaluate the mean squared error of prediction are obtained. Moreover, a parametric bootstrap procedure is delineated to obtain an estimate of the predictive distribution of the outstanding claims. A Poisson-gamma HGLM with origin and calendar year effects is studied extensively and a numerical example is provided. We find that the estimated correlations can be significant for payments in the same calendar year and that the inclusion of calendar effects can have a considerable impact on the prediction uncertainty.
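h-likelihood estimation is too involved for a short example, but the model structure can be simulated in a few lines. The sketch below draws a Poisson triangle whose means carry unit-mean gamma risk parameters for both origin and calendar (diagonal) years, which is one way to induce the diagonal dependence the paper models; all numerical choices are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6  # origin years 0..n-1, development years 0..n-1

# Fixed effects: log origin-year levels and a decaying development pattern.
alpha = rng.normal(7.0, 0.1, n)
beta = -0.5 * np.arange(n)

# Gamma risk parameters with unit mean: one per origin year and one per
# calendar year t = i + j, inducing dependence along diagonals.
u_origin = rng.gamma(shape=50, scale=1 / 50, size=n)
u_cal = rng.gamma(shape=50, scale=1 / 50, size=2 * n - 1)

tri = np.full((n, n), np.nan)
for i in range(n):
    for j in range(n - i):                # observed upper triangle only
        mu = np.exp(alpha[i] + beta[j]) * u_origin[i] * u_cal[i + j]
        tri[i, j] = rng.poisson(mu)
print(tri)
```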


1990 ◽  
Vol 15 (1) ◽  
pp. 9-38 ◽  
Author(s):  
Albert E. Beaton ◽  
Eugene G. Johnson

The average response method (ARM) of scaling nonbinary data was developed to scale the data from the assessments of writing conducted by the National Assessment of Educational Progress (NAEP). The ARM applies linear models and multiple imputation technologies to characterize the predictive distribution of the person-level average of ratings over a pool of exercises when each person has responded to only a few of the exercises. The derivations of “plausible values” from the individual-level distributions of potential scale scores are given. Conditions are provided for the unbiasedness of estimates based on the plausible values, and the potential magnitude of the bias when the conditions are not met is indicated. Also discussed is how the plausible values allow for an accounting of the uncertainties due to the sampling of individuals and to the incomplete information on each sampled individual. The technique is illustrated using data from the assessment of writing.
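The plausible-values machinery can be miniaturized as follows: draw multiple imputations from each person's posterior for the latent score and combine group-level estimates with Rubin's rules. The normal posteriors below are hypothetical stand-ins for the ARM's conditional distributions, not the NAEP model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_pv = 2000, 5

# Hypothetical stand-ins: each person's latent average rating has a normal
# posterior given their few observed exercises.
post_mean = rng.normal(0.0, 1.0, n_persons)
post_sd = np.full(n_persons, 0.6)

# Draw plausible values: multiple imputations from each posterior.
pv = post_mean[:, None] + post_sd[:, None] * rng.standard_normal((n_persons, n_pv))

# Rubin's rules for a group mean: average the per-imputation estimates,
# then combine within- and between-imputation variance.
est = pv.mean(axis=0)                      # one estimate per plausible value
qbar = est.mean()
within = (pv.var(axis=0, ddof=1) / n_persons).mean()
between = est.var(ddof=1)
total_var = within + (1 + 1 / n_pv) * between
print(f"group mean {qbar:.3f} +/- {np.sqrt(total_var):.3f}")
```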


2019 ◽  
Author(s):  
Mohammadreza Bahadorian ◽  
Christoph Zechner ◽  
Carl Modes

Many systems in biology and beyond employ collaborative, collective communication strategies for improved efficiency and adaptive benefit. One such paradigm of particular interest is the community estimation of a dynamic signal: for example, an epithelial tissue whose cells must decide whether to react to a given dynamic external concentration of stress-signaling molecules. At the level of dynamic cellular communication, however, it remains unknown what effect, if any, arises from communication beyond the mean-field level. What are the limits and benefits of communication across a network of neighbor interactions? What is the role of Poissonian vs. super-Poissonian dynamics in such a setting? How does the particular topology of connections impact the collective estimation and that of the individual participating cells? In this letter we construct a robust and general framework of signal estimation over continuous-time Markov chains in order to address and answer these questions. Our results show that in the case of Poissonian estimators, communication solely enhances the convergence speed of the mean squared error (MSE) of the estimators to their steady-state values while leaving these values unchanged. In the super-Poissonian regime, however, the MSE of the estimators decreases significantly as the number of neighbors increases. Surprisingly, in this case, an estimator's clustering coefficient does not improve its own MSE, although it reduces the total MSE of the population.
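The following toy simulation is not the paper's CTMC filtering framework, only a caricature of the setup it formalizes: cells track a switching signal from Poisson counts and optionally pool their estimates with ring neighbours, and the time-averaged MSE is compared across neighbourhood sizes. All dynamics and parameters are invented, so it illustrates the experiment rather than reproducing the paper's analytical conclusions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, dt = 2000, 0.05
n_cells = 50

# Two-state telegraph signal s(t) in {1, 3}, a toy stand-in for the
# external concentration, switching like a continuous-time Markov chain.
s = np.empty(T)
state = 1.0
for t in range(T):
    if rng.random() < 0.1 * dt:          # switching probability per step
        state = 4.0 - state              # toggle between 1 and 3
    s[t] = state

for k in [0, 2, 6]:                       # number of ring neighbours pooled
    # Each cell observes Poisson counts driven by s and keeps an
    # exponentially weighted rate estimate (a crude Poissonian estimator).
    est = np.ones(n_cells)
    mse = 0.0
    for t in range(T):
        counts = rng.poisson(s[t] * dt, n_cells)
        est += 0.05 * (counts / dt - est)
        if k > 0:                         # average with k ring neighbours
            pooled = est.copy()
            for d in range(1, k // 2 + 1):
                pooled += np.roll(est, d) + np.roll(est, -d)
            est = pooled / (k + 1)
        mse += ((est - s[t]) ** 2).mean()
    print(f"k={k}: time-averaged MSE {mse / T:.3f}")
```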


2020 ◽  
Author(s):  
Jon Saenz ◽  
Sheila Carreno-Madinabeitia ◽  
Ganix Esnaola ◽  
Santos J. González-Rojí ◽  
Gabriel Ibarra-Berastegi ◽  
...  

A new diagram is proposed for the verification of vector quantities generated by individual or multiple models against a set of observations. It has been designed with the idea of extending the Taylor diagram to two-dimensional vectors such as currents, wind velocity, or horizontal fluxes of water vapour, salinity, energy and other geophysical variables. The diagram is based on a principal component analysis of the two-dimensional structure of the mean squared error matrix between model and observations. This matrix is separated into two parts corresponding to the bias and to the relative rotation of the empirical orthogonal functions of the data. We test the performance of this new diagram by identifying the differences between a reference dataset and different model outputs, using wind velocity, current, vertically integrated moisture transport and wave energy flux time series as examples. An alternative setup is also proposed, with an application to the time-averaged spatial field of surface wind velocity in the Northern and Southern Hemispheres according to different reanalyses and realizations of an ensemble of CMIP5 models. The examples of the use of the Sailor diagram show that it is a tool which helps identify errors due to the bias or the orientation of the simulated vector time series or fields. An implementation of the algorithm in the form of an R package (sailoR) is publicly available from the CRAN repository; besides plotting the individual components of the error matrix, the package functions also allow the user to easily retrieve the individual components of the mean squared error.
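The ingredients of the diagram can be sketched directly in numpy (the sailoR package should be consulted for the exact definitions): compute the 2x2 mean squared error matrix between modelled and observed vectors, split it exactly into a bias part and a centred (anomaly) part, and measure the rotation between the leading EOFs of the two datasets. The synthetic series below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Synthetic 2-D vector series: observations and one model (invented).
obs = rng.multivariate_normal([2.0, 1.0], [[2.0, 0.8], [0.8, 1.0]], n)
mod = rng.multivariate_normal([2.4, 0.7], [[1.6, 0.4], [0.4, 1.2]], n)

# Mean squared error matrix E[(m - o)(m - o)^T], split exactly into a
# bias part and a centred (anomaly) part.
err = mod - obs
bias = err.mean(axis=0)
anom = err - bias
mse_matrix = err.T @ err / n
bias_part = np.outer(bias, bias)
anom_part = anom.T @ anom / n            # mse_matrix = bias_part + anom_part

# Relative rotation of the leading EOFs (principal axes) of each dataset.
def leading_eof(x):
    xc = x - x.mean(axis=0)
    w, v = np.linalg.eigh(xc.T @ xc / len(x))
    return v[:, np.argmax(w)]

e_obs, e_mod = leading_eof(obs), leading_eof(mod)
angle = np.degrees(np.arccos(abs(e_obs @ e_mod)))
print("bias part:\n", bias_part)
print("anomaly part:\n", anom_part)
print(f"EOF rotation: {angle:.1f} degrees")
```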


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Beth Ann Griffin ◽  
Megan S. Schuler ◽  
Elizabeth A. Stuart ◽  
Stephen Patrick ◽  
Elizabeth McNeer ◽  
...  

Abstract
Background: Reliable evaluations of state-level policies are essential for identifying effective policies and informing policymakers’ decisions. State-level policy evaluations commonly use a difference-in-differences (DID) study design; yet within this framework, statistical model specification varies notably across studies. More guidance is needed about which statistical models perform best when estimating how state-level policies affect outcomes.
Methods: Motivated by applied state-level opioid policy evaluations, we implemented an extensive simulation study to compare the statistical performance of multiple variations of the two-way fixed effects models traditionally used for DID under a range of simulation conditions. We also explored the performance of autoregressive (AR) and GEE models. We simulated policy effects on annual state-level opioid mortality rates and assessed statistical performance using various metrics, including directional bias, magnitude bias, and root mean squared error. We also reported Type I error rates and the rate of correctly rejecting the null hypothesis (i.e., power), given the prevalence of frequentist null hypothesis significance testing in the applied literature.
Results: Most linear models resulted in minimal bias. However, non-linear models and population-weighted versions of the classic linear two-way fixed effects and linear GEE models yielded considerable bias (60 to 160%). Further, root mean squared error was minimized by linear AR models when we examined crude mortality rates and by negative binomial models when we examined raw death counts. In the context of frequentist hypothesis testing, many models yielded high Type I error rates and very low rates of correctly rejecting the null hypothesis (< 10%), raising concerns of spurious conclusions about policy effectiveness in the opioid literature. When considering performance across models, the linear AR models were optimal in terms of directional bias, root mean squared error, Type I error, and correct rejection rates.
Conclusions: The findings highlight notable limitations of commonly used statistical models for DID designs, which are widely used in opioid policy studies and in state policy evaluations more broadly. In contrast, the optimal model we identified, the AR model, is rarely used in state policy evaluation. We urge applied researchers to move beyond the classic DID paradigm and to adopt AR models.
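A stylized sketch of the model comparison (not the authors' simulation design) is given below: a state-year panel with an autoregressive outcome and a known policy effect is fitted once with the classic two-way fixed effects DID specification and once with an AR specification using a lagged outcome. All parameter values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_states, n_years, treat_year = 40, 15, 10
treated = rng.random(n_states) < 0.5     # half the states adopt the policy

rows = []
for s in range(n_states):
    base = rng.normal(10, 2)             # state-specific mortality level
    y_prev = base
    for t in range(n_years):
        policy = int(treated[s] and t >= treat_year)
        # AR(1)-style outcome with a true policy effect of -1.0 (made up).
        y = 0.3 * y_prev + 0.7 * base + 0.4 * t - 1.0 * policy + rng.normal(0, 1)
        rows.append(dict(state=s, year=t, policy=policy, y=y, y_lag=y_prev))
        y_prev = y
df = pd.DataFrame(rows)

# Classic two-way fixed effects DID: state and year dummies.
twfe = smf.ols("y ~ policy + C(state) + C(year)", data=df).fit()
# AR specification: lagged outcome in place of state fixed effects
# (first year dropped because it has no true lag).
ar = smf.ols("y ~ policy + y_lag + C(year)", data=df[df.year > 0]).fit()
print(f"TWFE policy effect: {twfe.params['policy']:.2f}")
print(f"AR policy effect:   {ar.params['policy']:.2f}")
```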


2003 ◽  
Vol 33 (2) ◽  
pp. 331-346 ◽  
Author(s):  
Mario V. Wüthrich

We consider the problem of claims reserving and estimating run-off triangles. We generalize the gamma cell distributions model which leads to Tweedie's compound Poisson model. Choosing a suitable parametrization, we estimate the parameters of our model within the framework of generalized linear models (see Jørgensen-de Souza [2] and Smyth-Jørgensen [8]). We show that these methods lead to reasonable estimates of the outstanding loss liabilities.
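In modern software the Tweedie fit reduces to a few lines. The sketch below simulates compound Poisson-gamma cell amounts and fits a Tweedie GLM via statsmodels; the covariates and parameter values are illustrative only, and the variance power is fixed here for simplicity rather than estimated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200

# Toy incremental-cell data with origin/development covariates (invented).
origin = rng.integers(0, 5, n)
dev = rng.integers(0, 5, n)
mu = np.exp(6.0 - 0.6 * dev + 0.1 * origin)

# Compound Poisson draws: Poisson number of payments, gamma severities.
counts = rng.poisson(mu / 100.0)
y = np.array([rng.gamma(2.0, 50.0, c).sum() for c in counts])

X = sm.add_constant(np.column_stack([origin, dev]))
# A Tweedie family with 1 < p < 2 corresponds to compound Poisson-gamma.
fit = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5)).fit()
print(fit.params)
```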


Author(s):  
Wael Abdelrahman ◽  
Saeid Nahavandi ◽  
Douglas Creighton ◽  
Matthias Harders

This study represents a preliminary step towards data-driven computation of contact dynamics during manipulation of deformable objects at two points of contact. A modeling approach is proposed that characterizes the individual interaction at each point and the mutual effects of the two interactions on each other via a set of parameters. Both global and local coordinate systems are tested for encoding the contact mechanics. Artificial neural networks are trained on simulated data to capture the object behavior. A comparison of test data with the output of the trained system reveals a mean squared error percentage between 1% and 3% for simple interactions.
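As a hedged illustration of the training setup (the abstract does not specify the simulation data or the network architecture), the sketch below fits a small multilayer perceptron to an invented two-contact mapping with a mutual-coupling term and reports the test MSE as a percentage of the output variance.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Hypothetical stand-in data: contact parameters at two points (e.g.
# positions/penetrations) -> reaction forces; the mapping is invented.
X = rng.uniform(-1, 1, (5000, 6))
coupling = 0.3 * X[:, :3] * X[:, 3:]      # mutual effect of the two contacts
y = np.sin(X[:, :3]) + coupling + 0.01 * rng.standard_normal((5000, 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)

# MSE as a percentage of the output variance, loosely mirroring the
# 1-3% figure reported in the abstract.
mse = ((net.predict(X_te) - y_te) ** 2).mean()
print(f"MSE percentage: {100 * mse / y_te.var():.2f}%")
```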

