A credibility method for profitable cross-selling of insurance products

2011 ◽  
Vol 6 (1) ◽  
pp. 65-75 ◽  
Author(s):  
Fredrik Thuring

Abstract
A method is presented for identifying an expected profitable set of customers to whom an additional insurance product can be offered, by estimating a customer-specific latent risk profile for the additional product from the data available on that customer's existing insurance product. For this purpose, a multivariate credibility estimator is considered, and we investigate the effect of assuming that one of two insurance products is inactive (without available claims information) when estimating the latent risk profile. Available customer-specific claims information from the active existing product is used instead to estimate the risk profile and thereby assess whether or not to include a specific customer in the expected profitable set. The method is tested on a large real data set from a Danish insurance company and is shown to produce sets of customers with up to 36% fewer claims than a priori expected. It is therefore argued that the proposed method merits consideration by insurance companies cross-selling insurance products to existing customers.
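
To make the weighting idea concrete, the sketch below shows a univariate Bühlmann-style credibility estimate in Python. The paper uses a multivariate estimator across products; this toy version, with function names and numbers that are our own illustrative assumptions, only shows how individual experience is blended with the portfolio prior.

```python
# Minimal univariate Buhlmann-style credibility sketch (the paper uses a
# multivariate estimator; this toy only illustrates the weighting idea).
def credibility_risk_profile(claim_counts, prior_mean, var_within, var_between):
    """Blend a customer's own claims experience with the portfolio prior."""
    n = len(claim_counts)
    sample_mean = sum(claim_counts) / n
    k = var_within / var_between          # Buhlmann credibility constant
    z = n / (n + k)                       # credibility weight in [0, 1)
    return z * sample_mean + (1 - z) * prior_mean

# A customer with three observed years on the existing product:
profile = credibility_risk_profile([0, 1, 0], prior_mean=0.4,
                                   var_within=0.5, var_between=0.1)
# Offer the additional product only if the estimated profile is
# sufficiently below the a priori expectation:
cross_sell = profile < 0.4
```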

Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
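
The back-projection step can be illustrated with a toy tomographic update: a traveltime residual measured after migration is spread along the cells a ray traverses. The grid, ray, and numbers below are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy back-projection of a traveltime residual along one raypath.
slowness = np.full((10, 10), 0.5)             # slowness model (s/km per cell)
ray_cells = [(0, 0), (1, 1), (2, 2), (3, 3)]  # cells the ray traverses
seg_len = np.array([1.0, 1.4, 1.4, 1.0])      # path length in each cell (km)

residual = 0.02                               # residual from wavefront analysis (s)
# Since traveltime is sum(slowness * length), a uniform slowness update of
# residual / total_length along the ray absorbs the whole residual:
ds = residual / seg_len.sum()
for i, j in ray_cells:
    slowness[i, j] += ds
```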


2021 ◽  
Vol 14 (1) ◽  
pp. 71-88
Author(s):  
Adane Nega Tarekegn ◽  
Tamir Anteneh Alemu ◽  
Alemu Kumlachew Tegegne

Tuberculosis (TB) remains a global health concern. It spreads through the air and mainly attacks people with weakened immune systems, and it is among the most common health problems in low- and middle-income countries. Genetic programming (GP) is a machine learning approach for discovering useful relationships among the variables in complex clinical data, and it is particularly appropriate when the form of the solution model is unknown a priori. The main objective of this study is to develop a model that detects positive cases among suspected TB patients using GP, applied to a real data set of TB suspects and hospitalized patients. First, the data set is pre-processed and target variables are identified using cluster analysis; this data-driven analysis identifies two distinct clusters of patients, representing TB positive and TB negative cases. GP is then trained on the training data to construct a prediction model and tested on a separate, held-out data set. Over 30 runs, the median performance of GP on test data was good (sensitivity = 0.78, specificity = 0.95, accuracy = 0.89, AUC = 0.91), and GP outperformed the other machine learning models considered. The study demonstrates that the GP model could support clinicians in screening TB patients.
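
A hedged sketch of this kind of GP-based screening pipeline is shown below, using gplearn's SymbolicClassifier on placeholder data; the study's actual features, preprocessing, and settings are not reproduced here.

```python
# Sketch of GP-based screening with gplearn (assumes gplearn is installed).
import numpy as np
from gplearn.genetic import SymbolicClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # placeholder clinical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder TB-positive label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gp = SymbolicClassifier(population_size=500, generations=20, random_state=0)
gp.fit(X_tr, y_tr)

# The paper's reported metrics, computed on the held-out split:
tn, fp, fn, tp = confusion_matrix(y_te, gp.predict(X_te)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
auc = roc_auc_score(y_te, gp.predict_proba(X_te)[:, 1])
```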


2019 ◽  
Vol 486 (2) ◽  
pp. 2116-2128 ◽  
Author(s):  
Yvette C Perrott ◽  
Kamran Javid ◽  
Pedro Carvalho ◽  
Patrick J Elwood ◽  
Michael P Hobson ◽  
...  

ABSTRACT
We develop a Bayesian method of analysing Sunyaev–Zel’dovich measurements of galaxy clusters obtained from the Arcminute Microkelvin Imager (AMI) radio interferometer system and from the Planck satellite, using a joint likelihood function for the data from both instruments. Our method is applicable to any combination of Planck data with interferometric data from one or more arrays. We apply the analysis to simulated clusters and find that when the cluster pressure profile is known a priori, the joint data set provides precise and accurate constraints on the cluster parameters, removing the need for external information to reduce the parameter degeneracy. When the pressure profile deviates from that assumed for the fit, the constraints become biased. Allowing the pressure profile shape parameters to vary in the analysis allows an unbiased recovery of the integrated cluster signal and produces constraints on some shape parameters, depending on the angular size of the cluster. When applied to real data from Planck-detected cluster PSZ2 G063.80+11.42, our method resolves the discrepancy between the AMI and Planck Y-estimates and usefully constrains the gas pressure profile shape parameters at intermediate and large radii.
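
The core construction, a joint likelihood that multiplies the two instruments' likelihoods, can be sketched with toy Gaussian stand-ins for the real AMI and Planck forward models; all data values and noise levels below are illustrative.

```python
import numpy as np

# Toy joint likelihood: both instruments constrain the same cluster
# parameter Y through Gaussian stand-in likelihoods.
def log_joint_likelihood(Y, d_ami, d_planck, s_ami, s_planck):
    ll_ami = -0.5 * np.sum((d_ami - Y) ** 2 / s_ami ** 2)
    ll_planck = -0.5 * np.sum((d_planck - Y) ** 2 / s_planck ** 2)
    return ll_ami + ll_planck   # independent data sets: likelihoods multiply

# Grid evaluation of the unnormalised posterior under a flat prior:
Y_grid = np.linspace(0.0, 2.0, 201)
log_post = [log_joint_likelihood(Y, d_ami=np.array([0.9, 1.1]),
                                 d_planck=np.array([1.3]),
                                 s_ami=0.2, s_planck=0.4) for Y in Y_grid]
Y_map = Y_grid[int(np.argmax(log_post))]   # joint maximum a posteriori value
```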


Geophysics ◽  
1991 ◽  
Vol 56 (6) ◽  
pp. 785-794 ◽  
Author(s):  
S. Spitz

Interpolation of seismic traces is an effective means of improving migration when the data set exhibits spatial aliasing. A major difficulty of standard interpolation methods is that they depend on the degree of reliability with which the various geological events can be separated. In this respect, a multichannel interpolation method is described which requires neither a priori knowledge of the directions of lateral coherence of the events, nor estimation of these directions. The method is based on the fact that linear events present in a section made of equally spaced traces may be interpolated exactly, regardless of the original spatial interval, without any attempt to determine their true dips. The predictability of linear events in the f-x domain allows the missing traces to be expressed as the output of a linear system, the input of which consists of the recorded traces. The interpolation operator is obtained by solving a set of linear equations whose coefficients depend only on the spectrum of the spatial prediction filter defined by the recorded traces. Synthetic examples show that this method is insensitive to random noise and that it correctly handles curvatures and lateral amplitude variations. Assessment of the method with a real data set shows that the interpolation yields an improved migrated section.
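
The f-x predictability the method rests on can be demonstrated in a few lines: for a single linear event, a one-tap prediction filter estimated from the recorded traces predicts a trace at half the spacing exactly, with no knowledge of the event's dip. This toy (with assumed frequency, slope, and spacing) shows only the underlying principle, not the full multichannel algorithm.

```python
import numpy as np

# Toy f-x predictability demo: one linear event at a single frequency.
f, p, dx = 20.0, 0.0005, 25.0          # Hz, s/m, m (f*p*dx < 0.5: no aliasing)
x = np.arange(8) * dx
s = np.exp(-2j * np.pi * f * p * x)    # the event's spectrum across traces

# One-tap least-squares prediction coefficient from the recorded traces:
a = np.vdot(s[:-1], s[1:]) / np.vdot(s[:-1], s[:-1])

# A trace halfway between s[3] and s[4] is predicted by the half-step
# coefficient, without determining the event's true dip:
s_half = s[3] * a ** 0.5
true_val = np.exp(-2j * np.pi * f * p * (x[3] + dx / 2))
assert np.allclose(s_half, true_val)
```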


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S11-S18 ◽  
Author(s):  
Juefu Wang ◽  
Mauricio D. Sacchi

We propose a new scheme for high-resolution amplitude-variation-with-ray-parameter (AVP) imaging that uses nonquadratic regularization. We pose migration as an inverse problem and propose a cost function that uses a priori information about common-image gathers (CIGs). In particular, we introduce two regularization constraints: smoothness along the offset-ray-parameter axis and sparseness in depth. The two-step regularization yields high-resolution CIGs with robust estimates of AVP. We use an iterative reweighted least-squares conjugate gradient algorithm to minimize the cost function of the problem. We test the algorithm with synthetic data (a wedge model and the Marmousi data set) and a real data set (Erskine area, Alberta). Tests show our method helps to enhance the vertical resolution of CIGs and improves amplitude accuracy along the ray-parameter direction.
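
A minimal iteratively reweighted least-squares sketch for a sparsity-promoting inverse problem is given below; the operator, data, and direct solver are toy stand-ins, whereas the paper minimizes its cost function with conjugate gradients on a migration operator.

```python
import numpy as np

# Minimal IRLS sketch for min ||A m - d||^2 + lam * sum|m_i|.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40))                 # toy linear operator
m_true = np.zeros(40)
m_true[[5, 17, 30]] = [2.0, -1.5, 1.0]        # sparse-in-depth model
d = A @ m_true + 0.05 * rng.normal(size=60)   # noisy data

lam, eps = 0.5, 1e-6
m = np.linalg.lstsq(A, d, rcond=None)[0]      # quadratic warm start
for _ in range(10):
    W = np.diag(1.0 / (np.abs(m) + eps))      # reweighting approximates |m|
    m = np.linalg.solve(A.T @ A + lam * W, A.T @ d)
```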


2010 ◽  
Vol 40 (1) ◽  
pp. 151-177 ◽  
Author(s):  
Katrien Antonio ◽  
Edward W. Frees ◽  
Emiliano A. Valdez

Abstract
It is common for professional associations and regulators to combine the claims experience of several insurers into a database known as an “intercompany” experience data set. In this paper, we analyze data on claim counts provided by the General Insurance Association of Singapore, an organization consisting of most of the general insurers in Singapore. Our data come from the financial records of automobile insurance policies followed over a period of nine years. Because the source contains the pooled experience of several insurers, we are able to study company effects on claim behavior, an area that has not been systematically addressed in either the insurance or the actuarial literature. We analyze this intercompany experience using multilevel models. The data are multilevel in nature: each vehicle is observed over a period of years and is insured by an insurance company under a “fleet” policy, an umbrella-type policy issued to customers whose insurance covers more than a single vehicle. We investigate vehicle, fleet, and company effects using various count distribution models (Poisson, negative binomial, zero-inflated, and hurdle Poisson). The performance of these models is compared, and we demonstrate how our model can be used to update a priori premiums to a posteriori premiums, a common practice in experience-rated premium calculations. Through this formal model structure, we provide insights into the effects that company-specific practice has on claims experience, even after controlling for vehicle and fleet effects.
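
The a priori to a posteriori premium update can be illustrated with a standard gamma-Poisson (negative binomial) credibility calculation; the numbers below are illustrative, not the Singapore data.

```python
# Gamma-Poisson premium update sketch.
def posterior_claim_rate(prior_rate, alpha, n_claims, exposure):
    """Gamma(alpha, alpha/prior_rate) prior on the Poisson claim rate;
    posterior mean after n_claims observed in `exposure` vehicle-years."""
    return (alpha + n_claims) / (alpha / prior_rate + exposure)

prior = 0.10                       # a priori claims per vehicle-year
post = posterior_claim_rate(prior, alpha=2.0, n_claims=1, exposure=4.0)
premium_factor = post / prior      # a posteriori experience-rating factor
```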


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, the Lorenz, Bonferroni, and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by maximum likelihood, and the behaviour of these estimates is examined in a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
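
As a rough illustration of fitting such a model by maximum likelihood, the sketch below assumes the exponentiated half-logistic-G construction with a Lomax baseline, CDF F(x) = ((1-u)/(1+u))^alpha with u = (1 + lam*x)^(-beta); this parameterisation is our assumption and should be checked against the paper before use.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed form (ours, not verified against the paper): the log-density
# below follows from F(x) = ((1-u)/(1+u))**alpha, u = (1+lam*x)**(-beta).
def neg_log_lik(params, x):
    alpha, beta, lam = params
    if min(params) <= 0:
        return np.inf
    u = (1.0 + lam * x) ** (-beta)
    logf = (np.log(2 * alpha * beta * lam)
            - (beta + 1) * np.log1p(lam * x)
            + (alpha - 1) * np.log1p(-u)
            - (alpha + 1) * np.log1p(u))
    return -np.sum(logf)

x = np.array([0.12, 0.25, 0.45, 0.80, 0.95, 1.30, 2.10, 3.70])  # placeholder
fit = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(x,), method="Nelder-Mead")
alpha_hat, beta_hat, lam_hat = fit.x
```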


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution has been introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulation on the basis of mean squared error (MSE). The simulations show that the UMVU estimator performs best, that the ML and UMVU estimators are almost equivalent when the sample size is large, and that both are more efficient than the LS, WLS, and PC estimators. Finally, the results are illustrated on a real data set.
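
A simulation of this kind of MSE comparison is sketched below for the ML estimator only (the UMVU, LS, WLS, and PC estimators are omitted); the density and inverse-CDF sampler follow the standard generalized inverted exponential form F(x) = 1 - (1 - e^(-lam/x))^a, and the sample sizes and true values are our own choices.

```python
import numpy as np
from scipy.optimize import minimize

# MSE-by-simulation sketch for the ML estimate of the GIE pdf at a point.
def pdf(x, a, lam):
    return a * lam / x**2 * np.exp(-lam / x) * (1 - np.exp(-lam / x)) ** (a - 1)

def sample(n, a, lam, rng):
    u = rng.uniform(size=n)
    return -lam / np.log(1 - u ** (1 / a))     # inverse-CDF sampling

def ml_fit(x):
    nll = lambda p: -np.sum(np.log(pdf(x, *p))) if min(p) > 0 else np.inf
    return minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead").x

rng = np.random.default_rng(0)
a0, lam0, x0 = 2.0, 1.5, 1.0                   # true parameters, test point
sq_errs = []
for _ in range(200):                           # Monte Carlo replications
    a_hat, lam_hat = ml_fit(sample(50, a0, lam0, rng))
    sq_errs.append((pdf(x0, a_hat, lam_hat) - pdf(x0, a0, lam0)) ** 2)
mse_ml = np.mean(sq_errs)
```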


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of histone modifications (HMs), contributing to the histone code hypothesis, lead to various biological functions. HM combinations have been utilized by various studies to divide the genome into different regions, classified as chromatin states. Mostly Hidden Markov Model (HMM) based techniques have been utilized for this purpose, applied to data from Next Generation Sequencing (NGS) platforms. Chromatin states based on histone modification combinatorics are annotated by mapping them to functional regions of the genome, but the number of states predicted by HMM tools has so far been justified only biologically. Objective: The present study aimed at providing a computational scheme to identify the number of underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on a hierarchical clustering and visualization strategy. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results correlate well with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal number of states for a particular data set but also justifies the results biologically, thereby connecting the computational and biological aspects.
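
A toy version of the hierarchical-clustering step is sketched below with synthetic mark calls; it cuts a Ward tree at several candidate state numbers, which is the flavour of analysis HCVS automates (the real pipeline and visualization strategy are not reproduced).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are genomic bins, columns are nine histone-mark signals
# (synthetic placeholders for the real presence calls).
rng = np.random.default_rng(0)
marks = (rng.random((300, 9)) > 0.7).astype(float)

Z = linkage(marks, method="ward")        # dendrogram over genomic bins
# Cut the tree at candidate state numbers and inspect the partitions:
for k in (5, 10, 15):
    states = fcluster(Z, t=k, criterion="maxclust")
    print(k, np.bincount(states)[1:])    # bins assigned to each state
```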


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

Traditional constant false-alarm rate detection is based on an assumed statistical model for the echo, and against sea clutter and other interference it suffers from low target recognition accuracy and a high false-alarm rate. Computer vision techniques have therefore been widely investigated as a way to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for defense radar, with its low resolution, detection performance remains unsatisfactory. To this end, we propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high target detection accuracy; (2) the network is modified to suit the sparsity and small target size that characterize the data set; and (3) soft non-maximum suppression is exploited to eliminate possible overlapping detection boxes. Detailed comparative experiments on a real coastal defense radar data set show that the mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
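
Step (3) can be made concrete with a minimal Gaussian soft-NMS routine (the rescoring rule of Bodla et al., 2017); the boxes and scores below are placeholders, and the detector itself is omitted.

```python
import numpy as np

# Minimal Gaussian soft-NMS; boxes are [x1, y1, x2, y2].
def soft_nms(boxes, scores, sigma=0.5, score_thresh=1e-3):
    boxes, scores = boxes.astype(float), scores.astype(float).copy()
    keep = []
    while scores.max() > score_thresh:
        i = int(np.argmax(scores))
        keep.append((boxes[i], float(scores[i])))
        # IoU of the winning box with every box still in the pool:
        x1 = np.maximum(boxes[:, 0], boxes[i, 0])
        y1 = np.maximum(boxes[:, 1], boxes[i, 1])
        x2 = np.minimum(boxes[:, 2], boxes[i, 2])
        y2 = np.minimum(boxes[:, 3], boxes[i, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (area + area[i] - inter)
        scores = scores * np.exp(-iou ** 2 / sigma)  # decay overlapped scores
        scores[i] = 0.0                              # winner leaves the pool
    return keep

dets = soft_nms(np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]),
                np.array([0.9, 0.8, 0.7]))
```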

