Neural Networks for the Joint Development of Individual Payments and Claim Incurred

Risks ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 33
Author(s):  
Łukasz Delong ◽  
Mario V. Wüthrich

The goal of this paper is to develop regression models and postulate distributions which can be used in practice to describe the joint development process of individual claim payments and claim incurred. We apply neural networks to estimate our regression models. As regressors we use the whole claim history of incremental payments and claim incurred, as well as any relevant feature information which is available to describe individual claims and their development characteristics. Our models are calibrated and tested on a real data set, and the results are benchmarked with the Chain-Ladder method. Our analysis focuses on the development of the so-called Reported But Not Settled (RBNS) claims. We show the benefits of using deep neural networks and the whole claim history in our prediction problem.
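As a rough illustration of the kind of model described above (not the authors' architecture), the sketch below fits a small feed-forward network that maps a claim's observed history of incremental payments and claim incurred, plus static claim features, to the next incremental payment; all names and dimensions are assumptions.

```python
# Minimal sketch, assuming tensorflow/Keras; inputs and dimensions are invented
# for illustration and do not reproduce the paper's architecture.
import numpy as np
import tensorflow as tf

n_claims, n_dev_periods, n_features = 10_000, 12, 8    # assumed sizes

# x: [payments_1..t, incurred_1..t, claim features], y: next incremental payment
x = np.random.rand(n_claims, 2 * n_dev_periods + n_features).astype("float32")
y = np.random.rand(n_claims, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="softplus"),   # keeps predicted payments non-negative
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=10, batch_size=256, validation_split=0.2, verbose=0)
```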

2019 ◽  
pp. 232102221886979
Author(s):  
Radhika Pandey ◽  
Amey Sapre ◽  
Pramod Sinha

Identification of primary economic activity of firms is a prerequisite for compiling several macro aggregates. In this paper, we take a statistical approach to understand the extent of changes in primary economic activity of firms over time and across different industries. We use the history of economic activity of over 46,000 firms spread over 25 years from CMIE Prowess to identify the number of times firms change the nature of their business. Using the count of changes, we estimate Poisson and Negative Binomial regression models to gain predictability over changing economic activity across industry groups. We show that a Poisson model accurately characterizes the distribution of count of changes across industries and that firms with a long history are more likely to have changed their primary economic activity over the years. Findings show that classification can be a crucial problem in a large data set like the MCA21 and can even lead to distortions in value addition estimates at the industry level. JEL Classifications: D22, E00, E01
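A minimal sketch of the count-regression step, assuming a firm-level table with a count of activity changes, years of history and an industry code (column names and data are hypothetical, not those of the CMIE Prowess data set):

```python
# Hedged sketch using statsmodels; the toy data below are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

firms = pd.DataFrame({
    "n_changes":    [0, 1, 0, 2, 3, 0, 1, 4, 0, 2],
    "years_listed": [5, 12, 3, 20, 25, 8, 15, 24, 6, 18],
    "industry":     ["mfg", "svc", "mfg", "svc", "fin", "fin", "mfg", "svc", "fin", "mfg"],
})

poisson = smf.glm("n_changes ~ years_listed + C(industry)",
                  data=firms, family=sm.families.Poisson()).fit()
negbin = smf.glm("n_changes ~ years_listed + C(industry)",
                 data=firms, family=sm.families.NegativeBinomial()).fit()

print(poisson.summary())
print(poisson.aic, negbin.aic)   # overdispersed counts would favour the NB fit
```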


Geophysics ◽  
1998 ◽  
Vol 63 (6) ◽  
pp. 2035-2041 ◽  
Author(s):  
Zhengping Liu ◽  
Jiaqi Liu

We present a data‐driven method of joint inversion of well‐log and seismic data, based on the power of adaptive mapping of artificial neural networks (ANNs). We use the ANN technique to find and approximate the inversion operator guided by the data set consisting of well data and seismic recordings near the wells. Then we directly map seismic recordings to well parameters, trace by trace, to extrapolate the wide‐band profiles of these parameters using the approximation operator. Compared to traditional inversions, which are based on a few prior theoretical operators, our inversion is novel because (1) it inverts for multiple parameters and (2) it is nonlinear with a high degree of complexity. We first test our algorithm with synthetic data and analyze its sensitivity and robustness. We then invert real data to obtain two extrapolation profiles of sonic log (DT) and shale content (SH), the latter a unique parameter of the inversion and significant for the detailed evaluation of stratigraphic traps. The high‐frequency components of the two profiles are significantly richer than those of the original seismic section.
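Conceptually, the workflow amounts to learning an approximate inverse operator from trace/log pairs at the wells and then applying it trace by trace across the section; a minimal sketch (not the authors' network) might look as follows, with all data shapes assumed:

```python
# Illustrative only: a small multi-output network standing in for the ANN
# approximation of the inversion operator; the arrays are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_train, window = 500, 64                   # trace windows near the wells
X_wells = np.random.randn(n_train, window)  # seismic amplitudes near well locations
y_logs = np.random.randn(n_train, 2)        # columns: DT (sonic log), SH (shale content)

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
net.fit(X_wells, y_logs)                    # approximate the inversion operator

X_section = np.random.randn(2000, window)   # traces away from well control
dt_sh = net.predict(X_section)              # extrapolated DT and SH profiles, trace by trace
```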


Author(s):  
M Perzyk ◽  
R Biernacki ◽  
J Kozlowski

Determination of the most significant manufacturing process parameters using collected past data can be very helpful in solving important industrial problems, such as the detection of root causes of deteriorating product quality, the selection of the most efficient parameters to control the process, and the prediction of breakdowns of machines, equipment, etc. A methodology of determination of relative significances of process variables and possible interactions between them, based on interrogations of generalized regression models, is proposed and tested. The performance of several types of data mining tools, such as artificial neural networks, support vector machines, regression trees, classification trees, and a naïve Bayesian classifier, is compared. Also, some simple non-parametric statistical methods, based on an analysis of variance (ANOVA) and contingency tables, are evaluated for comparison purposes. The tests were performed using simulated data sets, with assumed hidden relationships, as well as on real data collected in the foundry industry. It was found that the significance and interaction factors obtained from regression models, and in particular from neural networks, perform satisfactorily, while the other methods appeared to be less accurate and/or less reliable.
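One generic way to interrogate a fitted model for relative variable significance is permutation importance; the sketch below applies it to a neural network regressor purely as an illustration (the paper's own interrogation scheme and data differ):

```python
# Hedged sketch: rank process variables by how much shuffling each one degrades
# the fit of a trained model; data and variable names are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # five process parameters
y = 2 * X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=1000)  # hidden relationship

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"x{i}: relative significance {imp:.3f}")
```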


2020 ◽  
Vol 4 (4) ◽  
pp. 227
Author(s):  
Kim-Hung Pho ◽  
Buu-Chau Truong

This paper compares the performance of the gradient and Newton-Raphson (N-R) methods for estimating parameters in some zero-inflated (ZI) regression models such as the zero-inflated Poisson (ZIP) model, zero-inflated Bell (ZIBell) model, zero-inflated binomial (ZIB) model and zero-inflated negative binomial (ZINB) model. In the present work, we first briefly present the approach of the gradient and N-R methods. We then introduce the origin, formulas and applications of the ZI models. Finally, we compare the performance of the two investigated approaches for these models through simulation studies with numerous sample sizes and several missing rates. A real data set is also investigated. Specifically, we compare the results and the execution time of the R code for the two methods. Moreover, we provide some important notes on these two approaches and some scalable research directions for future work.
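To make the two estimation approaches concrete, the toy sketch below contrasts gradient ascent and Newton-Raphson on a plain Poisson log-likelihood; the zero-inflated likelihoods studied in the paper follow the same pattern with additional zero-inflation parameters, and nothing here reproduces the paper's code or data.

```python
# Illustrative comparison only: many cheap gradient steps vs. a few expensive
# Newton-Raphson steps for maximising a Poisson regression log-likelihood.
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
beta_true = np.array([0.5, -0.8])
y = rng.poisson(np.exp(X @ beta_true))

def score(beta):                          # gradient of the log-likelihood
    return X.T @ (y - np.exp(X @ beta))

def hessian(beta):
    mu = np.exp(X @ beta)
    return -(X * mu[:, None]).T @ X

b_grad = np.zeros(2)                      # gradient ascent: small steps, many iterations
for _ in range(5000):
    b_grad += 1e-3 * score(b_grad) / len(y)

b_nr = np.zeros(2)                        # Newton-Raphson: few iterations
for _ in range(10):
    b_nr -= np.linalg.solve(hessian(b_nr), score(b_nr))

print(b_grad, b_nr)                       # both should approach beta_true
```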


2020 ◽  
Vol 12 (12) ◽  
pp. 14
Author(s):  
Afaf Antar Zohry ◽  
Mostafa Abdelghany Ahmed

The chain ladder method is the most widely used method of estimating claims reserves due to its simplicity and ease of application. It is therefore very important to know the accuracy of the resulting estimates. Murphy presented a recursive model to estimate the standard error of claims reserves estimates, in line with the Solvency II requirements, a new risk-based regulatory framework that requires the error and uncertainty of claims reserving estimates to be quantified. In Murphy's model, the mean square error (MSE) is decomposed into its components: variance and bias. In this paper, Murphy's recursive model was used to estimate the prediction error in the claims reserves estimates of General Accident & Miscellaneous Insurance in one of the Egyptian insurance companies.
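For context, the sketch below shows the basic chain-ladder projection that Murphy's recursive error model builds on: estimate age-to-age development factors from a cumulative run-off triangle and complete the triangle. The triangle values are invented, and the MSE recursion itself is not reproduced here.

```python
# Minimal chain-ladder sketch with a made-up cumulative triangle (NaN = future).
import numpy as np

tri = np.array([
    [100.0, 180.0, 210.0, 220.0],
    [110.0, 200.0, 235.0, np.nan],
    [120.0, 215.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
latest = np.array([row[~np.isnan(row)][-1] for row in tri])   # latest diagonal

n = tri.shape[1]
factors = []
for j in range(n - 1):                      # volume-weighted development factors
    rows = ~np.isnan(tri[:, j + 1])
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

for i in range(tri.shape[0]):               # complete the lower-right triangle
    for j in range(n - 1):
        if np.isnan(tri[i, j + 1]):
            tri[i, j + 1] = tri[i, j] * factors[j]

reserves = tri[:, -1] - latest              # estimated ultimates minus paid to date
print(factors, reserves)
```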


2019 ◽  
Vol 488 (4) ◽  
pp. 5232-5250 ◽  
Author(s):  
Alexander Chaushev ◽  
Liam Raynard ◽  
Michael R Goad ◽  
Philipp Eigmüller ◽  
David J Armstrong ◽  
...  

Vetting of exoplanet candidates in transit surveys is a manual process, which suffers from a large number of false positives and a lack of consistency. Previous work has shown that convolutional neural networks (CNN) provide an efficient solution to these problems. Here, we apply a CNN to classify planet candidates from the Next Generation Transit Survey (NGTS). For training data sets we compare both real data with injected planetary transits and fully simulated data, as well as how their different compositions affect network performance. We show that fewer hand-labelled light curves can be utilized, while still achieving competitive results. With our best model, we achieve an area under the curve (AUC) score of (95.6 ± 0.2) per cent and an accuracy of (88.5 ± 0.3) per cent on our unseen test data, as well as (76.5 ± 0.4) per cent and (74.6 ± 1.1) per cent in comparison to our existing manual classifications. The neural network recovers 13 out of 14 confirmed planets observed by NGTS, with high probability. We use simulated data to show that the overall network performance is resilient to mislabelling of the training data set, a problem that might arise due to unidentified, low signal-to-noise transits. Using a CNN, the time required for vetting can be reduced by half, while still recovering the vast majority of manually flagged candidates. In addition, we identify many new candidates with high probabilities which were not flagged by human vetters.
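A schematic 1-D CNN for light-curve classification, in the spirit of the vetting network described above but not the NGTS pipeline itself (layer sizes and input length are assumptions):

```python
# Hedged sketch using PyTorch; the architecture is illustrative only.
import torch
import torch.nn as nn

class TransitCNN(nn.Module):
    def __init__(self, n_points=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_points // 4), 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),          # probability of a genuine transit
        )

    def forward(self, x):                            # x: (batch, 1, n_points) folded flux
        return self.classifier(self.features(x))

model = TransitCNN()
flux = torch.randn(8, 1, 512)                        # eight example light curves
print(model(flux).shape)                             # -> torch.Size([8, 1])
```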


2016 ◽  
Vol 5 (2) ◽  
pp. 51 ◽  
Author(s):  
Alexander K White ◽  
Samir K Safi

We compare three forecasting methods: Artificial Neural Networks (ANNs), Autoregressive Integrated Moving Average (ARIMA) and regression models. Using computer simulations, the major finding reveals that in the presence of autocorrelated errors ANNs perform favorably compared to ARIMA and regression for nonlinear models. The model accuracy for the ANN is evaluated by comparing the simulated forecast results with real data for unemployment in Palestine, which were found to be in excellent agreement.
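A rough sketch of this kind of simulation comparison: generate a nonlinear series with AR(1) errors, then compare one-step forecasts from an ARIMA model and a small lag-fed neural network (all settings are illustrative, not those used in the paper):

```python
# Hedged sketch; uses statsmodels and scikit-learn with invented settings.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lags = 300, 5
e = np.zeros(n)
for t in range(1, n):                          # AR(1) errors
    e[t] = 0.7 * e[t - 1] + rng.normal(scale=0.5)
t_idx = np.arange(n)
y = np.sin(t_idx / 20) + 0.01 * t_idx + e      # nonlinear signal + autocorrelated noise

train_end = 250
test = y[train_end:]

arima_fc = ARIMA(y[:train_end], order=(1, 0, 1)).fit().forecast(steps=len(test))

Xtr = np.column_stack([y[i:train_end - lags + i] for i in range(lags)])
ytr = y[lags:train_end]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000).fit(Xtr, ytr)

Xte = np.column_stack([y[train_end - lags + i:n - lags + i] for i in range(lags)])
ann_fc = ann.predict(Xte)                      # one-step-ahead ANN forecasts

print(np.mean((arima_fc - test) ** 2), np.mean((ann_fc - test) ** 2))
```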


2019 ◽  
Vol 15 (3) ◽  
pp. 46-62
Author(s):  
Canan Eren Atay ◽  
Georgia Garani

A data warehouse is considered a key aspect of success for any decision support system. Research on temporal databases has produced important results in this field, and data warehouses, which store historical data, can clearly benefit from such studies. A slowly changing dimension is a dimension in a data warehouse whose attributes change infrequently over time. Although different solutions have been proposed, each has its own particular disadvantages. The authors propose the Object-Relational Temporal Data Warehouse (O-RTDW) model for slowly changing dimensions in this research work. Using this approach, it is possible to keep track of the whole history of an object in a data warehouse efficiently. The proposed model has been implemented on a real data set and tested successfully. Several limitations implied in other solutions, such as redundancy, surrogate keys, incomplete historical data, and creation of additional tables, are not present in our solution.
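A toy sketch of the general idea behind tracking a slowly changing dimension, keeping every historical attribute value with its validity interval instead of overwriting it; this illustrates the concept only and is not the O-RTDW schema:

```python
# Conceptual sketch: per-attribute history with validity intervals.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DimensionMember:
    key: int
    history: list = field(default_factory=list)    # (value, valid_from, valid_to)

    def update(self, value, valid_from):
        if self.history:                            # close the currently open interval
            old_value, old_from, _ = self.history[-1]
            self.history[-1] = (old_value, old_from, valid_from)
        self.history.append((value, valid_from, None))

    def value_at(self, when):
        for value, start, end in self.history:
            if start <= when and (end is None or when < end):
                return value
        return None

customer = DimensionMember(key=42)
customer.update("Region A", date(2015, 1, 1))
customer.update("Region B", date(2018, 6, 1))
print(customer.value_at(date(2016, 3, 1)))          # -> Region A
```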


2020 ◽  
Vol 16 ◽  
pp. 227-232
Author(s):  
Rafał Sieczka ◽  
Maciej Pańczyk

Acquiring data for neural network training is an expensive and labour-intensive task, especially when such data is difficult to access. This article proposes the use of the 3D Blender graphics software as a tool to automatically generate synthetic image data, using price labels as an example. Using the fastai library, price label classifiers were trained on a set of synthetic data and compared with classifiers trained on a real data set. The comparison of the results showed that it is possible to use Blender to generate synthetic data. This allows for a significant acceleration of the data acquisition process and, consequently, of the learning process of neural networks.
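A hedged sketch of the kind of Blender automation described: a script run inside Blender (the bpy API) that jitters the camera and renders a batch of synthetic label images. Object names, ranges and paths are assumptions, not the article's actual generation pipeline.

```python
# Run inside Blender; assumes a scene with a camera named "Camera" and a
# price-label object already set up.
import math
import random
import bpy

camera = bpy.data.objects["Camera"]
scene = bpy.context.scene

for i in range(100):
    # vary the viewpoint slightly so each rendered label differs
    camera.location.x = random.uniform(-0.2, 0.2)
    camera.location.z = 2.0 + random.uniform(-0.1, 0.1)
    camera.rotation_euler.y = math.radians(random.uniform(-5, 5))

    scene.render.filepath = f"/tmp/labels/label_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```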


Author(s):  
Sven D. Schrinner ◽  
Rebecca Serra Mari ◽  
Jana Ebler ◽  
Mikko Rautiainen ◽  
Lancelot Seillier ◽  
...  

Resolving genomes at the haplotype level is crucial for understanding the evolutionary history of polyploid species and for designing advanced breeding strategies. As a highly complex computational problem, polyploid phasing still presents considerable challenges, especially in regions of collapsing haplotypes. We present WhatsHap polyphase, a novel two-stage approach that addresses these challenges by (i) clustering reads using a position-dependent scoring function and (ii) threading the haplotypes through the clusters by dynamic programming. We demonstrate on a simulated data set that this results in accurate haplotypes with switch error rates that are around three times lower than those obtainable by the current state of the art, and even around seven times lower in regions of collapsing haplotypes. Using a real data set comprising long- and short-read tetraploid potato sequencing data, we show that WhatsHap polyphase is able to phase the majority of the potato genes after error correction, which enables the assembly of local genomic regions of interest at the haplotype level. Our algorithm is implemented as part of the widely used open source tool WhatsHap and is ready to be included in production settings.
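As a toy illustration of the first stage only, the sketch below greedily clusters reads by allele agreement; WhatsHap polyphase's actual position-dependent scoring and the dynamic-programming threading step are substantially more involved and are not reproduced here.

```python
# Conceptual sketch: reads are {variant position: allele} dicts; data are invented.

def agreement(read_a, read_b):
    """Net agreement over the variant positions two reads share."""
    shared = set(read_a) & set(read_b)
    return sum(1 if read_a[p] == read_b[p] else -1 for p in shared)

def greedy_cluster(reads, threshold=1):
    clusters = []
    for read in reads:
        best, best_score = None, threshold - 1
        for cluster in clusters:
            score = sum(agreement(read, member) for member in cluster)
            if score > best_score:
                best, best_score = cluster, score
        if best is None:
            clusters.append([read])
        else:
            best.append(read)
    return clusters

reads = [
    {1: 0, 2: 0, 3: 1},
    {2: 0, 3: 1, 4: 1},
    {1: 1, 2: 1, 3: 0},
    {3: 0, 4: 0, 5: 1},
]
print(len(greedy_cluster(reads)))   # reads grouped into haplotype-like clusters -> 2
```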

