The impact of uncertain precipitation data on insurance loss estimates using a flood catastrophe model

2014 · Vol 18 (6) · pp. 2305-2324
Author(s): C. C. Sampson, T. J. Fewtrell, F. O'Loughlin, F. Pappenberger, P. B. Bates, ...

Abstract. Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
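To make the four-module chain concrete, here is a minimal sketch assuming toy placeholder relationships throughout (synthetic Gumbel rainfall, a threshold depth rule, a linear depth-damage curve, and illustrative exposure and policy terms). It is not the authors' Dublin model, only an illustration of how per-event losses and an exceedance curve fall out of the chained modules.

```python
# Minimal sketch of a four-module catastrophe model chain with toy relationships.
import numpy as np

rng = np.random.default_rng(42)

def stochastic_rainfall(n_events):
    """Stochastic event module: sample synthetic storm depths (mm).
    A heavy-tailed placeholder stands in for a fitted event generator."""
    return rng.gumbel(loc=40.0, scale=15.0, size=n_events)

def flood_depth(rain_mm):
    """Hazard module: toy threshold-excess relation standing in for the
    hydrological/hydraulic models; returns flood depth (m)."""
    return np.maximum(0.0, (rain_mm - 60.0) / 50.0)

def damage_fraction(depth_m):
    """Vulnerability module: simple depth-damage curve saturating at 1."""
    return np.clip(depth_m / 2.0, 0.0, 1.0)

def financial_loss(frac, exposure=250e6, deductible=1e6, limit=100e6):
    """Financial module: apply exposure, deductible and limit per event."""
    ground_up = frac * exposure
    return np.clip(ground_up - deductible, 0.0, limit)

# Generate a large synthetic event set and read off a loss exceedance curve.
rain = stochastic_rainfall(100_000)
loss = financial_loss(damage_fraction(flood_depth(rain)))
sorted_loss = np.sort(loss)[::-1]
exceed_prob = np.arange(1, loss.size + 1) / loss.size
print("Loss with 1% exceedance probability:",
      sorted_loss[np.searchsorted(exceed_prob, 0.01)])
```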


2012 · Vol 52 (2) · pp. 695
Author(s): Tom Wilson

As hydrocarbon resources become more marginal, reducing data uncertainty associated with exploration and appraisal drilling becomes more important. For gas condensate fields, where a great deal of the value is stored within the recovered liquids, variations in, for example, the condensate gas ratio (CGR) or the gas expansion factor (Bg) can have a considerable impact on the development strategy. Understanding the fluid in place is often key to the decision as to whether a development should go ahead or not. A considerable proportion of wells are drilled with oil-based mud, and when fluid sampling is carried out the in-place reservoir fluid is often contaminated to varying degrees by this mud. The intention of this extended abstract is to gain an understanding of how much contamination can be accurately corrected for. Several case studies are undertaken incorporating fluids from the same fields obtained in different ways. Contaminated samples sourced from small-volume probes run in open hole, for example with an MDT or RCI tool, are compared with assumed clean samples obtained downhole during drill stem tests (DSTs). A suitable equation of state (EOS) is then selected and tuned to match experimentally derived results from both data sets. The small probe sample is then numerically decontaminated for mud, producing a clean EOS for comparison. By quantifying the differences in simulated depletion experiments, a judgement can then be made as to how much contamination can accurately be compensated for.
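As a rough illustration of the "numerical decontamination" idea, the sketch below removes an assumed oil-based-mud filtrate contribution from a sampled composition by simple mole-fraction mass balance. The component slate, filtrate composition and contamination level are hypothetical, and the actual workflow described above works through a tuned equation of state rather than this bare subtraction.

```python
# Illustrative sketch only: back out a reservoir composition from a sample
# contaminated by oil-based-mud (OBM) filtrate via mole-fraction mass balance.
import numpy as np

components = ["C1", "C2-C6", "C7+"]
sampled = np.array([0.70, 0.18, 0.12])       # contaminated sample (mole fractions)
mud_filtrate = np.array([0.00, 0.05, 0.95])  # assumed OBM filtrate composition
contamination = 0.08                         # assumed mole fraction of filtrate in sample

# z_sampled = (1 - w) * z_reservoir + w * z_mud  =>  solve for z_reservoir
reservoir = (sampled - contamination * mud_filtrate) / (1.0 - contamination)
reservoir /= reservoir.sum()  # renormalize against rounding error

for name, z in zip(components, reservoir):
    print(f"{name}: {z:.3f}")
```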


Buildings · 2021 · Vol 11 (10) · pp. 475
Author(s): Omar M. Nofal, John W. van de Lindt, Harvey Cutler, Martin Shields, Kevin Crofton

The growing number of flood disasters worldwide and the subsequent catastrophic consequences of these events have revealed the flood vulnerability of communities. Flood impact predictions are essential for better flood risk management, which can in turn improve flood preparedness for vulnerable communities. Early flood warnings can provide households and business owners with additional time to save certain possessions or products in their buildings. This can be accomplished by elevating some of the water-sensitive components (e.g., appliances, furniture, electronics, etc.) or installing a temporary flood barrier. Although many qualitative and quantitative flood risk models have been developed and highlighted in the literature, the resolution used in these models does not allow a detailed analysis of flood mitigation at the building and community levels. Therefore, in this article, a high-fidelity flood risk model was used that links the outputs of a high-resolution flood hazard model with a component-based probabilistic flood vulnerability model to account for the damage to each building within the community. The developed model made it possible to investigate the benefits of a precipitation forecast system that provides lead time for the community to protect its assets and thereby reduce flood-induced losses.
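A minimal sketch of the component-based idea for a single building, with entirely hypothetical components, elevations, replacement costs and flood depth (not the authors' fragility functions): each component contributes its cost to the loss once the water level exceeds its elevation, so elevating water-sensitive contents ahead of a forecast flood directly reduces the estimated loss.

```python
# Minimal sketch of a component-based flood loss calculation for one building.
def component_loss(flood_depth_m, components):
    """Sum the replacement cost of each component inundated above its elevation."""
    loss = 0.0
    for name, elevation_m, cost in components:
        if flood_depth_m > elevation_m:
            loss += cost
    return loss

baseline = [
    ("appliances", 0.1, 4000.0),
    ("furniture", 0.1, 6000.0),
    ("electronics", 0.5, 3000.0),
    ("drywall/finishes", 0.0, 12000.0),
]
# Mitigation enabled by an early warning: elevate the water-sensitive contents.
mitigated = [
    ("appliances", 1.2, 4000.0),
    ("furniture", 1.2, 6000.0),
    ("electronics", 1.2, 3000.0),
    ("drywall/finishes", 0.0, 12000.0),
]

depth = 0.8  # forecast flood depth at the building (m)
print("loss without mitigation:", component_loss(depth, baseline))
print("loss with elevated contents:", component_loss(depth, mitigated))
```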


2020 · Vol 24 (7) · pp. 3725-3735
Author(s): Ali Fallah, Sungmin O, Rene Orth

Abstract. Precipitation is a crucial variable for hydro-meteorological applications. Unfortunately, rain gauge measurements are sparse and unevenly distributed, which substantially hampers the use of in situ precipitation data in many regions of the world. The increasing availability of high-resolution gridded precipitation products presents a valuable alternative, especially over poorly gauged regions. This study examines the usefulness of current state-of-the-art precipitation data sets in hydrological modeling. For this purpose, we force a conceptual hydrological model with multiple precipitation data sets in >200 European catchments to obtain runoff and evapotranspiration. We consider a wide range of precipitation products, which are generated via (1) the interpolation of gauge measurements (E-OBS and Global Precipitation Climatology Centre (GPCC) V.2018), (2) data assimilation into reanalysis models (ERA-Interim, ERA5, and Climate Forecast System Reanalysis – CFSR), and (3) a combination of multiple sources (Multi-Source Weighted-Ensemble Precipitation; MSWEP V2). Evaluation is done at the daily and monthly timescales during the period of 1984–2007. We find that simulated runoff values are highly dependent on the accuracy of precipitation inputs; in contrast, simulated evapotranspiration is generally much less influenced in our comparatively wet study region. We also find that the impact of precipitation uncertainty on simulated runoff increases towards wetter regions, while the opposite is observed in the case of evapotranspiration. Finally, we perform an indirect performance evaluation of the precipitation data sets by comparing the runoff simulations with streamflow observations. Here, E-OBS yields particularly strong agreement, while ERA5, GPCC V.2018, and MSWEP V2 also perform well. We further reveal climate-dependent performance variations of the considered data sets, which can be used to guide their future development. The overall best agreement is achieved when using an ensemble mean generated from all the individual products. In summary, our findings highlight a climate-dependent propagation of precipitation uncertainty through the water cycle; while runoff is strongly impacted in comparatively wet regions, such as central Europe, there are increasing implications for evapotranspiration in drier regions.
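The core experimental loop (force one conceptual model with several precipitation products, then score the simulated runoff against a reference) can be sketched as follows. The single-bucket model, the synthetic "products" built by perturbing a common rainfall series, and the use of Nash-Sutcliffe efficiency as the score are all illustrative assumptions, not the study's actual model, products or data.

```python
# Minimal sketch: run a toy bucket model with several precipitation inputs and
# compare the simulated runoff against a reference simulation.
import numpy as np

rng = np.random.default_rng(0)
n_days = 365
truth_precip = rng.gamma(shape=0.8, scale=5.0, size=n_days)  # "true" rainfall (mm/day)

# Stand-ins for gauge/reanalysis/satellite products: truth plus product-specific error.
products = {
    "gauge-interpolated": truth_precip * rng.normal(1.0, 0.05, n_days),
    "reanalysis": truth_precip * rng.normal(1.0, 0.15, n_days),
    "satellite": truth_precip * rng.normal(1.0, 0.30, n_days),
}

def bucket_model(precip, capacity=150.0, k=0.05, et_demand=2.0):
    """Toy conceptual model: one storage with evapotranspiration, overflow and linear outflow."""
    storage, runoff = 50.0, np.zeros_like(precip)
    for t, p in enumerate(precip):
        storage = max(0.0, storage + p - et_demand)
        overflow = max(0.0, storage - capacity)
        storage -= overflow
        runoff[t] = k * storage + overflow
        storage -= k * storage
    return runoff

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs_runoff = bucket_model(np.clip(truth_precip, 0, None))
for name, precip in products.items():
    print(f"{name}: NSE = {nse(bucket_model(np.clip(precip, 0, None)), obs_runoff):.2f}")
```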


2021 · Vol 2021 (7)
Author(s): Jeremy Baron, Daniel Reichelt, Steffen Schumann, Niklas Schwanemann, Vincent Theeuwes

Abstract Soft-drop grooming of hadron-collision final states has the potential to significantly reduce the impact of non-perturbative corrections, and in particular the underlying-event contribution. This will eventually enable a more direct comparison of accurate perturbative predictions with experimental measurements. In this study we consider soft-drop groomed dijet event shapes. We derive general results needed to perform the resummation of suitable event-shape variables to next-to-leading logarithmic (NLL) accuracy matched to exact next-to-leading order (NLO) QCD matrix elements. We compile predictions for the transverse-thrust shape accurate to NLO + NLL′ using the implementation of the Caesar formalism in the Sherpa event generator framework. We complement this by state-of-the-art parton- and hadron-level predictions based on NLO QCD matrix elements matched with parton showers. We explore the potential to mitigate non-perturbative corrections for particle-level and track-based measurements of transverse thrust by considering a wide range of soft-drop parameters. We find that soft-drop grooming is indeed very efficient in removing the underlying event. This motivates future experimental measurements to be compared to precise QCD predictions and employed to constrain non-perturbative models in Monte Carlo simulations.
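For reference, the standard definitions behind the two key ingredients are sketched below: the soft-drop condition tested at each declustering step (with grooming parameters z_cut and beta and a reference radius R_0), and the transverse-thrust event shape. The paper's exact grooming setup and normalisation conventions may differ.

```latex
% Soft-drop criterion: at each declustering step into branches 1 and 2,
% keep the pair only if the softer branch carries a large enough momentum fraction.
\frac{\min(p_{T,1}, p_{T,2})}{p_{T,1} + p_{T,2}}
  > z_{\mathrm{cut}} \left( \frac{\Delta R_{12}}{R_0} \right)^{\beta}

% Transverse thrust: maximised over unit vectors n_T in the transverse plane.
T_\perp = \max_{\vec{n}_T}
  \frac{\sum_i \lvert \vec{p}_{T,i} \cdot \vec{n}_T \rvert}{\sum_i \lvert \vec{p}_{T,i} \rvert}
```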


2021 · Vol 8 (1)
Author(s): Yahya Albalawi, Jim Buckley, Nikola S. Nikolov

Abstract. This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four out of the 26 pre-processing techniques improve the classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared to an F1 score of 90.8% but lower accuracy of 70.89% achieved by Mazajak CBOW with the same architecture. Our results also show that the performance of the best of the traditional classifiers we trained is comparable to the deep learning methods on the first data set, but significantly worse on the second data set.
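A minimal sketch, assuming TensorFlow/Keras, of the kind of BLSTM classifier described above: a frozen embedding layer that would hold the pretrained vectors (a random placeholder matrix stands in for the Mazajak embeddings, whose loading is not shown) feeding a bidirectional LSTM for a binary health-related/other decision. The layer sizes and hyperparameters are illustrative, not the paper's configuration.

```python
# Sketch of a BLSTM tweet classifier on top of frozen pretrained embeddings.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, max_len = 20_000, 300, 60
rng = np.random.default_rng(0)
# Placeholder for the pretrained embedding matrix; building the word index and
# loading the real vectors is omitted here.
embedding_matrix = rng.random((vocab_size, embed_dim)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, embed_dim, name="pretrained_embeddings"),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # health-related vs. other
])
# Insert the pretrained vectors and freeze them (frozen-embedding setup).
model.get_layer("pretrained_embeddings").set_weights([embedding_matrix])
model.get_layer("pretrained_embeddings").trainable = False
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would use integer-encoded, pre-processed tweets, e.g.:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5, batch_size=64)
```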


2021 · pp. 000276422110216
Author(s): Kazimierz M. Slomczynski, Irina Tomescu-Dubrow, Ilona Wysmulek

This article proposes a new approach to analyzing protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world’s nations in specific periods, researchers increasingly turn to ex-post harmonization of different survey data sets not a priori designed as comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that the variation in source data, especially deviations from standards of survey documentation, data processing, and computer files (as proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use), is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of measures of survey quality on indicators of protest participation must be rejected. Measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the populations attending demonstrations or signing petitions.
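The kind of analysis described (regressing harmonized protest indicators on survey-quality measures while controlling for questionnaire-item variability) can be sketched roughly as follows. The table, the column names and the tiny illustrative numbers are hypothetical, not the SDR variables or the 1,184-survey data set.

```python
# Rough sketch: one row per national survey, protest indicator regressed on
# survey-quality measures with a questionnaire-item control.
import pandas as pd
import statsmodels.formula.api as smf

surveys = pd.DataFrame({
    "pct_demonstrated":  [0.12, 0.08, 0.21, 0.05, 0.17, 0.09, 0.14, 0.07],
    "doc_quality":       [0.9, 0.4, 0.8, 0.3, 0.7, 0.5, 0.8, 0.2],   # documentation score
    "processing_errors": [1, 4, 0, 6, 2, 3, 1, 5],                   # data-processing errors
    "item_scale_length": [4, 2, 4, 2, 5, 3, 4, 2],                   # questionnaire item control
})

model = smf.ols(
    "pct_demonstrated ~ doc_quality + processing_errors + item_scale_length",
    data=surveys,
).fit()
print(model.summary())
print("Share of inter-survey variance explained (R^2):", round(model.rsquared, 3))
```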


2012 · Vol 2012 · pp. 1-18
Author(s): Magdalena Murawska, Dimitris Rizopoulos, Emmanuel Lesaffre

In transplantation studies, longitudinal measurements are often collected for important markers prior to the actual transplantation. Using only the last available measurement as a baseline covariate in a survival model for the time to graft failure discards the whole longitudinal evolution. We propose a two-stage approach to handle this type of data set using all available information. At the first stage, we summarize the longitudinal information with a nonlinear mixed-effects model, and at the second stage, we include the empirical Bayes estimates of the subject-specific parameters as predictors in the Cox model for the time to allograft failure. To take into account that the estimated subject-specific parameters are included in the model, we use a Monte Carlo approach and sample from the posterior distribution of the random effects given the observed data. We illustrate our proposal with a study of the impact of renal resistance evolution on graft survival.
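A minimal sketch of the two-stage idea on simulated data, with two simplifications made explicit: a linear mixed model (statsmodels) stands in for the nonlinear mixed-effects model, and the subject-level empirical Bayes estimates are plugged straight into a Cox model (lifelines) without the Monte Carlo step that propagates their uncertainty. All variable names and the data are illustrative.

```python
# Two-stage sketch: mixed model on longitudinal marker data, then a Cox model
# using the subject-specific (empirical Bayes) estimates as predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n_subj, n_visits = 60, 5

# Stage 0: simulate pre-transplant longitudinal marker trajectories.
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
slope = rng.normal(0.5, 0.3, n_subj)  # subject-specific marker evolution
marker = 2.0 + slope[subj] * time + rng.normal(0, 0.3, subj.size)
long_df = pd.DataFrame({"id": subj, "time": time, "marker": marker})

# Stage 1: mixed model with random intercept and slope; extract subject estimates.
mixed = smf.mixedlm("marker ~ time", long_df, groups=long_df["id"],
                    re_formula="~time").fit()
re = pd.DataFrame(mixed.random_effects).T  # one row per subject
re.columns = ["re_intercept", "re_slope"]

# Stage 2: Cox model for time to graft failure using the stage-1 estimates.
surv = re.copy()
surv["duration"] = rng.exponential(scale=np.exp(-slope), size=n_subj)
surv["event"] = 1
cph = CoxPHFitter().fit(surv, duration_col="duration", event_col="event")
cph.print_summary()
```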


1994 · Vol 33 (04) · pp. 390-396
Author(s): J. G. Stewart, W. G. Cole

Abstract. Metaphor graphics are data displays designed to look like corresponding variables in the real world, but in a non-literal sense of “look like”. Evaluation of the impact of these graphics on human problem solving has twice been carried out, but with conflicting results. The present experiment attempted to clarify the discrepancies between these findings by using a complex task in which expert subjects interpreted respiratory data. The metaphor graphic display led to interpretations twice as fast as a tabular (flowsheet) format, suggesting that the conflict between earlier studies is due either to differences in training or to differences in goodness of metaphor. Findings to date indicate that metaphor graphics work with complex as well as simple data sets, pattern detection as well as single number reporting tasks, and with expert as well as novice subjects.


2015 · Vol 8 (1) · pp. 421-434
Author(s): M. P. Jensen, T. Toto, D. Troyan, P. E. Ciesielski, D. Holdridge, ...

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
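To show why humidity corrections matter for derived convective levels, here is a minimal sketch using the common rule of thumb that the lifting condensation level sits roughly 125 m above the surface per degree of dewpoint depression. The temperatures and the size of the dewpoint bias are illustrative, not MC3E values or the campaign's correction method.

```python
# Sketch: a dry bias in radiosonde dewpoint raises the estimated LCL.
def lcl_height_m(temp_c, dewpoint_c):
    """Approximate LCL height above the surface (m), ~125 m per degree of
    dewpoint depression."""
    return 125.0 * (temp_c - dewpoint_c)

surface_temp_c = 30.0
dewpoint_raw_c = 18.0        # radiosonde dewpoint with an assumed dry bias
dewpoint_corrected_c = 20.0  # after a humidity bias correction

for label, td in [("raw", dewpoint_raw_c), ("bias-corrected", dewpoint_corrected_c)]:
    print(f"{label}: LCL ≈ {lcl_height_m(surface_temp_c, td):.0f} m")
```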

