Comparison of storm damage functions and their performance

2015, Vol. 15(4), pp. 769-788
Author(s): B. F. Prahl, D. Rybski, O. Burghoff, J. P. Kropp

Abstract. Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses, which poses difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind–damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal a dependence between model and data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of model applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme losses, for which it shows the least bias and errors comparable to the probabilistic model by Prahl et al. (2012).
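For context, the sketch below shows a minimal excess-over-threshold power-law loss index of the kind used in the deterministic model of Klawa and Ulbrich (2003): the exponent of 3 corresponds to the cubic wind–damage relationship discussed above, while steeper damage functions simply correspond to larger exponents. The threshold values, gusts, and exposure weights are illustrative assumptions, not values from the study.

```python
import numpy as np

def storm_loss_index(gusts, v98, weights, exponent=3.0):
    """Excess-over-threshold power-law storm loss index.

    gusts    : daily maximum gust speed per district [m/s]
    v98      : local 98th-percentile gust speed per district [m/s]
    weights  : exposure weight per district (e.g. population or insured value)
    exponent : 3.0 gives the classic cubic relationship; steeper damage
               functions correspond to larger exponents
    """
    excess = np.maximum(gusts / v98 - 1.0, 0.0)   # only gusts above the local threshold contribute
    return float(np.sum(weights * excess ** exponent))

# Hypothetical storm day over three districts
gusts = np.array([32.0, 41.0, 27.0])        # m/s
v98 = np.array([25.0, 28.0, 30.0])          # m/s
weights = np.array([1.2e5, 3.4e5, 0.8e5])   # e.g. number of residential buildings
print(storm_loss_index(gusts, v98, weights))
```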

2014, Vol. 2(9), pp. 5835-5887
Author(s): B. F. Prahl, D. Rybski, O. Burghoff, J. P. Kropp

Abstract. Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses, which poses difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic damage–wind relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two wind data sources, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of wind data has little impact on the evaluation of German storm loss, local variability reveals a dependence between model and data choice. Based on our analysis, we favour the application of two probabilistic approaches, which fare best in terms of the accuracy of their expected value and overall exhibit the lowest variability.


2012, Vol. 64(1), pp. 17471
Author(s): Kai Born, Patrick Ludwig, Joaquim G. Pinto

2020, Vol. 8(12), pp. 1015
Author(s): Alicia Takbash, Ian R. Young

A non-stationary extreme value analysis of 41 years (1979–2019) of global ERA5 (European Centre for Medium-Range Weather Forecasts Reanalysis) significant wave height data is undertaken to investigate trends in the values of 100-year significant wave height, Hs100. The analysis shows that there has been a statistically significant increase in the value of Hs100 over large regions of the Southern Hemisphere. There have also been smaller decreases in Hs100 in the Northern Hemisphere, although the related trends are generally not statistically significant. The increases in the Southern Hemisphere are a result of an increase in either the frequency or intensity of winter storms, particularly in the Southern Ocean.
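As a rough illustration of how a 100-year return value such as Hs100 is obtained from annual maxima, the sketch below fits a stationary GEV distribution with scipy and reads off the 1-in-100-year quantile. The synthetic data and the stationary fit are assumptions for illustration only; the study itself uses a non-stationary extreme value analysis with time-dependent parameters.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical annual-maximum significant wave heights [m] for one grid point,
# standing in for 41 years (1979-2019) of reanalysis-derived maxima
annual_max_hs = rng.gumbel(loc=9.0, scale=1.2, size=41)

# Fit a GEV distribution to the annual maxima (stationary illustration only)
shape, loc, scale = genextreme.fit(annual_max_hs)

# Hs100: the level exceeded on average once every 100 years, i.e. the quantile
# with annual non-exceedance probability 1 - 1/100
hs100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"Estimated Hs100: {hs100:.2f} m")
```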


2009, Vol. 23(4), pp. 675-698
Author(s): Thomas Voice

Recent advances in the mathematical analysis of flow control have prompted the creation of the Scalable TCP (STCP) and Exponential RED (E-RED) algorithms. These are designed to be scalable under the popular deterministic delay stability modeling framework. In this article, we analyze stochastic models of STCP and STCP combined with E-RED link behavior. We find that under certain plausible network conditions, these probabilistic models also exhibit scalable behavior. In particular, we derive parameter choice schemes for which the equilibrium coefficients of variation of flow rates are bounded, however large, fast, or complex the network. Our model is shown to exhibit behavior similar to the mean field convergence that has recently been observed in TCP.
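The coefficient of variation of a flow rate can be estimated from a stochastic rate process as in the toy sketch below, which uses a generic multiplicative increase/multiplicative decrease rule with STCP-like parameters and an ad hoc congestion-marking probability. This is not the authors' model, only an illustration of the quantity whose boundedness is at issue.

```python
import numpy as np

rng = np.random.default_rng(1)

a, b = 0.01, 0.125        # STCP-style increase/decrease parameters (assumed values)
capacity = 1000.0         # link capacity in packets per RTT (assumed)
rtts = 200_000
rate = 500.0              # current sending rate in packets per RTT
rates = np.empty(rtts)

for t in range(rtts):
    # ad hoc marking probability that rises as the rate approaches capacity
    p_signal = min(1.0, max(0.0, (rate - 0.8 * capacity) / (0.4 * capacity)))
    if rng.random() < p_signal:
        rate *= 1.0 - b   # multiplicative decrease on a congestion signal
    else:
        rate *= 1.0 + a   # multiplicative increase otherwise
    rates[t] = rate

equilibrium = rates[rtts // 2:]               # discard the initial transient
cv = equilibrium.std() / equilibrium.mean()   # coefficient of variation of the flow rate
print(f"Equilibrium coefficient of variation: {cv:.3f}")
```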


2021
Author(s): Stéphanie Leroux, Jean-Michel Brankart, Aurélie Albert, Jean-Marc Molines, Laurent Brodeau, ...

In this contribution, we investigate the predictability properties of the ocean dynamics using an ensemble of medium-range numerical forecasts. This question is particularly relevant for ocean dynamics at small scales (< 30 km), where sub-mesoscale dynamics is responsible for the fast evolution of ocean properties. Relatively little is known about the predictability properties of a high-resolution model, and hence about the accuracy and resolution that are needed from the observation system used to generate the initial conditions.

A kilometric-scale regional configuration of NEMO for the Western Mediterranean (MEDWEST60, at 1/60° horizontal resolution) has been developed, using boundary conditions from a larger North Atlantic configuration at the same resolution (eNATL60). This deterministic model has then been transformed into a probabilistic model by introducing innovative stochastic parameterizations of model uncertainties resulting from unresolved processes. The purpose here is primarily to generate ensembles of model states to initialize predictability experiments. The stochastic parameterization is also applied to assess the possible impact of irreducible model uncertainties on the skill of the forecast. A set of three ensemble experiments (20 members, 2 months) is performed: one with the deterministic model initialized with perturbed initial conditions, and two with the stochastic model, for two different amplitudes of model uncertainty. In all three experiments, the spread of the ensemble is shown to emerge from the small scales (10 km wavelength) and progressively upscale to the largest structures. After two months, the ensemble variance saturates over most of the spectrum (except at the largest scales), whereas the small scales (< 30 km) are fully decorrelated between the different members. These ensemble simulations are thus appropriate to provide a statistical description of the dependence between initial accuracy and forecast accuracy over the full range of potentially useful forecast time lags (typically between 1 and 20 days).

The predictability properties are statistically assessed using a cross-validation algorithm (i.e. using each ensemble member in turn as the reference truth and the remaining 19 members as the ensemble forecast), together with specific scores to characterize the initial and forecast accuracy. From the joint distribution of initial and final scores, it is then possible to quantify the probability distribution of the forecast score given the initial score, or reciprocally to derive conditions on the initial accuracy needed to obtain a target forecast skill. In this contribution, the misfit between ensemble members is quantified in terms of overall accuracy (CRPS score), geographical position of the ocean structures (location score), and spatial spectral decorrelation of the Sea Surface Height 2-D fields (spectral score). For example, our results show that, in the region and period of interest, the initial location accuracy required (necessary condition) with a perfect (deterministic) model to obtain a forecast location accuracy of 10 km with 95% confidence is about 8 km for a 1-day forecast, 4 km for a 5-day forecast, and 1.5 km for a 10-day forecast, and this requirement cannot be met with a 15-day or longer forecast.
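The CRPS-based cross-validation described above can be sketched as follows: each member in turn serves as the verifying "truth" and the remaining 19 members form the forecast ensemble scored against it. The ensemble values below are synthetic placeholders, and the sample-based CRPS estimator shown is one standard choice, not necessarily the exact implementation used in the study.

```python
import numpy as np

def crps_ensemble(forecast_members, obs):
    """Sample-based CRPS for a scalar variable:
    CRPS = E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'."""
    x = np.asarray(forecast_members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Hypothetical 20-member ensemble of SSH at one grid point and one lead time [m]
rng = np.random.default_rng(42)
ensemble = 0.15 + 0.02 * rng.standard_normal(20)

# Cross-validation: each member in turn plays the role of the "truth",
# the remaining 19 members are treated as the forecast ensemble
scores = []
for i in range(ensemble.size):
    truth = ensemble[i]
    members = np.delete(ensemble, i)
    scores.append(crps_ensemble(members, truth))

print(f"Mean cross-validated CRPS: {np.mean(scores):.4f} m")
```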


2012, Vol. 49(1), pp. 27-44
Author(s): Chih-Sheng Ku, C. Hsein Juang, Chi-Wen Chang, Jianye Ching

The Robertson and Wride method is the most widely used cone penetration test (CPT)-based method for soil liquefaction evaluation. This method is a deterministic model, which expresses liquefaction potential in terms of a factor of safety. On many occasions, there is a need to express the liquefaction potential in terms of liquefaction probability. Although several probabilistic models are available in the literature, there is an advantage in having a probabilistic version of the Robertson and Wride method, so that the engineer who prefers this method can obtain additional information on liquefaction probability with minimal extra effort. In this paper, a simple model is developed that links the factor of safety determined by the Robertson and Wride method to the liquefaction probability. The model, referred to as the probabilistic RW model, is developed and verified in a mathematically rigorous manner. Simplified equations for assessing the variation of liquefaction probability caused by uncertainty in the input parameters are also developed. Example applications are presented to demonstrate the developed models.
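A common functional form in this family of mappings from factor of safety (FS) to liquefaction probability is the logistic-type curve sketched below. The calibration coefficients shown are placeholders, not the values derived in the paper; in practice they have to be fitted to case-history data.

```python
import numpy as np

def liquefaction_probability(fs, a=1.0, b=3.0):
    """Map a deterministic factor of safety FS to a liquefaction probability
    using the logistic-type form P_L = 1 / (1 + (FS / a)**b).

    a, b are placeholder calibration coefficients, NOT the values derived
    in the paper; they must be fitted to case-history data.
    """
    fs = np.asarray(fs, dtype=float)
    return 1.0 / (1.0 + (fs / a) ** b)

# With these placeholder coefficients, FS = 1 maps to P_L = 0.5;
# larger FS gives a lower liquefaction probability.
for fs in (0.6, 0.8, 1.0, 1.2, 1.5):
    print(f"FS = {fs:.1f} -> P_L = {liquefaction_probability(fs):.2f}")
```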


2021
Author(s): Andreas Trojand, Nico Becker, Henning Rust

Severe winter storms are one of the most damaging natural hazards for European residential buildings. Previous studies mainly focused on the loss ratio (loss value / total insured sum) as a monetary measure of damage. In this study the focus is on the claim ratio (number of claims / number of contracts), which is derived from a storm loss dataset provided by the German Insurance Association. Due to its magnitude, the claim ratio might be a more intuitive parameter for use in impact-based warnings than the loss ratio.

In a first step, loss ratios and claim ratios in German administrative districts are compared to investigate differences and similarities between the two variables. While there is no significant change in the ratio between claim ratio and loss ratio with increasing wind speeds, a tendency towards lower loss ratios in urban areas can be confirmed.

In a second step, a generalized linear model for daily claim ratios is developed using the daily maximum wind gust (ERA5) and different non-meteorological indicators of vulnerability and exposure as predictor variables. The non-meteorological predictors are derived from the 2011 Census. They include information about district-average construction years, the number of apartments per building, and other factors, to better understand how these characteristics affect the number of buildings damaged by windstorms. The modelling procedure is divided into two steps. First, a logistic regression model is used to model the probability of a claim ratio larger than zero. Second, generalized linear models with different link functions are compared regarding their ability to predict claim ratios larger than zero. A criterion for model selection is implemented in a cross-validation setting, and the models of both steps are verified. Both steps show an improvement over the climatological forecast, and in both cases the addition of vulnerability and exposure data leads to a decrease in the mean squared error.
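A minimal sketch of such a two-part model is given below, using statsmodels: a logistic regression for the occurrence of a non-zero claim ratio, followed by a Gamma GLM with a log link (one possible link choice) for the positive claim ratios. All data, predictors, and coefficients are synthetic placeholders, not the study's dataset or selected model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical district-day data: daily maximum gust [m/s] and one
# vulnerability indicator (e.g. share of buildings built before a reference year)
n = 2000
gust = rng.uniform(10, 45, n)
vuln = rng.uniform(0, 1, n)
X = sm.add_constant(np.column_stack([gust, vuln]))

# Synthetic claim ratios: zero on calm days, positive and gust-dependent otherwise
p_claim = 1 / (1 + np.exp(-(0.3 * gust - 8)))
occurs = rng.random(n) < p_claim
claim_ratio = np.where(occurs, np.exp(0.15 * gust + 0.5 * vuln - 8) * rng.gamma(2.0, 0.5, n), 0.0)

# Step 1: logistic regression for the probability of a non-zero claim ratio
occ_model = sm.GLM(occurs.astype(float), X, family=sm.families.Binomial()).fit()

# Step 2: GLM for the positive claim ratios (Gamma family with log link as one candidate)
pos = claim_ratio > 0
amt_model = sm.GLM(claim_ratio[pos], X[pos], family=sm.families.Gamma(sm.families.links.Log())).fit()

# Combined prediction: P(claim ratio > 0) * E[claim ratio | claim ratio > 0]
pred = occ_model.predict(X) * amt_model.predict(X)
print(pred[:5])
```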


Atmosphere, 2021, Vol. 12(12), pp. 1667
Author(s): Jianhong Wang, Nour Alakol, Xing Wang, Dongpo He, Kanike Raghavendra Kumar, ...

The eastern inland region of Syria has a Mediterranean climate in the north and a tropical desert climate in the south, resulting in a dry south and a wet north, especially in winter. A circulation dynamics analysis of 16 strong winter precipitation events shows that the key system is the dry-and-warm-front cyclone. In most cases (81–100% of the 16 cases), the moisture content in the northern part of the cyclone is higher than that in the southern part (influenced by the Mediterranean climate zone), the humidity in the middle layer is higher than that near the surface (uplifting along the dry warm front), and the thickness of the wet layer and the vertical ascending layer expands markedly upward (as shown by the satellite cloud-top reflection). These characteristics lead to moist thermodynamic instability in the eastern part of the cyclone (dry and warm air at low levels and wet and cold air at upper levels). The cyclone flow transports momentum into the local humid layer of the Mediterranean climate belt, which then triggers unstable conditions and strong rainfall. Considering the limitations of the Syrian ground station network, NCEP/CFSR global reanalysis data and MODIS Aqua-3 cloud parameter data are used to build a multi-source factor index of winter precipitation from 2002 to 2016. A decision tree prediction model is then established, in which the factor index is organized into tree structures of nodes and branches using information-entropy splitting rules. Suitable tree models are tuned and selected through an automated training and testing process. The forecast model can classify rainfall with an accuracy of more than 90% for strong rainfall exceeding 30 mm.
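As an illustration of entropy-based tree construction, the sketch below trains a scikit-learn decision tree with the entropy splitting criterion on synthetic stand-ins for the multi-source factor index. The predictors, class labels, and depth limit are assumptions for illustration, not the study's actual configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Hypothetical multi-source factor index: reanalysis-based circulation/moisture
# factors and satellite cloud parameters for each winter day
n = 1500
X = rng.standard_normal((n, 5))            # stand-ins for the multi-source factors
strong_rain = (X[:, 0] + 0.8 * X[:, 2] + 0.3 * rng.standard_normal(n)) > 1.5  # ">30 mm" class

X_train, X_test, y_train, y_test = train_test_split(X, strong_rain, test_size=0.3, random_state=0)

# Information-entropy splitting; the depth limit stands in for the automated
# training/testing selection of a suitable tree shape
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
tree.fit(X_train, y_train)
print(f"Test accuracy: {tree.score(X_test, y_test):.2f}")
```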


2015, Vol. 3(11), pp. 6845-6881
Author(s): B. F. Prahl, D. Rybski, M. Boettle, J. P. Kropp

Abstract. Most climate change impacts manifest in the form of natural hazards. For example, sea-level rise and changes in storm climatology are expected to increase the frequency and magnitude of flooding events. In practice, there is a need for comprehensive damage assessment at an intermediate level of complexity. Answering this need, we reveal the common grounds of macroscale damage functions employed in storm damage, coastal-flood damage, and heat mortality assessment. The universal approach offers both bottom-up and top-down damage evaluation, employing either an explicit or an implicit portfolio description. Putting emphasis on the treatment of data uncertainties, we perform a sensitivity analysis across different scales. We find that the behaviour of intrinsic uncertainties at the microscale level (i.e. a single item) persists at the macroscale level (i.e. the portfolio). Furthermore, the analysis of uncertainties can reveal their specific relevance, allowing for a simplification of the modelling chain. Our results shed light on the role of uncertainties and provide useful insight for the application of a unified damage function.
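The persistence of microscale uncertainty at the portfolio level can be illustrated schematically: with an invented piecewise-linear item-level damage function, independent item-level noise largely averages out in the aggregate, whereas a systematic (fully correlated) uncertainty keeps roughly the same relative spread. The Monte Carlo sketch below is not the paper's damage function; all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)

n_items, n_draws = 1000, 5000
values = rng.uniform(1e5, 5e5, n_items)      # hypothetical item values
hazard = 1.2                                 # e.g. flood level [m] of one event

def portfolio_loss(thresholds):
    """Simple piecewise-linear micro damage function aggregated over a portfolio."""
    frac = np.clip((hazard - thresholds) / 1.0, 0.0, 1.0)   # relative damage per item
    return np.sum(frac * values)

base_thresh = rng.uniform(0.5, 1.5, n_items)  # uncertain item-level damage thresholds

# Case 1: independent (random) uncertainty per item -> averages out over the portfolio
loss_indep = np.array([portfolio_loss(base_thresh + rng.normal(0, 0.2, n_items))
                       for _ in range(n_draws)])
# Case 2: systematic uncertainty shared by all items -> persists at the portfolio level
loss_syst = np.array([portfolio_loss(base_thresh + rng.normal(0, 0.2))
                      for _ in range(n_draws)])

for name, loss in (("independent", loss_indep), ("systematic", loss_syst)):
    print(f"{name}: portfolio coefficient of variation = {loss.std() / loss.mean():.3f}")
```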


2020
Author(s): Thomas Röösli, Christoph Welker, David Bresch

We compare the risk assessment of storm-related building damage based on three different foundations: (1) insurance claims data, (2) modelled building damages based on a historic event set of wind gust data, and (3) modelled building damages based on a probabilistic extension of the historic event set. Windstorms cause large socio-economic damages in Europe. In the canton of Zurich (Switzerland) they are responsible for one third of the building damages caused by natural hazards.

The Wind Storm Information Service (WISC) of the Copernicus Climate Change Service provides open wind gust datasets for the insurance sector to understand and assess the risk of windstorms in Europe. This is the first open climatological dataset covering a longer time range than the insurance claims data of most small insurance companies. Our science–practice collaboration is a case study illustrating how climatological data can be used in risk assessments in the insurance sector and how this approach compares to risk assessments based on proprietary claims data. We describe and use a storm damage model that combines wind gust data with exposure and vulnerability information to compute an event set of modelled building damages. These modelled damages are used to calculate risk metrics relevant to the insurance industry, such as the annual expected damage (AED) and the damage of rare events with return periods of up to 250 years.

The AED calculated from the insurance claims data (i.e. the mean damage over the 35-year observation period) is 2.34 million Swiss francs (CHF). This is almost double the AED computed from the storm damage model and the historic event set (CHF 1.36 million). The storm Lothar/Martin in December 1999 is the most damaging event in both the insurance claims data (CHF 62.4 million) and the historic event set (modelled building damage of CHF 62.7 million).

Neither the insurance claims data nor the modelled building damages based on historic events are well suited to derive information about rare events with return periods considerably exceeding the observation period. To provide some information about rare events, we propose a new probabilistic event set, generated by introducing various perturbations and comprising 4'200 events. This probabilistic event set results in an AED of CHF 1.45 million and a damage of CHF 75 million for a return period of 250 years. The probabilistic event set allows testing the sensitivity of the risk to, for example, portfolio changes and changes in insurance conditions for events of higher intensity than the historic events.

Our analysis is implemented both in the GVZ's proprietary storm damage model and in the open-source risk assessment platform CLIMADA (https://github.com/CLIMADA-project/climada_python). This guarantees scientific reproducibility and offers insurance companies the opportunity to apply this methodology to their own portfolio with a low entry threshold.
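The two risk metrics quoted above can be computed from any event set of modelled damages and occurrence frequencies, as sketched below: the AED is the frequency-weighted sum of event damages, and the 250-year damage is read off the empirical damage exceedance curve. The event damages and frequencies here are synthetic placeholders, not the GVZ or CLIMADA results; CLIMADA provides equivalent metrics within its own framework.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical probabilistic event set: modelled building damage per event [CHF]
# and the annual occurrence frequency assigned to each event (placeholders)
n_events = 4200
damages = rng.lognormal(mean=11.0, sigma=1.8, size=n_events)
frequencies = np.full(n_events, 1.0 / 1000.0)

# Annual expected damage: frequency-weighted sum over the event set
aed = np.sum(frequencies * damages)

# Empirical damage exceedance curve: sort events by damage (descending) and
# accumulate their frequencies to get the annual exceedance frequency
order = np.argsort(damages)[::-1]
exceedance_freq = np.cumsum(frequencies[order])
return_periods = 1.0 / exceedance_freq

# 250-year damage: smallest damage whose empirical return period is still >= 250 years
damage_250y = damages[order][return_periods >= 250.0][-1]

print(f"AED: CHF {aed / 1e6:.2f} million")
print(f"250-year damage: CHF {damage_250y / 1e6:.1f} million")
```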

