Spatial stochastic simulation to aid local extreme value analysis of cyclone-induced wave heights when numerical hydrodynamic simulations are scarce

Author(s):  
Jeremy Rohmer ◽  
Rodrigo Pedreros ◽  
Yann Krien

To estimate return levels of wave heights (Hs) induced by tropical cyclones at the coast, a commonly used approach is to (1) randomly generate a large number of synthetic cyclone events (typically >1,000); (2) numerically simulate the corresponding Hs over the whole domain of interest; (3) extract the Hs values at the desired location at the coast; and (4) perform a local extreme value analysis (EVA) to derive the corresponding return level. Step 2 is, however, very constraining because it often involves a numerical hydrodynamic simulator that can be prohibitive to run: this may limit the number of results available for the local EVA (typically to several hundred). In this communication, we propose a spatial stochastic simulation procedure that enlarges the database of numerical results with stochastically generated synthetic maps of Hs. To do so, we rely on a data-driven dimensionality-reduction method, either unsupervised (Principal Component Analysis) or supervised (Partial Least Squares Regression), trained with a limited number of pre-existing numerically simulated Hs maps. The procedure is applied to the island of Guadeloupe, and the results are compared to the commonly used approach applied to a large database of Hs values computed for nearly 2,000 synthetic cyclones (representative of 3,200 years; Krien et al., NHESS, 2015). When using only a hundred cyclones, we show that the 100-year return levels can be estimated with a mean absolute percentage error (derived from a bootstrap-based procedure) ranging between 5 and 15% around the coasts, while keeping the width of the 95% confidence interval of the same order of magnitude as that obtained with the full database. Without the synthetic Hs map augmentation, the error and the confidence interval width both increase by nearly 100%. Careful attention is paid to the tuning of the approach by testing the sensitivity to the spatial domain size, the information loss due to data compression, and the number of cyclones. This study has been carried out within the Carib-Coast INTERREG project (https://www.interreg-caraibes.fr/carib-coast).
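As an illustration of the augmentation idea described above, the following is a minimal sketch assuming PCA on flattened Hs maps, Gaussian resampling of the principal-component scores, and a simple extreme value fit at one coastal grid cell. The array names, grid size, resampling model and the final GEV step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): augmenting a small set of simulated
# Hs maps with PCA-based stochastic simulation, then running a local EVA.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import genextreme

rng = np.random.default_rng(42)
hs_maps = rng.gamma(2.0, 2.0, size=(100, 50 * 50))   # 100 simulated maps, flattened 50x50 grid

pca = PCA(n_components=0.99)            # keep components explaining 99% of the variance
scores = pca.fit_transform(hs_maps)     # low-dimensional representation of each map

# Stochastically generate new score vectors (here: a Gaussian fitted to the scores)
mean, cov = scores.mean(axis=0), np.cov(scores, rowvar=False)
new_scores = rng.multivariate_normal(mean, cov, size=2000)
synthetic_maps = pca.inverse_transform(new_scores)   # back to synthetic Hs maps

# Local EVA at one coastal grid cell using the augmented sample.  The GEV fit
# below is a stand-in for the paper's local EVA; converting event-based samples
# into annual return levels (cyclone rate, bootstrap error bars) is omitted.
hs_local = np.concatenate([hs_maps[:, 1234], synthetic_maps[:, 1234]])
c, loc, scale = genextreme.fit(hs_local)
print("upper-tail quantile (placeholder for the 100-year level):",
      genextreme.isf(0.01, c, loc, scale))
```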

Author(s):  
Toshikazu Kitano

Several points deserve discussion when describing the probability of hazards due to storm surge. One concerns how the uncertainty of extreme events is described in general, and another is specific to the storm-surge case. 1) The return level is one of the important results of an extreme value analysis, and the confidence interval is commonly presented as useful and desirable information about its uncertainty. Is that sufficient? The answer is negative. The return period is right, and important. But the confidence interval in this case is given for the return level, a constant quantity that is only meaningful after repeated encounters with the exceedance levels over a very long period. What we really want to know is the level that will actually occur during the coming return period, which is a stochastic variable: not a constant, and unknown even to God. A prediction interval should therefore be employed for the next realized value, which is the quantity of real interest.
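A minimal numerical illustration of this distinction (not taken from the paper): the confidence interval quantifies uncertainty about the 100-year return level, a fixed parameter, whereas the prediction interval describes the largest value actually realised during the next 100 years, a random variable. The Gumbel model and sample sizes below are assumptions.

```python
# Confidence interval of a return level vs. prediction interval of the value
# realised in the next return period (illustrative sketch).
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
annual_max = rng.gumbel(2.0, 0.5, 60)                 # 60 years of annual maxima (synthetic)

loc, scale = gumbel_r.fit(annual_max)
rl_100 = gumbel_r.isf(1.0 / 100.0, loc, scale)        # point estimate of the 100-year return level

# Bootstrap confidence interval for the return level itself (a parameter)
boot = []
for _ in range(2000):
    sample = rng.choice(annual_max, size=annual_max.size, replace=True)
    b_loc, b_scale = gumbel_r.fit(sample)
    boot.append(gumbel_r.isf(1.0 / 100.0, b_loc, b_scale))
ci = np.percentile(boot, [2.5, 97.5])

# Prediction interval for the largest value realised in the NEXT 100 years:
# quantiles of the maximum of 100 future annual maxima (parameters taken as fitted)
sim_max = rng.gumbel(loc, scale, size=(5000, 100)).max(axis=1)
pi = np.percentile(sim_max, [2.5, 97.5])

print(f"return level {rl_100:.2f}, 95% CI {ci}, 95% prediction interval {pi}")
```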


2020 ◽  
Author(s):  
Torben Schmith ◽  
Peter Thejll ◽  
Peter Berg ◽  
Fredrik Boberg ◽  
Ole Bøssing Christensen ◽  
...  

Abstract. Severe precipitation events occur rarely and are often localized in space and of short duration, but they are important for societal management of infrastructure. Therefore, there is a demand for estimating future changes in the statistics of these rare events. These are usually projected using Regional Climate Model (RCM) scenario simulations combined with extreme value analysis to obtain selected return levels of precipitation intensity. However, due to imperfections in the formulation of the physical parameterizations in the RCMs, the simulated present-day climate usually has biases relative to observations. Therefore, the RCM results are often bias-adjusted to match observations. This does not, however, guarantee that bias-adjusted projected results will match future reality better, since the bias may change in a changed climate. In the present work we evaluate different bias adjustment techniques in a changing climate. This is done in an inter-model cross-validation setup, in which each model simulation in turn plays the role of pseudo-reality, against which the remaining model simulations are bias-adjusted and validated. The study uses hourly data from present-day and RCP8.5 late 21st century simulations from 19 model runs of the EURO-CORDEX ensemble at 0.11° resolution, from which fields of selected return levels are calculated for hourly and daily timescales. The bias adjustment techniques applied to the return levels are based on extreme value analysis and include analytical quantile matching together with the simpler climate factor approach. Generally, return levels can be improved by bias adjustment, compared to obtaining them from raw scenarios. The performance of the different methods depends on the timescale considered. On the hourly timescale, the climate factor approach performs better than the quantile-matching approaches. On the daily timescale, the superior approach is to simply deduce future return levels from observations, and the second-best choice is using the quantile-mapping approaches. These results are found in all European sub-regions considered.
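The climate factor approach named above can be sketched as follows, assuming it amounts to scaling an observed present-day return level by the model's future-to-present return-level ratio; the data and return period below are placeholders, and the paper's exact formulation may differ.

```python
# Sketch of a climate-factor adjustment of return levels (assumed form).
import numpy as np
from scipy.stats import genextreme

def return_level(annual_max, T):
    """GEV return level for return period T (years) from a sample of annual maxima."""
    c, loc, scale = genextreme.fit(annual_max)
    return genextreme.isf(1.0 / T, c, loc, scale)

rng = np.random.default_rng(1)
obs_present = rng.gumbel(20.0, 5.0, 30)   # observed present-day annual maxima (illustrative)
rcm_present = rng.gumbel(25.0, 6.0, 30)   # biased RCM present-day simulation
rcm_future  = rng.gumbel(30.0, 7.0, 30)   # RCM end-of-century scenario

T = 10  # return period in years
climate_factor = return_level(rcm_future, T) / return_level(rcm_present, T)
future_estimate = climate_factor * return_level(obs_present, T)
print(f"{T}-year return level, bias-adjusted projection: {future_estimate:.1f}")
```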


Author(s):  
Lu Deng ◽  
Zhengjun Zhang

Extreme smog can have potentially harmful effects on human health, the economy and daily life. However, average (mean) values do not provide strategically useful information for the hazard analysis and control of extreme smog. This article investigates China's smog extremes by applying extreme value analysis to hourly PM2.5 data from 2014 to 2016 obtained from monitoring stations across China. By fitting a generalized extreme value (GEV) distribution to exceedances over a station-specific extreme smog level at each monitoring location, all study stations are grouped into eight categories based on the estimated mean and shape parameter values of the fitted GEV distributions. The extreme features characterized by the mean of the fitted extreme value distribution, the maximum frequency and the tail index of extreme smog at each location are analysed. These features can provide useful information for central and local government to implement differentiated treatments in cities in different categories and to pursue similar prevention goals and control strategies among cities belonging to the same category in a range of areas. Furthermore, hazardous hours, breaking probability and the 1-year return level of each station are presented by category, based on which future control and reduction targets for extreme smog are proposed for the cities of Beijing, Tianjin and Hebei as an example.
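A rough sketch of the grouping step, assuming a GEV fit per station and a categorisation by the fitted mean and shape (tail index); the synthetic data and the category cut-offs are illustrative assumptions, not the paper's values.

```python
# Illustrative station grouping by fitted GEV mean and tail index.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Hypothetical exceedance samples for ten stations
stations = {f"station_{i}": rng.gumbel(250, 80, 500) + rng.normal(0, 20) for i in range(10)}

categories = {}
for name, exceedances in stations.items():
    c, loc, scale = genextreme.fit(exceedances)
    xi = -c                                                   # scipy's c is the negated shape
    mean_class = "high" if genextreme.mean(c, loc, scale) > 300 else "low"   # illustrative cut-off
    tail_class = "heavy" if xi > 0.05 else "light"
    categories[name] = (mean_class, tail_class)
print(categories)
```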


2017 ◽  
Vol 21 (10) ◽  
pp. 5385-5399 ◽  
Author(s):  
Edouard Goudenhoofdt ◽  
Laurent Delobbe ◽  
Patrick Willems

Abstract. In Belgium, only rain gauge time series have been used so far to study extreme rainfall at a given location. In this paper, the potential of a 12-year quantitative precipitation estimation (QPE) from a single weather radar is evaluated. For the period 2005–2016, 1 and 24 h rainfall extremes from automatic rain gauges and collocated radar estimates are compared. The peak intensities are fitted to the exponential distribution using regression in Q-Q plots with a threshold rank which minimises the mean squared error. A basic radar product used as reference exhibits unrealistically high extremes and is not suitable for extreme value analysis. For 24 h rainfall extremes, which occur partly in winter, the radar-based QPE needs a bias correction. A few missing events are caused by the wind drift associated with convective cells and by strong radar signal attenuation. Differences between radar and gauge rainfall values are caused by spatial and temporal sampling, gauge underestimation and radar errors. Nonetheless, the fit to the QPE data is within the confidence interval of the gauge fit, which remains large due to the short study period. A regional frequency analysis for the 1 h duration is performed at the locations of four gauges with 1965–2008 records, using the spatially independent QPE data within a circle of 20 km. The confidence interval of the radar fit, which is small due to the sample size, contains the gauge fit for the two stations closest to the radar. In Brussels, the radar extremes are significantly higher than the gauge rainfall extremes, but similar to those observed by an automatic gauge during the same period. The extreme statistics exhibit slight variations related to topography. The radar-based extreme value analysis can be extended to other durations.
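The threshold-rank selection described above might look roughly like the following sketch, assuming an exponential Q-Q regression over the k largest peaks and a search for the rank k that minimises the mean squared error; the plotting positions and synthetic peaks are assumptions, not the paper's data.

```python
# Sketch: fit an exponential tail by linear regression in a Q-Q plot and
# choose the threshold rank minimising the regression MSE.
import numpy as np

def exponential_qq_fit(peaks, k):
    """Regress the k largest peaks against exponential plotting positions."""
    x = np.sort(peaks)[-k:]                          # k largest intensities
    n = len(peaks)
    ranks = np.arange(n - k + 1, n + 1)
    q = -np.log(1.0 - ranks / (n + 1.0))             # exponential reduced variate
    slope, intercept = np.polyfit(q, x, 1)
    mse = np.mean((x - (slope * q + intercept)) ** 2)
    return slope, intercept, mse

rng = np.random.default_rng(3)
peaks = rng.exponential(10.0, 200) + 5.0             # illustrative peak intensities

fits = {k: exponential_qq_fit(peaks, k) for k in range(20, 150)}
best_k = min(fits, key=lambda k: fits[k][2])         # threshold rank minimising the MSE
slope, intercept, _ = fits[best_k]
print(f"selected threshold rank: {best_k}, scale (slope): {slope:.2f}")
```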


2017 ◽  
Vol 137 ◽  
pp. 138-150 ◽  
Author(s):  
F. Silva-González ◽  
E. Heredia-Zavoni ◽  
G. Inda-Sarmiento

2021 ◽  
Vol 25 (1) ◽  
pp. 273-290
Author(s):  
Torben Schmith ◽  
Peter Thejll ◽  
Peter Berg ◽  
Fredrik Boberg ◽  
Ole Bøssing Christensen ◽  
...  

Abstract. Severe precipitation events occur rarely and are often localised in space and of short duration, but they are important for societal management of infrastructure. Therefore, there is a demand for estimating future changes in the statistics of the occurrence of these rare events. These are often projected using data from regional climate model (RCM) simulations combined with extreme value analysis to obtain selected return levels of precipitation intensity. However, due to imperfections in the formulation of the physical parameterisations in the RCMs, the simulated present-day climate usually has biases relative to observations; these biases can be in the mean and/or in the higher moments. Therefore, the RCM results are adjusted to account for these deficiencies. However, this does not guarantee that the adjusted projected results will match the future reality better, since the bias may not be stationary in a changing climate. In the present work, we evaluate different adjustment techniques in a changing climate. This is done in an inter-model cross-validation set-up in which each model simulation, in turn, serves as the pseudo-observations against which the remaining model simulations are adjusted and validated. The study uses hourly data from historical and RCP8.5 scenario runs from 19 model simulations from the EURO-CORDEX ensemble at a 0.11° resolution. Fields of return levels for selected return periods are calculated for hourly and daily timescales based on 25-year-long time slices representing the present day (1981–2005) and the end of the 21st century (2075–2099). The adjustment techniques applied to the return levels are based on extreme value analysis and include climate factor and quantile-mapping approaches. Generally, we find that future return levels can be improved by adjustment, compared to obtaining them from raw scenario model data. The performance of the different methods depends on the timescale considered. On hourly timescales, the climate factor approach performs better than the quantile-mapping approaches. On daily timescales, the superior approach is to simply deduce future return levels from pseudo-observations, and the second-best choice is using the quantile-mapping approaches. These results are found in all European subregions considered. Applying the inter-model cross-validation against model ensemble medians instead of individual models does not change the overall conclusions much.
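A minimal sketch of a quantile-mapping adjustment in this spirit, using a simple empirical present-day model-to-observation mapping before refitting the extremes; the paper's analytical, extreme-value-based quantile matching is more involved, and all data below are placeholders.

```python
# Empirical quantile mapping of simulated future values, then a GEV refit.
import numpy as np
from scipy.stats import genextreme

def quantile_map(values, model_present, obs_present):
    """Map `values` from the model scale to the observed scale via empirical quantiles."""
    probs = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_present, probs)
    obs_q = np.quantile(obs_present, probs)
    return np.interp(values, model_q, obs_q)   # linear interpolation between mapped quantiles

rng = np.random.default_rng(7)
obs_present = rng.gumbel(20.0, 5.0, 25)        # pseudo-observations (illustrative)
rcm_present = rng.gumbel(25.0, 6.0, 25)
rcm_future  = rng.gumbel(30.0, 7.0, 25)

adjusted_future = quantile_map(rcm_future, rcm_present, obs_present)
c, loc, scale = genextreme.fit(adjusted_future)
print("adjusted 20-year return level:", genextreme.isf(1.0 / 20.0, c, loc, scale))
```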


2019 ◽  
Vol 276 ◽  
pp. 04006
Author(s):  
Md Ashraful Alam ◽  
Craig Farnham ◽  
Kazuo Emura

In Bangladesh, major floods are frequent due to its unique geographic location. About one-fourth to one-third of the country is inundated by overflowing rivers during the monsoon season almost every year. Calculating the risk level of river discharge is important for making plans to protect the ecosystem and to increase crop and fish production. In recent years, several Bayesian Markov chain Monte Carlo (MCMC) methods have been proposed in extreme value analysis (EVA) for assessing the flood risk at a given location. The Hamiltonian Monte Carlo (HMC) method was employed to obtain approximations to the posterior marginal distribution of the Generalized Extreme Value (GEV) model using annual maximum discharges in two major river basins in Bangladesh. The discharge records of the two largest branches of the Ganges–Brahmaputra–Meghna river system in Bangladesh for the past 42 years were analysed. To estimate flood risk, return levels with 95% confidence intervals (CI) were also calculated. The results show that the shape parameter at each station was greater than zero, indicating heavy-tailed Fréchet cases. At Bahadurabad station, in the Brahmaputra river basin, the 100-year return level was estimated at 141,387 m³ s⁻¹ with a 95% CI of [112,636, 170,138], and the 1000-year return level was 195,018 m³ s⁻¹ with a 95% CI of [122,493, 267,544]. At Hardinge Bridge station, in the Ganges basin, the 100-year return level was estimated at 124,134 m³ s⁻¹ with a 95% CI of [108,726, 139,543], and the 1000-year return level was 170,537 m³ s⁻¹ with a 95% CI of [133,784, 207,289]. As Bangladesh is a flood-prone country, the Bayesian approach with HMC in EVA can help policy-makers plan initiatives that could prevent damage to both lives and assets.
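Assuming posterior draws of the GEV parameters are available from an HMC sampler, return levels and their 95% intervals can be obtained as in the sketch below; the posterior arrays are hypothetical placeholders, not the paper's results.

```python
# Turning posterior GEV parameter draws into return-level estimates with intervals.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
# Hypothetical posterior draws for (shape xi, location mu, scale sigma)
xi    = rng.normal(0.15, 0.05, 4000)
mu    = rng.normal(60000.0, 2000.0, 4000)
sigma = rng.normal(15000.0, 1500.0, 4000)

def return_level(T, xi, mu, sigma):
    """GEV return level for return period T (scipy's shape c = -xi)."""
    return genextreme.isf(1.0 / T, -xi, loc=mu, scale=sigma)

for T in (100, 1000):
    rl = return_level(T, xi, mu, sigma)              # one return level per posterior draw
    lo, med, hi = np.percentile(rl, [2.5, 50, 97.5])
    print(f"{T}-year level: {med:,.0f} m3/s, 95% interval [{lo:,.0f}, {hi:,.0f}]")
```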


Atmosphere ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 1273
Author(s):  
Tosiyuki Nakaegawa ◽  
Takuro Kobashi ◽  
Hirotaka Kamahori

Extreme precipitation is no longer stationary under a changing climate due to the increase in greenhouse gas emissions. Nonstationarity must be considered when realistically estimating the amount of extreme precipitation for future prevention and mitigation. Extreme precipitation with a certain return level is usually estimated using extreme value analysis under a stationary climate assumption, without evidence. In this study, the characteristics of extreme value statistics of annual maximum monthly precipitation in East Asia were evaluated using a nonstationary historical climate simulation with an Earth system model of intermediate complexity, capable of long-term integration over 12,000 years (i.e., the Holocene). The climatological means of the annual maximum monthly precipitation for each 100-year interval formed a nonstationary time series, and the ratios of the largest annual maximum monthly precipitation to the climatological mean formed a nonstationary time series with large spike variations. The extreme value analysis revealed that the annual maximum monthly precipitation with a return level of 100 years, estimated for each 100-year interval, also formed a nonstationary time series that was normally distributed and not autocorrelated, even with the preceding and following 100-year intervals (lag 1). Wavelet analysis of this time series showed that significant periodicity was detected only in confined areas of the time–frequency space.
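A minimal sketch of the windowed analysis described above, assuming a 12,000-value series of annual maximum monthly precipitation split into non-overlapping 100-year intervals with a GEV fit per interval; the placeholder series stands in for the Earth system model output.

```python
# Per-interval GEV fits yielding a time series of 100-year return levels.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)
annual_max = rng.gumbel(150.0, 40.0, 12000)      # placeholder for the Holocene simulation

rl_100 = []
for window in annual_max.reshape(-1, 100):       # 120 non-overlapping 100-year intervals
    c, loc, scale = genextreme.fit(window)
    rl_100.append(genextreme.isf(1.0 / 100.0, c, loc, scale))
rl_100 = np.array(rl_100)                        # return-level series to inspect for nonstationarity

# Lag-1 autocorrelation of the return-level series
lag1 = np.corrcoef(rl_100[:-1], rl_100[1:])[0, 1]
print(f"lag-1 autocorrelation: {lag1:.2f}")
```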


2020 ◽  
Author(s):  
Torben Schmith ◽  
Peter Thejll ◽  
Fredrik Boberg ◽  
Peter Berg ◽  
Ole Bøssing Christensen ◽  
...  

Severe precipitation events occur rarely and are often localized in space and of short duration, but are important for societal management of infrastructure such as sewage systems, metros, etc. Therefore, there is a demand for estimating expected future changes in the statistics of these rare events. These are usually projected using RCM scenario runs combined with extreme value analysis to obtain selected return levels of precipitation intensity. However, due to RCM imperfections, the modelled present-day climate usually has errors relative to observations. Therefore, the RCM results are 'error corrected' to match observations more closely in order to increase the reliability of the results.

In the present work we evaluate different error correction techniques and compare them with non-corrected projections. This is done in an inter-model cross-validation setup, in which each model in turn plays the role of observations, against which the remaining error-corrected models are validated. The study uses hourly data (historical and RCP8.5 late 21st century) from 13 models of the EURO-CORDEX ensemble at 0.11 degree resolution (about 12.5 km), from which fields of selected return levels are extracted for 1 h and 24 h duration. The error correction techniques applied to the return levels are based on extreme value analysis and include analytical quantile-quantile matching together with a simpler climate factor approach.

The study identifies regions where the error correction techniques perform differently, and therefore contributes to guidelines on how and where to apply calibration techniques when projecting extreme return levels.
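The inter-model cross-validation set-up can be sketched as follows, with each model in turn serving as the observations against which the remaining models are adjusted and scored; the `adjust` function and error metric are simplified placeholders, not the study's procedure.

```python
# Leave-one-model-out cross-validation of an error-correction step.
import numpy as np

def adjust(model_future, model_present, pseudo_obs_present):
    """Placeholder climate-factor adjustment of a future return level."""
    return pseudo_obs_present * model_future / model_present

rng = np.random.default_rng(13)
n_models = 13
present = rng.normal(30.0, 5.0, n_models)    # present-day return levels per model (illustrative)
future  = present + rng.normal(8.0, 3.0, n_models)

errors = []
for i in range(n_models):                    # model i plays the role of observations
    truth = future[i]
    others = [j for j in range(n_models) if j != i]
    adjusted = [adjust(future[j], present[j], present[i]) for j in others]
    errors.append(np.mean(np.abs(np.array(adjusted) - truth)))
print(f"cross-validated mean absolute error: {np.mean(errors):.2f}")
```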


2018 ◽  
Vol 150 ◽  
pp. 05025
Author(s):  
Nor Azrita Mohd Amin ◽  
Siti Aisyah Zakaria

The main concern in environmental issues is extreme (catastrophic) phenomena rather than common events. However, most statistical approaches are concerned primarily with the centre of a distribution, or with the average value, rather than with the tail of the distribution, which contains the extreme observations. Extreme value theory focuses attention on the tails of the distribution, where standard models prove unreliable for analysing extreme series. High levels of particulate matter (PM10) are a common environmental problem causing various impacts on human health and material damage. If the main concern is extreme events, then extreme value analysis provides the best results with significant evidence. The monthly average and monthly maximum PM10 data for Perlis from 2003 to 2014 were analysed. Forecasts of the average data are made with the Holt-Winters method, while the return level gives the predicted value of extreme events that occur on average once in a given period. Forecasting the average data from January 2015 to December 2016 gives a highest forecast value of 58.18 (standard deviation 18.45) in February 2016, while the return level reaches 253.76 units for a 24-month (2015–2016) return period.
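A sketch of the two analyses named above, assuming a Holt-Winters (additive) forecast of the monthly averages with statsmodels and a GEV-based return level for the monthly maxima; the synthetic series and parameters are illustrative, not the Perlis data.

```python
# Holt-Winters forecast of monthly-average PM10 plus a return level from monthly maxima.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from scipy.stats import genextreme

rng = np.random.default_rng(17)
idx = pd.date_range("2003-01", periods=144, freq="MS")          # 2003-2014, monthly
monthly_mean = pd.Series(40 + 10 * np.sin(2 * np.pi * idx.month / 12)
                         + rng.normal(0, 5, 144), index=idx)     # synthetic monthly averages
monthly_max = monthly_mean + rng.gumbel(30, 15, 144)             # synthetic monthly maxima

# Holt-Winters (additive trend and seasonality) forecast 24 months ahead
hw = ExponentialSmoothing(monthly_mean, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
forecast = hw.forecast(24)                                       # Jan 2015 - Dec 2016

# Return level for a 24-month return period from the monthly maxima
c, loc, scale = genextreme.fit(monthly_max)
rl_24 = genextreme.isf(1.0 / 24.0, c, loc, scale)
print(f"highest forecast: {forecast.max():.2f}, 24-month return level: {rl_24:.2f}")
```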

