Extreme Value Analysis of Tropical Cyclone Trapped-Fetch Waves

2007 ◽  
Vol 46 (10) ◽  
pp. 1501-1522 ◽  
Author(s):  
Allan W. MacAfee ◽  
Samuel W. K. Wong

Abstract Many of the extreme ocean wave events generated by tropical cyclones (TCs) can be explained by examining one component of the spectral wave field, the trapped-fetch wave (TFW). Using a Lagrangian TFW model, a parametric model representation of the local TC wind fields, and the National Hurricane Center’s hurricane database archive, a dataset of TFWs was created from all TCs in the Atlantic Ocean, Gulf of Mexico, and Caribbean Sea from 1851 to 2005. The wave height at each hourly position along a TFW trajectory was sorted into 2° × 2° latitude–longitude grid squares. Five grid squares (north of Hispaniola, Gulf of Mexico, Carolina coast, south of Nova Scotia, and south of Newfoundland) were used to determine whether extreme value theory could be applied to the extremes in the TFW dataset. The statistical results justify accepting that a generalized Pareto distribution (GPD) model with a threshold of 6 m could be fitted to the data: the datasets were mostly modeled adequately, and much of the output information was useful. Additional tests were performed by sorting the TFW data into the marine areas in Atlantic Canada, which are of particular interest to the Meteorological Service of Canada because of the high ocean traffic, offshore drilling activities, and commercial fishery. GPD models were fitted, and return periods and the 95% confidence intervals (CIs) for 10-, 15-, and 20-m return levels were computed. The results further justified the use of the GPD model; hence, extension to the remaining grid squares was warranted. Of the 607 grid squares successfully modeled, the percentages of grid squares with finite lower (upper) values for the 10-, 15-, and 20-m return level CIs were 100 (80), 94 (53), and 90 (16), respectively. The lower success rate of 20-m TFW CIs was expected, given the rarity of 20-m TFWs: of the 5 713 625 hourly TFW points, only 13 958, or 0.24%, were 20 m or higher. Overall, the distribution of the successfully modeled grid squares in the data domain agreed with TFW theory and TC climatology. As a direct result of this study, the summary datasets and return level plots were integrated into application software for use by risk managers. A description of the applications illustrates their use in addressing various questions on extreme TFWs.
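
As a minimal sketch of the kind of peaks-over-threshold analysis described above (not the study's code: the data are synthetic stand-ins and the standard Poisson–GPD return-level/return-period formulas are assumed), a GPD can be fitted to wave heights above the 6 m threshold and used to compute return levels and the return period of a given wave height:

```python
# Hedged sketch: GPD fit over a 6 m threshold and return level / return period
# estimates under the usual Poisson-GPD model.  Data and record length are
# illustrative assumptions, not the TFW dataset.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
hs = rng.weibull(1.5, size=50_000) * 4.0      # stand-in for hourly TFW heights (m)
years = 155                                    # e.g. 1851-2005
threshold = 6.0                                # threshold used in the study

exceed = hs[hs > threshold] - threshold
xi, _, sigma = genpareto.fit(exceed, floc=0)   # shape, (loc fixed at 0), scale
rate = len(exceed) / years                     # mean exceedances per year

def return_level(n_years):
    """N-year return level under the fitted GPD/Poisson model."""
    m = rate * n_years
    if abs(xi) < 1e-6:
        return threshold + sigma * np.log(m)
    return threshold + (sigma / xi) * (m**xi - 1.0)

def return_period(level):
    """Return period (years) of a given wave height level."""
    base = 1.0 + xi * (level - threshold) / sigma
    if base <= 0:
        return np.inf                          # level beyond the fitted upper bound
    return 1.0 / (rate * base ** (-1.0 / xi))

for n in (10, 50, 100):
    print(f"{n}-year return level: {return_level(n):.2f} m")
for level in (10.0, 15.0, 20.0):
    print(f"return period of {level:.0f} m waves: {return_period(level):.1f} years")
```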

2015 ◽  
Vol 76 (1) ◽  
Author(s):  
Nor Azrita Mohd Amin ◽  
Mohd Bakri Adam ◽  
Ahmad Zaharin Aris

Extreme value theory is a well-known statistical approach for modeling extreme data in environmental management. The main focus here is to compare the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD) for modeling extreme data in terms of estimated parameters and return levels. The daily maximum PM10 data for the Johor Bahru monitoring station, based on a 14-year database (1997-2010), were analyzed. It is found that the estimated parameters are more comparable when the numbers of extreme observations extracted for the two models are similar. The 10-year return value for the GEV is  while that for the GPD is . Based on the threshold choice plot, a threshold of  is chosen and the corresponding 10-year return level is . According to the air pollution index in Malaysia, this value is categorized as hazardous.
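
A hedged sketch of this GEV-versus-GPD comparison is given below. It is not the authors' computation: the PM10 series is synthetic, the block size (annual maxima) and threshold (95th percentile) are assumptions made for illustration, and only the 10-year return level is compared.

```python
# Hedged sketch: compare GEV block-maxima and GPD peaks-over-threshold
# estimates of a 10-year return level for daily maximum PM10.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(1)
n_years = 14
daily = rng.gamma(shape=4.0, scale=12.0, size=n_years * 365)  # stand-in PM10 (ug/m3)

# --- GEV on annual maxima -------------------------------------------------
annual_max = daily.reshape(n_years, 365).max(axis=1)
c_gev, loc, scale = genextreme.fit(annual_max)        # note: scipy's c = -xi
rl_gev = genextreme.ppf(1 - 1 / 10, c_gev, loc, scale)

# --- GPD on exceedances over a high threshold -----------------------------
u = np.quantile(daily, 0.95)                          # threshold choice is illustrative
exc = daily[daily > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0)
rate = len(exc) / n_years                             # exceedances per year
rl_gpd = u + (sigma / xi) * ((rate * 10) ** xi - 1.0)

print(f"10-year return level, GEV block maxima: {rl_gev:.1f}")
print(f"10-year return level, GPD above u={u:.1f}: {rl_gpd:.1f}")
```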


2021 ◽  
Author(s):  
Anne Dutfoy ◽  
Gloria Senfaute

Abstract Probabilistic Seismic Hazard Analysis (PSHA) procedures require that at least the mean activity rate be known, as well as the distribution of magnitudes. Under the Gutenberg-Richter assumption, that distribution is an exponential distribution truncated above at a maximum possible magnitude denoted $m_{max}$. This parameter is often fixed by expert judgement based on tectonic considerations, for lack of a universal method. In this paper, we propose two innovative alternatives to the Gutenberg-Richter model, based on extreme value theory, that do not require fixing the value of $m_{max}$ a priori: the first models the tail of the magnitude distribution with a Generalized Pareto Distribution; the second is a variation on the usual Gutenberg-Richter model in which $m_{max}$ is a random variable that follows a distribution defined from an extreme value analysis. We use maximum likelihood estimators that take into account the unequal observation spans depending on magnitude, the incompleteness threshold of the catalog, and the uncertainty in the magnitude values themselves. We apply these new recurrence models to the data observed in the Alps region, in the south of France, and integrate them into a probabilistic seismic hazard calculation to evaluate their impact on the seismic hazard levels. The proposed new recurrence models yield a reduction of the seismic hazard level compared to the common Gutenberg-Richter model conventionally used for PSHA calculations. This decrease is significant for all frequencies below 10 Hz, mainly at the lowest frequencies and for very long return periods. To our knowledge, both new models have never been used in a probabilistic seismic hazard calculation and constitute a promising new generation of recurrence models.
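
A minimal sketch of the first alternative (GPD tail for magnitudes) is shown below. It is only an illustration of the idea and ignores several features the paper handles explicitly (unequal observation spans, magnitude uncertainty); the catalogue is synthetic and the completeness threshold and b-value are assumptions.

```python
# Hedged sketch: fit a GPD to magnitudes above a completeness threshold.
# A negative fitted shape implies a finite upper endpoint that plays the
# role of m_max without fixing it a priori.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
mc = 3.0                                   # assumed completeness magnitude
b = 1.0                                    # assumed Gutenberg-Richter b-value
beta = b * np.log(10)

# synthetic truncated-exponential (Gutenberg-Richter) catalogue, true m_max = 7.0
u01 = rng.uniform(size=5_000)
mags = mc - np.log(1 - u01 * (1 - np.exp(-beta * (7.0 - mc)))) / beta

xi, _, sigma = genpareto.fit(mags - mc, floc=0)
print(f"shape xi = {xi:.3f}, scale sigma = {sigma:.3f}")
if xi < 0:
    print(f"implied upper magnitude bound: {mc - sigma / xi:.2f}")
else:
    print("non-negative shape: no finite upper bound implied")
```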


2018 ◽  
Vol 150 ◽  
pp. 05025
Author(s):  
Nor Azrita Mohd Amin ◽  
Siti Aisyah Zakaria

The main concern in environmental issues is extreme (catastrophic) phenomena rather than common events. However, most statistical approaches are concerned primarily with the centre of a distribution, or the average value, rather than the tail of the distribution, which contains the extreme observations. Extreme value theory focuses attention on the tails of the distribution, where standard models prove unreliable for analysing extreme series. A high level of particulate matter (PM10) is a common environmental problem that harms human health and causes material damage. When the main concern is extreme events, extreme value analysis provides the most informative results. The monthly average and monthly maximum PM10 data for Perlis from 2003 to 2014 were analysed. Forecasts of the average data were made with the Holt-Winters method, while the return level gives the predicted value of an extreme event that occurs on average once in a certain period. Forecasting the average data from January 2015 to December 2016 found that the highest forecast value is 58.18 (standard deviation 18.45), in February 2016, while the return level reached 253.76 units for the 24-month (2015-2016) return period.
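
The two-part workflow (Holt-Winters forecast of the monthly averages plus a return level for the monthly maxima) can be sketched as follows. This is not the authors' code: the monthly series is synthetic, and the choice of an additive Holt-Winters model and a GEV fit to monthly maxima are assumptions made for illustration.

```python
# Hedged sketch: Holt-Winters forecast of monthly-average PM10 and a
# GEV-based 24-month return level for the monthly maxima.
import numpy as np
import pandas as pd
from scipy.stats import genextreme
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
idx = pd.date_range("2003-01", periods=144, freq="MS")        # 2003-2014, monthly
season = 10 * np.sin(2 * np.pi * np.arange(144) / 12)
monthly_avg = pd.Series(45 + season + rng.normal(0, 5, 144), index=idx)
monthly_max = monthly_avg + rng.gamma(2.0, 15.0, 144)          # stand-in monthly maxima

# Holt-Winters (additive trend and seasonality) forecast for 2015-2016
hw = ExponentialSmoothing(monthly_avg, trend="add",
                          seasonal="add", seasonal_periods=12).fit()
forecast = hw.forecast(24)
print("highest forecast value:", round(forecast.max(), 2))

# GEV fit to monthly maxima; 24-month return level
c, loc, scale = genextreme.fit(monthly_max.values)
rl_24 = genextreme.ppf(1 - 1 / 24, c, loc, scale)
print("24-month return level:", round(rl_24, 2))
```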


2019 ◽  
Vol 8 (4) ◽  
pp. 1
Author(s):  
Queensley C. Chukwudum

Reinsurance is of utmost importance to insurers because it enables insurance companies to cover risks that, under normal circumstances, they would not be able to cover on their own. An insurer needs to be able to evaluate its solvency probability and, consequently, adjust its retention levels appropriately, because the retention level plays a vital role in determining the premiums paid to the reinsurer. To illustrate how extreme value theory can be applied, this study models the probabilistic behaviour of the frequency and severity of large motor claims from the Nigerian insurance sector (2013-2016) using the Negative Binomial-Generalized Pareto distribution (NB-GPD). The annual loss distribution is simulated using the Monte Carlo method and used to predict the expected annual total claims and to estimate the capital requirement for a year. Pricing of excess-of-loss (XL) reinsurance is also examined to help insurers optimize their risk management decisions with regard to the choice of their risk transfer position.
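
A hedged sketch of the NB-GPD Monte Carlo step is given below. The parameters, threshold and per-claim XL layer are placeholders, not values fitted to the Nigerian data, and the 99.5% VaR capital rule is only one common convention.

```python
# Hedged sketch: simulate the annual aggregate loss under a Negative Binomial
# frequency / GPD severity model and estimate the expected annual total claims,
# a capital requirement, and the expected cost of a per-claim XL layer.
import numpy as np
from scipy.stats import nbinom, genpareto

rng = np.random.default_rng(4)
n_sim = 20_000

# assumed NB-GPD parameters (placeholders)
r, p = 20, 0.4                  # Negative Binomial claim frequency
xi, sigma, u = 0.3, 5e5, 1e6    # GPD severity of large claims above threshold u
retention, limit = 3e6, 5e6     # per-claim XL layer: 'limit xs retention'

n_claims = nbinom.rvs(r, p, size=n_sim, random_state=rng)
annual_loss = np.empty(n_sim)
layer_loss = np.empty(n_sim)
for i, n in enumerate(n_claims):
    claims = u + genpareto.rvs(xi, scale=sigma, size=n, random_state=rng)
    annual_loss[i] = claims.sum()
    layer_loss[i] = np.clip(claims - retention, 0, limit).sum()

print(f"expected annual total claims: {annual_loss.mean():,.0f}")
print(f"capital requirement (99.5% VaR): {np.quantile(annual_loss, 0.995):,.0f}")
print(f"expected XL layer loss ({limit:,.0f} xs {retention:,.0f}): {layer_loss.mean():,.0f}")
```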


2021 ◽  
Author(s):  
Jeremy Rohmer ◽  
Rodrigo Pedreros ◽  
Yann Krien

To estimate return levels of wave heights (Hs) induced by tropical cyclones at the coast, a commonly used approach is to (1) randomly generate a large number of synthetic cyclone events (typically >1,000); (2) numerically simulate the corresponding Hs over the whole domain of interest; (3) extract the Hs values at the desired location at the coast; and (4) perform a local extreme value analysis (EVA) to derive the corresponding return level. Step 2 is, however, very constraining because it often involves a numerical hydrodynamic simulator that can be prohibitive to run: this may limit the number of results available for the local EVA (typically to several hundred). In this communication, we propose a spatial stochastic simulation procedure to increase the size of the database of numerical results with stochastically generated synthetic Hs maps. To do so, we rely on a data-driven dimensionality-reduction method, either unsupervised (Principal Component Analysis) or supervised (Partial Least Squares Regression), trained with a limited number of pre-existing numerically simulated Hs maps. The procedure is applied to Guadeloupe island, and the results are compared to the commonly used approach applied to a large database of Hs values computed for nearly 2,000 synthetic cyclones (representative of 3,200 years; Krien et al., NHESS, 2015). When using only a hundred cyclones, we show that the 100-year return levels can be estimated with a mean absolute percentage error (derived from a bootstrap-based procedure) ranging between 5 and 15% around the coasts, while keeping the width of the 95% confidence interval of the same order of magnitude as that obtained with the full database. Without the synthetic Hs map augmentation, the error and confidence interval width both increase by nearly 100%. Careful attention is paid to the tuning of the approach by testing the sensitivity to the spatial domain size, the information loss due to data compression, and the number of cyclones. This study was carried out within the Carib-Coast INTERREG project (https://www.interreg-caraibes.fr/carib-coast).
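
Below is a minimal sketch of the unsupervised (PCA) variant of the map-augmentation idea. The Hs maps are synthetic placeholders, and the Gaussian resampling of the component scores is an assumption made for illustration; the authors' stochastic generator and the supervised (PLS) variant may differ.

```python
# Hedged sketch: compress a limited set of simulated Hs maps with PCA, draw new
# component-score vectors from a fitted multivariate normal, and reconstruct
# synthetic Hs maps to enlarge the database used for the local EVA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_maps, ny, nx = 100, 40, 60                        # e.g. 100 simulated cyclones
maps = rng.gamma(2.0, 2.0, size=(n_maps, ny * nx))  # flattened stand-in Hs maps (m)

pca = PCA(n_components=0.99)                        # keep 99% of the variance
scores = pca.fit_transform(maps)

# draw new score vectors and map them back to physical space
mean = scores.mean(axis=0)
cov = np.cov(scores, rowvar=False)
new_scores = rng.multivariate_normal(mean, cov, size=1000)
synthetic_maps = pca.inverse_transform(new_scores).reshape(-1, ny, nx)

print("components kept:", pca.n_components_)
print("synthetic maps:", synthetic_maps.shape)      # (1000, 40, 60)
```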


Author(s):  
Lu Deng ◽  
Zhengjun Zhang

Extreme smog can have potentially harmful effects on human health, the economy and daily life. However, average (mean) values do not provide strategically useful information for the hazard analysis and control of extreme smog. This article investigates China's smog extremes by applying extreme value analysis to hourly PM2.5 data from 2014 to 2016 obtained from monitoring stations across China. By fitting a generalized extreme value (GEV) distribution to exceedances over a station-specific extreme smog level at each monitoring location, all study stations are grouped into eight categories based on the estimated mean and shape parameter values of the fitted GEV distributions. The extreme features characterized by the mean of the fitted extreme value distribution, the maximum frequency and the tail index of extreme smog at each location are analysed. These features can provide useful information for central and local government to apply differentiated treatments to cities in different categories and to set similar prevention goals and control strategies for cities belonging to the same category across a range of areas. Furthermore, hazardous hours, breaking probability and the 1-year return level of each station are presented by category, based on which future control and reduction targets for extreme smog are proposed for the cities of Beijing, Tianjin and Hebei as an example.
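
A simplified, hedged sketch of the per-station fitting and grouping idea follows. The station samples are synthetic, the mean cut-off and shape-parameter bins are illustrative assumptions, and the study's actual extreme-level selection and eight-category scheme are not reproduced here.

```python
# Hedged sketch: fit a GEV to each station's extreme sample and group stations
# by the fitted mean and the sign/size of the shape parameter (tail index).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
stations = {f"station_{i}": rng.gumbel(150 + 20 * i, 30, size=500) for i in range(6)}

categories = {}
for name, sample in stations.items():
    c, loc, scale = genextreme.fit(sample)
    xi = -c                                   # scipy's c is the negative of xi
    mean = genextreme.mean(c, loc=loc, scale=scale)
    level = "high" if mean > 250 else "low"   # illustrative mean cut-off
    tail = "heavy" if xi > 0.05 else ("bounded" if xi < -0.05 else "light")
    categories[name] = (level, tail)

for name, cat in categories.items():
    print(name, cat)
```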


2019 ◽  
Vol 276 ◽  
pp. 04006
Author(s):  
Md Ashraful Alam ◽  
Craig Farnham ◽  
Kazuo Emura

In Bangladesh, major floods are frequent due to its unique geographic location. About one-fourth to one-third of the country is inundated by overflowing rivers during the monsoon season almost every year. Calculating the risk level of river discharge is important for making plans to protect the ecosystem and to increase crop and fish production. In recent years, several Bayesian Markov chain Monte Carlo (MCMC) methods have been proposed in extreme value analysis (EVA) for assessing flood risk at a given location. The Hamiltonian Monte Carlo (HMC) method was employed to approximate the posterior marginal distributions of the Generalized Extreme Value (GEV) model using annual maximum discharges in two major river basins in Bangladesh. Discharge records of the two largest branches of the Ganges-Brahmaputra-Meghna river system in Bangladesh for the past 42 years were analysed. To estimate flood risk, return levels with 95% confidence intervals (CI) were also calculated. Results show that the shape parameter at each station was greater than zero, indicating heavy-tailed Fréchet cases. For the Bahadurabad station, in the Brahmaputra river basin, the estimated 100-year return level was 141,387 m3 s-1 with a 95% CI of [112,636, 170,138], and the 1000-year return level was 195,018 m3 s-1 with a 95% CI of [122,493, 267,544]. For the Hardinge Bridge station, in the Ganges basin, the estimated 100-year return level was 124,134 m3 s-1 with a 95% CI of [108,726, 139,543], and the 1000-year return level was 170,537 m3 s-1 with a 95% CI of [133,784, 207,289]. As Bangladesh is a flood-prone country, the Bayesian approach with HMC in EVA can help policy-makers plan initiatives that could prevent damage to both lives and assets.
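
The post-processing step of such a Bayesian analysis can be sketched as below: given posterior draws of the GEV parameters (assumed to come from an HMC sampler run elsewhere, e.g. in Stan or PyMC), the return level for any return period and its credible interval follow directly. The placeholder posterior draws here are synthetic, not the paper's results.

```python
# Hedged sketch of the post-processing only: turn posterior draws of the GEV
# parameters (mu, sigma, xi) into a 100-year return level with a 95% interval.
import numpy as np

rng = np.random.default_rng(7)
# placeholder posterior draws; in practice these come from the HMC sampler
mu = rng.normal(65_000, 2_000, size=4_000)
sigma = rng.normal(15_000, 1_000, size=4_000)
xi = rng.normal(0.15, 0.05, size=4_000)

def gev_return_level(T, mu, sigma, xi):
    """Return level with return period T years for GEV(mu, sigma, xi)."""
    y = -np.log(1 - 1 / T)
    return np.where(np.abs(xi) < 1e-6,
                    mu - sigma * np.log(y),
                    mu + sigma / xi * (y ** (-xi) - 1.0))

rl100 = gev_return_level(100, mu, sigma, xi)
lo, hi = np.percentile(rl100, [2.5, 97.5])
print(f"100-year return level: {rl100.mean():,.0f} m3/s, 95% CI [{lo:,.0f}, {hi:,.0f}]")
```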


Atmosphere ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 1273
Author(s):  
Tosiyuki Nakaegawa ◽  
Takuro Kobashi ◽  
Hirotaka Kamahori

Extreme precipitation is no longer stationary under a changing climate due to the increase in greenhouse gas emissions. Nonstationarity must be considered when realistically estimating the amount of extreme precipitation for future prevention and mitigation. Extreme precipitation with a certain return level is usually estimated using extreme value analysis under a stationary climate assumption, often without evidence for that assumption. In this study, the characteristics of extreme value statistics of annual maximum monthly precipitation in East Asia were evaluated using a nonstationary historical climate simulation with an Earth system model of intermediate complexity, capable of long-term integration over 12,000 years (i.e., the Holocene). The climatological means of the annual maximum monthly precipitation for each 100-year interval formed nonstationary time series, and the ratios of the largest annual maximum monthly precipitation to the climatological mean formed nonstationary time series with large spike variations. The extreme value analysis revealed that the annual maximum monthly precipitation with a return level of 100 years, estimated for each 100-year interval, also formed a nonstationary time series, which was normally distributed and not autocorrelated even with the preceding and following 100-year intervals (lag 1). Wavelet analysis of this time series showed that significant periodicity was only detected in confined areas of the time–frequency space.
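
The interval-by-interval estimation can be illustrated with the sketch below: a GEV is fitted to each non-overlapping 100-year block of a (here synthetic) annual-maximum series and the resulting 100-year return levels form the time series analysed above. The data and block handling are assumptions for illustration only.

```python
# Hedged sketch: per-interval GEV fits and the time series of 100-year return
# levels, plus its lag-1 autocorrelation.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)
n_years = 12_000
annual_max = rng.gumbel(200, 40, size=n_years)     # stand-in annual max monthly precip (mm)

rl_series = []
for block in annual_max.reshape(-1, 100):          # 120 non-overlapping 100-yr intervals
    c, loc, scale = genextreme.fit(block)
    rl_series.append(genextreme.ppf(1 - 1 / 100, c, loc, scale))
rl_series = np.array(rl_series)

r1 = np.corrcoef(rl_series[:-1], rl_series[1:])[0, 1]
print(f"mean 100-yr return level: {rl_series.mean():.1f} mm, lag-1 autocorr: {r1:.2f}")
```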


2021 ◽  
Author(s):  
Katharina Klehmet ◽  
Peter Berg ◽  
Denica Bozhinova ◽  
Louise Crochemore ◽  
Ilias Pechlivanidis ◽  
...  

Robust information on hydrometeorological extremes is important for effective risk management, mitigation and adaptation measures by public authorities, civil engineers and others dealing, for example, with water management. Typically, return values of certain variables, such as extreme precipitation and river discharge, are of particular interest and are modelled statistically using Extreme Value Theory (EVT). However, estimates of these rare events based on extreme value analysis are affected by short observational records, leading to large uncertainties.

In order to overcome this limitation, we propose to use the latest seasonal meteorological prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF SEAS5) and seasonal hydrological forecasts generated with the pan-European E-HYPE model for the original period 1993-2015, and to extend the dataset to longer synthetic time series by pooling single forecast months into surrogate years. To ensure an independent dataset, the seasonal forecast skill is assessed in advance and months (and lead months) with positive skill are excluded. In this study, we simplify the method and work with samples of 6- and 4-month forecasts (instead of the full 7-month forecasts), depending on the statistical independence of the variables. This enables the record to be extended from the original 23 years to 3,450 and 2,300 surrogate years for the 6- and 4-month forecasts, respectively.

Furthermore, we investigate the robustness of estimated 50- and 100-year return values for extreme precipitation and river discharge using 1-year block maxima fitted to the Generalized Extreme Value distribution. Surrogate sets of pooled years are randomly constructed using a Monte Carlo approach and different sample sizes are chosen. This analysis reveals a considerable reduction in the uncertainty of all return period estimations for both variables at selected locations across Europe when using a sample size of 500 years. This highlights the potential of using ensembles of meteorological and hydrological seasonal forecasts to obtain time series of sufficient length and to minimize the uncertainty in the extreme value analysis.
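
A minimal sketch of the pooling-and-resampling idea follows. The forecast data are synthetic, the pooling of twelve independent months into each surrogate year is a simplification, and the skill screening described above is omitted; the point is only to show how the spread of the 100-year return level narrows as the surrogate sample size grows.

```python
# Hedged sketch: pool independent forecast months into surrogate years, take
# 1-year block maxima, fit a GEV, and compare the spread of the 100-year
# return level across random subsamples of different sizes.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(9)
# suppose 12,000 independent forecast "months" of daily precipitation (mm/day)
months = rng.gamma(0.8, 6.0, size=(12_000, 30))   # (months, days)

# pool 12 independent months into each surrogate year; take the annual maximum
surrogate_years = months.reshape(-1, 12, 30)      # (1000 surrogate years, 12, 30)
annual_max = surrogate_years.max(axis=(1, 2))

def rl100(sample):
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / 100, c, loc, scale)

for n in (23, 100, 500):                          # 23 ~ the original record length
    draws = [rl100(rng.choice(annual_max, size=n, replace=False)) for _ in range(200)]
    print(f"n={n:4d}: 100-yr RL spread (2.5-97.5%): "
          f"{np.percentile(draws, 2.5):.1f}-{np.percentile(draws, 97.5):.1f} mm")
```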


2020 ◽  
Author(s):  
Nikos Koutsias ◽  
Frank A. Coutelieris

A statistical analysis of the wildfire events that took place in Greece during the period 1985-2007 has been performed to assess the extremes. The total burned area of each fire was considered here as the key variable expressing the significance of a given event. The data have been analysed through extreme value theory, which has in general proved to be a powerful tool for the accurate assessment of the return period of extreme events. Both frequentist and Bayesian approaches have been used for comparison and evaluation purposes. Specifically, the Generalized Extreme Value (GEV) distribution along with the Peaks over Threshold (POT) approach have been compared with Bayesian extreme value modelling. Furthermore, the correlation of the burned area with potential extreme values of other key parameters (e.g. wind, temperature, humidity) has also been investigated.

