Frequentist and Bayesian extreme value analysis on the wildfire events in Greece

Author(s):  
Nikos Koutsias ◽  
Frank A. Coutelieris

<p>A statistical analysis of the wildfire events that took place in Greece during the period 1985-2007 has been performed to assess the extremes. The total burned area of each fire was considered here as a key variable to express the significance of a given event. The data have been analyzed through extreme value theory, which has in general proved a powerful tool for the accurate assessment of the return period of extreme events. Both frequentist and Bayesian approaches have been used for comparison and evaluation purposes. Specifically, the Generalized Extreme Value (GEV) distribution and the Peaks over Threshold (POT) approach have been compared with Bayesian extreme value modelling. Furthermore, the correlation of the burned area with the potential extreme values of other key parameters (e.g. wind, temperature, humidity) has also been investigated.</p>
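To make the frequentist side concrete, a minimal sketch of a GEV return-level calculation is given below. The annual-maximum burned areas are synthetic placeholders, not the Greek fire record, and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual-maximum burned areas (ha); synthetic, NOT the study's data
rng = np.random.default_rng(0)
annual_max_ba = genextreme.rvs(c=-0.2, loc=5000, scale=2000,
                               size=23, random_state=rng)

# Frequentist maximum-likelihood fit of the GEV (scipy's shape c equals -xi)
c, loc, scale = genextreme.fit(annual_max_ba)

def return_level(T):
    # T-year return level: the quantile exceeded on average once every T years
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
```

The Bayesian counterpart would replace the point estimates with a posterior over (c, loc, scale) and a predictive distribution of return levels.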

2021 ◽  
Author(s):  
Anne Dutfoy ◽  
Gloria Senfaute

Abstract Probabilistic Seismic Hazard Analysis (PSHA) procedures require that at least the mean activity rate be known, as well as the distribution of magnitudes. Under the Gutenberg-Richter assumption, that distribution is an Exponential distribution truncated above at a maximum possible magnitude denoted $m_{max}$. This parameter is often fixed by expert judgement based on tectonic considerations, due to the lack of a universal method. In this paper, we propose two innovative alternatives to the Gutenberg-Richter model, based on Extreme Value Theory, that do not require fixing the value of $m_{max}$ a priori: the first models the tail of the magnitude distribution with a Generalized Pareto Distribution; the second is a variation of the usual Gutenberg-Richter model in which $m_{max}$ is a random variable that follows a distribution defined from an extreme value analysis. We use maximum likelihood estimators that take into account the unequal observation spans depending on magnitude, the incompleteness threshold of the catalog, and the uncertainty in the magnitude value itself. We apply these new recurrence models to the data observed in the Alps region in the south of France, and we integrate them into a probabilistic seismic hazard calculation to evaluate their impact on the seismic hazard levels. The proposed new recurrence models yield a reduction of the seismic hazard level compared to the common Gutenberg-Richter model conventionally used for PSHA calculations. This decrease is significant for all frequencies below 10 Hz, mainly at the lowest frequencies and for very long return periods. To our knowledge, neither model has been used before in a probabilistic seismic hazard calculation, and they constitute a promising new generation of recurrence models.
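A minimal sketch of the first alternative, fitting a Generalized Pareto tail to magnitudes above the completeness threshold, could look as follows. The catalog, threshold `m0`, and annual rate `lam` are invented assumptions, not the authors' Alpine data.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
m0 = 3.0  # hypothetical completeness threshold
# Synthetic Gutenberg-Richter-like magnitudes (b ~ 1, i.e. beta = b * ln 10)
mags = m0 + rng.exponential(scale=1.0 / 2.3, size=500)

# GPD fit to the exceedances over m0 (location fixed at 0)
xi, _, sigma = genpareto.fit(mags - m0, floc=0.0)

def rate_above(m, lam=10.0):
    # Annual rate of events exceeding magnitude m, given lam events/yr above m0
    return lam * genpareto.sf(m - m0, xi, loc=0.0, scale=sigma)
```

Unlike the truncated exponential, the fitted shape `xi` lets the data decide how the tail closes, so no value of $m_{max}$ has to be fixed in advance.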


2011 ◽  
Vol 41 (9) ◽  
pp. 1836-1851 ◽  
Author(s):  
Yueyang Jiang ◽  
Qianlai Zhuang

Large fires are a major disturbance in Canadian forests and exert significant effects on both the climate system and ecosystems. During the last century, extremely large fires accounted for the majority of the Canadian burned area. By making an instantaneous change over a vast area of ecosystems, extreme fires often have significant social, economic, and ecological consequences. Since extreme values of fire size always lie in the upper tail of a cumulative probability distribution, the mean and variance alone are not sufficient to fully characterize those extreme events. To characterize large fire behavior in the upper tail, the authors of this study applied three extreme value distribution functions: (i) the generalized extreme value (GEV) distribution, (ii) the generalized Pareto distribution (GPD), and (iii) the GEV distribution with a Poisson point process (PP) representation, to fit the Canadian historical fire data of the period 1959–2010. The analysis was conducted with the whole data set and with different portions of the data set according to ignition source (lightning-caused or human-caused) and ecozone classification. It is found that (i) all three extreme statistical models perform well in characterizing extreme fire events, but the GPD and PP models need extra care to fit the nonstationary fire data, (ii) anthropogenic and natural extreme fires have significantly different extreme statistics, and (iii) fires in different ecozones exhibit very different characteristics from a statistical point of view. Further, estimated fire return levels are comparable with observations in terms of the magnitude and frequency of an extreme event. These statistics of extreme values provide valuable information for future quantification of large fire risks and for forest management in the region.
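For the GPD branch of such an analysis, the standard peaks-over-threshold return-level formula can be sketched as below; the fire sizes and record length are synthetic stand-ins for the Canadian data.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
fire_sizes = rng.pareto(1.5, size=2000) * 100.0  # hypothetical fire sizes (ha)

u = np.quantile(fire_sizes, 0.95)                # POT threshold
exceedances = fire_sizes[fire_sizes > u] - u
xi, _, sigma = genpareto.fit(exceedances, floc=0.0)

n_years = 52                                     # e.g. a 1959-2010 record
lam = exceedances.size / n_years                 # mean exceedances per year

def pot_return_level(T):
    # Standard POT formula: u + (sigma / xi) * ((lam * T)**xi - 1)
    return u + sigma / xi * ((lam * T) ** xi - 1.0)
```

The PP representation mentioned in the abstract parameterizes the same tail directly in GEV terms, which makes nonstationary covariates easier to include.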


2020 ◽  
Vol 37 (5) ◽  
pp. 873-888 ◽  
Author(s):  
Jesús Portilla-Yandún ◽  
Edwin Jácome

Abstract. An important requirement in extreme value analysis (EVA) is for the working variable to be identically distributed. However, this is typically not the case in wind waves, because energy components with different origins belong to separate data populations with different statistical properties. Although this information is available in the wave spectrum, the working variable in EVA is typically the total significant wave height Hs, a parameter that does not contain information on the spectral energy distribution and therefore does not fulfill this requirement. To gain insight into this aspect, we develop here a covariate EVA application based on spectral partitioning. We observe that in general the total Hs is inappropriate for EVA, leading to potential over- or underestimation of the projected extremes. This is illustrated with three representative cases under significantly different wave climate conditions. It is shown that the covariate analysis provides a meaningful understanding of the individual behavior of the wave components with regard to the consequences for projecting extreme values.
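A toy version of the partition-wise idea: fit a GEV separately to the annual maxima of each (now identically distributed) spectral partition rather than to the total Hs. The wind-sea and swell series below are synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# Hypothetical daily Hs (m) for two spectral partitions over 30 years
wind_sea = rng.weibull(2.0, size=(30, 365)) * 1.5
swell = rng.weibull(2.0, size=(30, 365)) * 2.5

def gev_50yr_level(annual_maxima):
    # Fit a GEV to annual maxima and return the 50-year return level
    c, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1.0 - 1.0 / 50.0, c, loc=loc, scale=scale)

# Covariate EVA: one extreme value model per homogeneous wave population
rl_wind = gev_50yr_level(wind_sea.max(axis=1))
rl_swell = gev_50yr_level(swell.max(axis=1))
```

Fitting the pooled total instead would mix the two populations and can bias the projected extremes either way, which is the over/underestimation the paper reports.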


2019 ◽  
Vol 34 (2) ◽  
pp. 200-220
Author(s):  
Jingjing Zou ◽  
Richard A. Davis ◽  
Gennady Samorodnitsky

Abstract. In this paper, we are concerned with the analysis of heavy-tailed data when a portion of the extreme values is unavailable. This research was motivated by an analysis of the degree distributions in a large social network. The degree distributions of such networks tend to exhibit power law behavior in the tails. We focus on the Hill estimator, which plays a central role in heavy-tailed modeling. The Hill estimator for these data exhibited a smooth and increasing “sample path” as a function of the number of upper order statistics used in constructing the estimator. This behavior became more apparent as we artificially removed more of the upper order statistics. Building on this observation, we introduce a new version of the Hill estimator. It is a function of the number of upper order statistics used in the estimation, but it also depends on the number of unavailable extreme values. We establish functional convergence of the normalized Hill estimator to a Gaussian process. An estimation procedure based on this limit theory is developed to estimate the number of missing extremes and the extreme value parameters, including the tail index and the bias of Hill's estimator. We illustrate how this approach works in both simulations and real data examples.
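The classic Hill estimator that this work modifies can be sketched in a few lines; the Pareto sample below is synthetic, chosen so the true tail index is known (xi = 0.5).

```python
import numpy as np

def hill_estimator(data, k):
    # Hill estimator of the tail index from the k largest order statistics:
    # mean of log-ratios of the top-k values to the (k+1)-th order statistic
    x = np.sort(np.asarray(data))[::-1]
    return np.mean(np.log(x[:k])) - np.log(x[k])

rng = np.random.default_rng(4)
# Standard Pareto sample with survival function x**(-2), i.e. tail index 0.5
sample = rng.pareto(2.0, size=10_000) + 1.0

xi_hat = hill_estimator(sample, k=500)
```

Plotting `hill_estimator(sample, k)` against k gives the usual Hill plot; the smooth, increasing shape of that path under missing extremes is the observation the authors build on.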


Author(s):  
Szilárd Bozóki ◽  
András Pataricza

Proper timeliness is vital for many real-world computing systems. Understanding the phenomena behind extreme workloads is essential because unhandled extreme workloads can cause violations of timeliness requirements, service degradation, and even downtime. Extremity can have multiple roots: (1) service requests can naturally produce extreme workloads; (2) bursts can occur randomly on a probabilistic basis in the case of a mixed workload in multiservice systems; (3) workload spikes typically happen in deadline-bound tasks.

Extreme Value Analysis (EVA) is a statistical method for modeling extremely deviant observations, i.e., the largest values. The mathematical foundation of EVA, the extreme value theorem, requires the dataset to be independent and identically distributed. However, this is generally not true in practice because real-life processes are usually a mixture of sources with identifiable patterns. For example, seasonality and periodic fluctuations are regularly occurring patterns. Deadlines can be purely periodic, e.g., monthly tax submissions, or time-variable, e.g., university homework submissions with variable semester schedules.

We propose to preprocess the data using time series decomposition to separate the stochastic process causing the extreme values. Moreover, we focus on the case where the root cause of the extreme values is the same mechanism: a deadline. We exploit known deadlines using dynamic time warping to search for recurring similar workload peak patterns that vary in time and amplitude.
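The two preprocessing ingredients, removal of a periodic pattern and dynamic time warping, can be sketched as follows. The workload trace and period are hypothetical; a real pipeline would use a full decomposition (e.g., STL) and a windowed DTW search over the trace.

```python
import numpy as np

def remove_periodic_pattern(y, period):
    # Subtract the mean profile of each phase within the period
    # (a crude stand-in for full time series decomposition)
    n_full = y.size - y.size % period
    profile = y[:n_full].reshape(-1, period).mean(axis=0)
    return y - np.tile(profile, y.size // period + 1)[:y.size]

def dtw_distance(a, b):
    # Minimal dynamic time warping distance between two 1-D sequences
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(5)
t = np.arange(24 * 60)                               # per-minute load over 24 h
seasonal = 50.0 + 30.0 * np.sin(2 * np.pi * t / 60)  # hourly fluctuation
load = seasonal + rng.exponential(5.0, size=t.size)

residual = remove_periodic_pattern(load, period=60)
```

Because DTW tolerates stretching along the time axis, peak patterns around a deadline can match even when they shift from one cycle to the next.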


2015 ◽  
Vol 60 (206) ◽  
pp. 87-116 ◽  
Author(s):  
Julija Cerovic ◽  
Vesna Karadzic

The concept of Value at Risk (VaR) estimates the maximum loss of a financial position at a given time horizon for a given probability. This paper considers the adequacy of the methods that form the basis of extreme value theory in the Montenegrin emerging market before and during the global financial crisis. In particular, the purpose of the paper is to investigate whether the peaks-over-threshold method outperforms the block maxima method in the evaluation of Value at Risk in emerging stock markets such as the Montenegrin market. The daily returns of the Montenegrin stock market index MONEX20 are analyzed for the period January 2004 to February 2014. Results of the Kupiec test show that the peaks-over-threshold method is significantly better than the block maxima method, but both methods fail the Christoffersen independence test and the joint test because the exceptions cluster when measuring Value at Risk. Although better, the peaks-over-threshold method still cannot be treated as an accurate VaR model for the Montenegrin frontier stock market.
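The Kupiec proportion-of-failures test used in this backtest can be sketched as below; the violation counts in the usage example are hypothetical, not the MONEX20 results.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, n, p):
    # Kupiec unconditional-coverage LR test: did the VaR model produce the
    # expected share p of exceptions over n observations?
    x = violations
    if x == 0:
        lr = -2.0 * n * np.log(1.0 - p)
    else:
        lr = -2.0 * ((n - x) * np.log(1.0 - p) + x * np.log(p)
                     - (n - x) * np.log(1.0 - x / n) - x * np.log(x / n))
    return lr, chi2.sf(lr, df=1)  # LR statistic and its chi-square(1) p-value
```

The Christoffersen independence test additionally checks whether exceptions arrive independently rather than in clusters, which is the criterion both methods failed here.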


2021 ◽  
Author(s):  
Katharina Klehmet ◽  
Peter Berg ◽  
Denica Bozhinova ◽  
Louise Crochemore ◽  
Ilias Pechlivanidis ◽  
...  

<p>Robust information on hydrometeorological extremes is important for effective risk management, mitigation, and adaptation measures by public authorities and civil engineers dealing, for example, with water management. Typically, return values of certain variables, such as extreme precipitation and river discharge, are of particular interest and are modelled statistically using Extreme Value Theory (EVT). However, the estimation of these rare events based on extreme value analysis is affected by short observational records, leading to large uncertainties.</p><p>In order to overcome this limitation, we propose to use the latest seasonal meteorological prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF SEAS5) and seasonal hydrological forecasts generated with the pan-European E-HYPE model for the original period 1993-2015, and to extend the dataset to longer synthetic time series by pooling single forecast months into surrogate years. To ensure an independent dataset, the seasonal forecast skill is assessed in advance, and months (and lead months) with positive skill are excluded. In this study, we simplify the method and work with samples of 6- and 4-month forecasts (instead of the full 7-month forecasts), depending on the statistical independence of the variables. This enables the record to be extended from the original 23 years to 3450 and 2300 surrogate years for the 6- and 4-month forecasts, respectively.</p><p>Furthermore, we investigate the robustness of estimated 50- and 100-year return values for extreme precipitation and river discharge using 1-year block maxima fitted to the Generalized Extreme Value distribution. Surrogate sets of pooled years are randomly constructed using a Monte Carlo approach, and different sample sizes are chosen. This analysis reveals a considerable reduction in the uncertainty of all return period estimations for both variables at selected locations across Europe when using a sample size of 500 years. This highlights the potential of using ensembles of meteorological and hydrological seasonal forecasts to obtain time series of sufficient length and to minimize the uncertainty in the extreme value analysis.</p>
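The Monte Carlo robustness check can be sketched as follows: subsample the pooled record at different sizes, refit the GEV each time, and compare the spread of the resulting 100-year return levels. The pooled annual maxima here are synthetic, not the SEAS5/E-HYPE surrogate years.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# Hypothetical pool of surrogate annual maxima (stand-in for 3450 pooled years)
pool = genextreme.rvs(c=-0.1, loc=100.0, scale=30.0,
                      size=3450, random_state=rng)

def rl100_spread(sample_size, n_rep=100):
    # Monte Carlo spread of the 100-year return level at a given sample size
    levels = []
    for _ in range(n_rep):
        sub = rng.choice(pool, size=sample_size, replace=False)
        c, loc, scale = genextreme.fit(sub)
        levels.append(genextreme.ppf(1.0 - 1.0 / 100.0, c,
                                     loc=loc, scale=scale))
    return float(np.std(levels))
```

Comparing `rl100_spread(23)` with `rl100_spread(500)` mimics the contrast between an observation-length record and the pooled surrogate record.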


2007 ◽  
Vol 10 (06) ◽  
pp. 1043-1075 ◽  
Author(s):  
CARLO MARINELLI ◽  
STEFANO D'ADDONA ◽  
SVETLOZAR T. RACHEV

We compare, in a backtesting study, the performance of univariate models for Value-at-Risk (VaR) and expected shortfall based on stable laws and on extreme value theory (EVT). Analyzing these different approaches, we test whether the sum-stability assumption or the max-stability assumption, which imply α-stable laws and Generalized Extreme Value (GEV) distributions respectively, is more suitable for risk management based on VaR and expected shortfall. Our numerical results indicate that α-stable models tend to outperform pure EVT-based methods (especially those obtained by the so-called block maxima method) in the estimation of Value-at-Risk, while a peaks-over-threshold method turns out to be preferable for the estimation of expected shortfall. We also find empirical evidence that some simple semiparametric EVT-based methods perform well in the estimation of VaR.
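A sketch of the peaks-over-threshold estimators for VaR and expected shortfall (McNeil-Frey-style formulas) is given below; the loss series is synthetic Student-t noise, not the authors' market data.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(8)
losses = rng.standard_t(df=3, size=5000)      # hypothetical daily losses

u = np.quantile(losses, 0.95)                 # POT threshold
exc = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0.0)   # GPD fit to the exceedances
n, n_u = losses.size, exc.size

def var_es_pot(p):
    # POT-based Value-at-Risk and expected shortfall at confidence level p
    var = u + sigma / xi * ((n / n_u * (1.0 - p)) ** (-xi) - 1.0)
    es = (var + sigma - xi * u) / (1.0 - xi)  # valid for xi < 1
    return var, es
```

An α-stable alternative would instead fit a stable law to the full return distribution and read VaR off its quantile function, which is the sum-stability route the paper compares against.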


2018 ◽  
Vol 18 (10) ◽  
pp. 2641-2651 ◽  
Author(s):  
Guillaume Evin ◽  
Thomas Curt ◽  
Nicolas Eckert

Abstract. Very large wildfires have high human, economic, and ecological impacts, so robust evaluation of their return period is crucial. Preventing such events is a major objective of the new fire policy set up in France in 1994, which is oriented towards fast and massive fire suppression. Whereas this policy is probably efficient at reducing the mean burned area (BA), its effect on the largest fires is still unknown. In this study, we make use of statistical extreme value theory (EVT) to compute return periods of very large BAs in southern France, for two distinct periods (1973 to 1994 and 1995 to 2016) and for three pyroclimatic regions characterized by specific fire activities. Bayesian inference and related predictive simulations are used to properly evaluate the related uncertainties. Results demonstrate that the BA corresponding to a return period of 5 years has significantly decreased, but that this is not the case for large return periods (e.g., 50 years). For example, in the most fire-prone region, which includes Corsica and Provence, the median 5-year return level decreased from 5000 to 2400 ha, while the median 50-year return level decreased only from 17 800 to 12 500 ha. This finding is consistent with the recent occurrence of conflagrations of large and intense fires clearly beyond the suppression capacity of firefighters. These fires may belong to a new generation of fires promoted by long-term fuel accumulation, urbanization into the wildland, and ongoing climate change. These findings may help adapt the operational system of fire prevention and suppression to ongoing changes. The proposed methodology may also be useful for other case studies worldwide.
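The Bayesian side of such an analysis can be sketched with a toy random-walk Metropolis sampler over the GEV parameters (flat priors, synthetic burned-area maxima); the study's actual priors, data, and predictive simulations are of course more elaborate.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(9)
# Synthetic annual-maximum burned areas (ha), NOT the French fire record
ba = genextreme.rvs(c=-0.3, loc=3000.0, scale=1500.0,
                    size=44, random_state=rng)

def log_post(theta):
    # Log posterior with flat priors over (shape c, location, log-scale)
    c, loc, log_scale = theta
    return genextreme.logpdf(ba, c, loc=loc, scale=np.exp(log_scale)).sum()

theta = np.array([0.0, ba.mean(), np.log(ba.std())])
lp = log_post(theta)
samples = []
for i in range(5000):
    prop = theta + rng.normal(0.0, [0.05, 50.0, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if i >= 1000:                               # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)

# Posterior distribution of the 50-year return level
rl50 = genextreme.ppf(1.0 - 1.0 / 50.0, samples[:, 0],
                      loc=samples[:, 1], scale=np.exp(samples[:, 2]))
```

Posterior quantiles of `rl50` give the credible intervals on return levels that such a study reports, rather than a single point estimate.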

