EXPECTED SHORTFALL DENGAN PENDEKATAN GLOSTEN-JAGANNATHAN-RUNKLE GARCH DAN GENERALIZED PARETO DISTRIBUTION

2020 ◽  
Vol 9 (4) ◽  
pp. 505-514
Author(s):  
Lina Tanasya ◽  
Di Asih I Maruddani ◽  
Tarno Tarno

Stock is a type of investment in financial assets that attracts many investors. When investing, investors must calculate the expected return on stocks and take note of the risks involved. Several methods can be used to measure the level of risk, one of which is Value at Risk (VaR); however, this method often fails to be a coherent risk measure because it does not satisfy subadditivity. Therefore, the Expected Shortfall (ES) method is used to overcome this weakness. Stock return data are time series data that exhibit heteroscedasticity and heavy tails; the time series model used to handle heteroscedasticity is GARCH, while the theory for analyzing heavy tails is Extreme Value Theory (EVT). In this study, a leverage effect is also present, so the asymmetric Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model and the EVT approach with the Generalized Pareto Distribution (GPD) are used to calculate the ES of the stock returns of PT Bank Central Asia Tbk for the period May 1, 2012 to January 31, 2020. The best model selected was ARIMA(1,0,1)-GJR-GARCH(1,2). At the 95% confidence level, the one-day-ahead risk obtained by investors from the combined GJR-GARCH and GPD calculation is 0.7147%, exceeding the VaR value of 0.6925%.
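The final step described in this abstract, obtaining VaR and ES from a GPD fitted to the tail, follows the standard peaks-over-threshold formulas. A minimal sketch of those formulas in Python (the function name and inputs are illustrative, not the authors' code; `xi` and `beta` stand for the fitted GPD shape and scale):

```python
def gpd_var_es(n_total, n_exceed, u, xi, beta, alpha=0.95):
    """Peaks-over-threshold VaR and ES: u is the threshold, n_exceed
    the number of exceedances over u among n_total observations,
    xi/beta the fitted GPD shape/scale.
    Requires xi != 0 and xi < 1 for the ES formula to exist."""
    var = u + (beta / xi) * ((n_total / n_exceed * (1 - alpha)) ** (-xi) - 1)
    es = (var + beta - xi * u) / (1 - xi)  # ES always exceeds VaR when beta > 0
    return var, es
```

With these formulas ES is larger than VaR at the same confidence level, consistent with the 0.7147% vs 0.6925% figures reported above.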

Author(s):  
Meelis Käärik ◽  
Anastassia Žegulova

The estimation of a loss distribution and the analysis of its properties is a key issue in several financial-mathematical and actuarial applications. It is common to apply the tools of extreme value theory and the generalized Pareto distribution to problems involving heavy-tailed data. Our main goal is to study third-party liability claims data obtained from the Estonian Traffic Insurance Fund (ETIF). The data are quite typical of insurance claims, containing a very large number of observations and being heavy-tailed. In our approach the fitting consists of two parts: for the main part of the distribution we use a lognormal fit (which was the most suitable based on our previous studies), and a generalized Pareto distribution is used for the tail. The main emphasis of the fitting techniques is on proper threshold selection. We seek stability of the parameter estimates and study the behaviour of risk measures over a wide range of thresholds. Two related lemmas are proved.
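A two-part (spliced) fit of this kind, with a lognormal body below a threshold and a GPD tail above it, can be written as a single CDF. The sketch below is a generic illustration under that splicing assumption, not the authors' implementation; `p_tail` denotes the empirical probability of exceeding the threshold `u`:

```python
import math

def spliced_cdf(x, u, p_tail, mu, sigma, xi, beta):
    """Spliced CDF: lognormal(mu, sigma) body on (0, u], GPD(xi, beta)
    tail above u, glued so that F(u) = 1 - p_tail (continuity at u)."""
    def lognorm_cdf(y):
        return 0.5 * (1.0 + math.erf((math.log(y) - mu) / (sigma * math.sqrt(2.0))))
    if x <= u:
        # rescale the body so it carries exactly mass 1 - p_tail below u
        return (1.0 - p_tail) * lognorm_cdf(x) / lognorm_cdf(u)
    z = (x - u) / beta
    return 1.0 - p_tail * (1.0 + xi * z) ** (-1.0 / xi)
```

Threshold selection then amounts to choosing `u` so that the tail parameters, and the risk measures derived from them, are stable; that stability study is what the abstract describes.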


2010 ◽  
Vol 7 (4) ◽  
pp. 4957-4994 ◽  
Author(s):  
R. Deidda

Abstract. Previous studies indicate that the generalized Pareto distribution (GPD) is a suitable distribution function to reliably describe the exceedances of daily rainfall records above a proper optimum threshold, which should be selected as small as possible to retain the largest sample while still assuring an acceptable fit. Such an optimum threshold may differ from site to site, consequently affecting not only the GPD scale parameter but also the probability of threshold exceedance. Thus, a first objective of this paper is to derive expressions to parameterize a simple threshold-invariant three-parameter distribution function that describes both zero and non-zero values of rainfall time series while assuring a perfect overlap with the GPD fitted on the exceedances of any threshold larger than the optimum one. Since the proposed distribution does not depend on the local thresholds adopted for fitting the GPD, it reflects only the on-site climatic signature and thus appears particularly suitable for hydrological applications and regional analyses. A second objective is to develop and test the Multiple Threshold Method (MTM), which infers the parameters of interest from the exceedances of a wide range of thresholds, again using the concept of threshold invariance of the parameters. We show the ability of the MTM to fit historical daily rainfall time series recorded at different resolutions. Finally, we demonstrate the superiority of the MTM fit over the standard single-threshold fit, often adopted for partial-duration series, by evaluating and comparing their performance on Monte Carlo samples drawn from GPDs with different shape and scale parameters and different discretizations.
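The threshold invariance the MTM exploits can be checked numerically: if the excesses over a threshold follow a GPD(ξ, β), then the excesses over any higher threshold v are again GPD with the same shape ξ and scale β + ξv. A small pure-Python sketch under those textbook assumptions (names are illustrative; the sampler uses inverse-transform sampling):

```python
import random

def gpd_sample(n, xi, beta, seed=42):
    """Draw n GPD(xi, beta) excesses by inverting the CDF
    F(x) = 1 - (1 + xi*x/beta)**(-1/xi)."""
    rng = random.Random(seed)
    return [(beta / xi) * ((1.0 - rng.random()) ** (-xi) - 1.0) for _ in range(n)]

# Excesses over a higher threshold v are again GPD with the same shape xi
# and scale beta + xi*v, so their mean should be (beta + xi*v) / (1 - xi).
sample = gpd_sample(200_000, xi=0.2, beta=1.0)
v = 2.0
excesses = [x - v for x in sample if x > v]
mean_theory = (1.0 + 0.2 * v) / (1.0 - 0.2)   # = 1.75
mean_emp = sum(excesses) / len(excesses)
```

Fitting the GPD over a range of thresholds and requiring the (threshold-adjusted) parameters to agree is the essence of the multiple-threshold idea.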


Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 93 ◽  
Author(s):  
Jan Vrba ◽  
Jan Mareš

Recently, the concept of evaluating an unusually large learning effort of an adaptive system in order to detect novelties in the observed data was introduced. The present paper introduces a new measure of the learning effort of an adaptive system. The proposed method also uses adaptable parameters. Instead of a multi-scale enhanced approach, the generalized Pareto distribution is employed to estimate the probability of unusual updates and to detect novelties. The measure was successfully tested in various scenarios involving (i) synthetic data and (ii) real time-series datasets, together with multiple adaptive filters and learning algorithms. The results of these experiments are presented.
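The core idea, flagging an update as a novelty when its learning effort has a very small tail probability under a GPD fitted to past exceedances, can be sketched as follows. This is a generic peaks-over-threshold sketch with a method-of-moments fit, not the paper's exact estimator; all names are illustrative:

```python
import math

def fit_gpd_moments(excesses):
    """Method-of-moments GPD fit (valid when the shape xi < 1/2):
    xi = (1 - m^2/s^2) / 2 and beta = m * (1 - xi)."""
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((x - m) ** 2 for x in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / s2)
    return xi, m * (1.0 - xi)

def novelty_pvalue(effort, history, u):
    """P(learning effort > `effort`) under a GPD fitted to the
    exceedances of `history` over the threshold u; a tiny value
    signals a novelty."""
    exc = [h - u for h in history if h > u]
    xi, beta = fit_gpd_moments(exc)
    p_u = len(exc) / len(history)        # empirical P(effort > u)
    if effort <= u:
        return p_u
    z = (effort - u) / beta
    if abs(xi) < 1e-12:                  # exponential limit of the GPD
        return p_u * math.exp(-z)
    return p_u * max(0.0, 1.0 + xi * z) ** (-1.0 / xi)
```

A detector would compare this tail probability against a fixed significance level and raise a novelty flag when it falls below it.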


2010 ◽  
Vol 14 (12) ◽  
pp. 2559-2575 ◽  
Author(s):  
R. Deidda

Abstract. Previous studies indicate that the generalized Pareto distribution (GPD) is a suitable distribution function to reliably describe the exceedances of daily rainfall records above a proper optimum threshold, which should be selected as small as possible to retain the largest sample while still assuring an acceptable fit. Such an optimum threshold may differ from site to site, consequently affecting not only the GPD scale parameter but also the probability of threshold exceedance. Thus, a first objective of this paper is to derive expressions to parameterize a simple threshold-invariant three-parameter distribution function that assures a perfect overlap with the GPD fitted on the exceedances over any threshold larger than the optimum one. Since the proposed distribution does not depend on the local thresholds adopted for fitting the GPD, it is expected to reflect the on-site climatic signature and thus appears particularly suitable for hydrological applications and regional analyses. A second objective is to develop and test the Multiple Threshold Method (MTM), which infers the parameters of interest from exceedances over a wide range of thresholds, again applying the concept of threshold invariance of the parameters. We show the ability of the MTM to fit historical daily rainfall time series recorded at different resolutions and with a significant percentage of heavily quantized data. Finally, we demonstrate the superiority of the MTM fit over the standard single-threshold fit, often adopted for partial-duration series, by evaluating and comparing their performance on Monte Carlo samples drawn from GPDs with different shape and scale parameters and different discretizations.


2012 ◽  
Vol 1 (33) ◽  
pp. 42
Author(s):  
Pietro Bernardara ◽  
Franck Mazas ◽  
Jérôme Weiss ◽  
Marc Andreewsky ◽  
Xavier Kergadallan ◽  
...  

In the general framework of over-threshold modelling (OTM) for estimating extreme values of met-ocean variables such as waves, surges, or water levels, threshold selection logically requires two steps: first, the physical declustering of the time series of the variable in order to obtain a sample of independent and identically distributed data; then, the application of extreme value theory, which predicts the convergence of the upper part of the sample toward the Generalized Pareto Distribution. These two steps were often merged and confused in the past. A clear framework for distinguishing them is presented here, together with a review of the methods available in the literature to carry out each step and two simple, practical examples.
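The first of the two steps, declustering the series so that the exceedances retained are approximately independent, is commonly done with runs declustering: consecutive exceedances separated by fewer than a fixed number of below-threshold observations are merged into one cluster, and only each cluster's maximum is kept. A minimal sketch of that standard scheme (parameter names are illustrative):

```python
def decluster(series, threshold, min_gap):
    """Runs declustering: keep one peak (the maximum) per cluster of
    exceedances; a cluster ends after `min_gap` consecutive
    below-threshold observations."""
    peaks, current, gap = [], None, 0
    for x in series:
        if x > threshold:
            current = x if current is None else max(current, x)
            gap = 0
        else:
            gap += 1
            if current is not None and gap >= min_gap:
                peaks.append(current)
                current = None
    if current is not None:
        peaks.append(current)
    return peaks
```

The resulting peaks are then the sample to which extreme value theory, the second step, is applied.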


2019 ◽  
Author(s):  
Riccardo Zucca ◽  
Xerxes D. Arsiwalla ◽  
Hoang Le ◽  
Mikail Rubinov ◽  
Antoni Gurguí ◽  
...  

Abstract. Are degree distributions of human brain functional connectivity networks heavy-tailed? Initial claims based on least-squares fitting suggested that brain functional connectivity networks obey power-law scaling in their degree distributions. This interpretation has been challenged on methodological grounds. Subsequently, estimators based on maximum likelihood and non-parametric tests involving surrogate data have been proposed. No clear consensus has emerged, as results depended especially on data resolution. Identifying the underlying topological distribution of brain functional connectivity calls for a closer examination of the relationship between resolution and the statistics of model fitting. In this study, we analyze high-resolution functional magnetic resonance imaging (fMRI) data from the Human Connectome Project to assess its degree distribution across resolutions. We consider resolutions from one thousand to eighty thousand regions of interest (ROIs) and test whether they follow a heavy- or short-tailed distribution. We analyze power-law, exponential, truncated power-law, log-normal, Weibull, and generalized Pareto probability distributions. Notably, the generalized Pareto distribution is of particular interest since it interpolates between heavy-tailed and short-tailed distributions, and it provides a handle for estimating the tail's heaviness or shortness directly from the data. Our results show that the statistics support the short-tailed limit of the generalized Pareto distribution, rather than a power law or any other heavy-tailed distribution. Working across resolutions of the data and performing cross-model comparisons, we further establish the overall robustness of the generalized Pareto model in explaining the data. Moreover, we account for earlier ambiguities by showing that down-sampling the data systematically affects statistical results.
At lower resolutions, models cannot easily be differentiated on statistical grounds, while their plausibility consistently increases up to an upper bound. Indeed, more power-law distributions are reported at low resolutions (5K) than at higher ones (50K or 80K). However, we show that these positive identifications at low resolutions fail cross-model comparisons and that down-sampling data introduces the risk of detecting spurious heavy-tailed distributions. This dependence of the statistics of degree distributions on sampling resolution has broader implications for neuroinformatic methodology, especially when several analyses rely on down-sampled data, for instance due to a choice of anatomical parcellation or measurement technique. Our finding that the node degrees of human brain functional networks follow a short-tailed distribution has important implications for claims about brain organization and function. Our findings do not support common simplistic representations of the brain as a generic complex system with optimally efficient architecture and function, modeled with simple growth mechanisms. Instead, these findings reflect a more nuanced picture of a biological system that has been shaped by longstanding and pervasive developmental and architectural constraints, including wiring-cost constraints on the centrality architecture of individual nodes.
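The interpolation property that makes the generalized Pareto distribution attractive here comes directly from the sign of its shape parameter ξ: positive ξ gives a power-law (heavy) tail, ξ near zero an exponential tail, and negative ξ a bounded (short) tail with a finite upper endpoint. A small illustrative sketch (function names and the tolerance are illustrative, not from the study):

```python
import math

def gpd_upper_endpoint(u, xi, beta):
    """Upper endpoint of a GPD over threshold u: finite (u - beta/xi)
    when xi < 0, infinite otherwise."""
    return u - beta / xi if xi < 0 else math.inf

def tail_class(xi, tol=0.05):
    """Read the tail type off the fitted GPD shape parameter."""
    if xi > tol:
        return "heavy-tailed"
    if xi < -tol:
        return "short-tailed"
    return "exponential-type"
```

A fitted ξ that is reliably negative across resolutions is what the "short-tailed limit" conclusion above corresponds to.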


2019 ◽  
Vol 17 (4) ◽  
pp. 56
Author(s):  
Jaime Enrique Lincovil ◽  
Chang Chiann

Evaluating forecasts of risk measures, such as value-at-risk (VaR) and expected shortfall (ES), is an important process for financial institutions. Backtesting procedures were introduced to assess the efficiency of these forecasts. In this paper, we compare the empirical power of new classes of backtests, for VaR and ES, from the statistical literature. Further, we employ these procedures to evaluate the efficiency of the forecasts generated by both the Historical Simulation method and two methods based on the Generalized Pareto Distribution. For evaluating VaR forecasts, the empirical power of the Geometric-VaR class of backtests was, in general, higher than that of the other tests in the simulated scenarios. This supports the advantages of using defined time periods and covariates in the test procedures. On the other hand, for evaluating ES forecasts, backtesting methods based on the conditional distribution of returns relative to the VaR performed well with large sample sizes. Additionally, we show that the method based on the generalized Pareto distribution using durations and covariates has optimal performance in forecasts of VaR and ES, according to the backtests.
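The classical baseline that the newer backtest classes build on is Kupiec's unconditional-coverage (proportion-of-failures) test, which only counts VaR violations; the duration- and covariate-based tests discussed above refine this idea. A minimal sketch of the baseline:

```python
import math

def kupiec_pof(violations, n, alpha):
    """Kupiec proportion-of-failures LR statistic: under correct
    coverage, violations ~ Binomial(n, 1 - alpha) and the statistic
    is asymptotically chi-squared with 1 degree of freedom
    (reject at the 5% level when it exceeds 3.841)."""
    p = 1.0 - alpha                       # expected violation rate
    x, phat = violations, violations / n
    if x == 0:
        return -2.0 * n * math.log(1.0 - p)
    if x == n:
        return -2.0 * n * math.log(p)
    return -2.0 * (x * math.log(p / phat)
                   + (n - x) * math.log((1.0 - p) / (1.0 - phat)))
```

The test rejects both when violations are too frequent (VaR too low) and when they are too rare (VaR too conservative).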


2020 ◽  
Author(s):  
Pauline Rivoire ◽  
Olivia Martius ◽  
Philippe Naveau

Both mean and extreme precipitation are highly relevant, and a probability distribution that models the entire precipitation distribution therefore provides important information. Very low and extremely high precipitation amounts have traditionally been modeled separately. Gamma distributions are often used to model low and moderate precipitation amounts, and extreme value theory makes it possible to model the upper tail of the distribution. However, difficulties arise when linking the upper and lower tails. One solution is to define a threshold that separates the distribution into extreme and non-extreme values, but the assignment of such a threshold for many locations is not trivial.

Here we apply the Extended Generalized Pareto Distribution (EGPD) used by Tencaliec et al. (2019). This method overcomes the problem of finding a threshold between upper and lower tails thanks to a transition function G that describes the transition between the empirical distribution of precipitation and a Pareto distribution. The transition cumulative distribution function G has to be constrained by the upper-tail and lower-tail behavior, and can be estimated using Bernstein polynomials.

The EGPD is used here to characterize ERA-5 precipitation. ERA-5 is a new ECMWF climate re-analysis dataset that provides a numerical description of the recent climate by combining a numerical weather model with observations. The dataset is global, has a spatial resolution of 0.25°, and currently covers the period from 1979 to the present.

ERA-5 daily precipitation is compared to E-OBS, a gridded dataset spatially interpolated from observations over Europe, and to CMORPH, a satellite-based global precipitation product. Simultaneous occurrence of extreme events is assessed with a hit rate. An intensity comparison is conducted with return-level confidence intervals and a Kullback-Leibler divergence test, both derived from the EGPD.

Overall, extreme event occurrences in ERA-5 and E-OBS over Europe appear to agree. The presence of overlap between the 95% confidence intervals on return levels depends strongly on the season and the probability of occurrence.
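A commonly used simple EGPD family takes the transition function G(v) = v^κ, giving the closed-form CDF F(x) = [H(x)]^κ with H the GPD CDF; the Bernstein-polynomial estimate of G described above generalizes this power function. A sketch of the simple case (parameter names are illustrative):

```python
def egpd_cdf(x, kappa, xi, sigma):
    """Extended GPD with G(v) = v**kappa: kappa shapes the lower tail
    (light precipitation) while xi/sigma control the GPD upper tail,
    so no threshold between the two regimes is needed."""
    h = 1.0 - (1.0 + xi * x / sigma) ** (-1.0 / xi)  # GPD cdf H(x)
    return h ** kappa
```

Because the GPD controls the upper tail, return levels and their confidence intervals can be read directly off the fitted parameters, which is how the intensity comparison above is derived from the EGPD.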

