On the foundations of multivariate heavy-tail analysis

2004 ◽  
Vol 41 (A) ◽  
pp. 191-212 ◽  
Author(s):  
Sidney Resnick

Univariate heavy-tailed analysis rests on the analytic notion of regularly varying functions. For multivariate heavy-tailed analysis, reliance on functions is awkward because multivariate distribution functions are not natural objects for many purposes and are difficult to manipulate. An approach based on vague convergence of measures makes the differences between univariate and multivariate analysis evaporate. We survey the foundations of the subject and discuss statistical attempts to assess dependence of large values. An exploratory technique is applied to exchange rate return data and shows clear differences in the dependence structure of large values for the Japanese Yen versus German Mark compared with the French Franc versus the German Mark.
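The paper's own exploratory technique is not reproduced here, but a common rank-based diagnostic in the same spirit, estimating the angular distribution of the largest observations, can be sketched as follows (the simulated data and tuning choices, such as k, are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heavy-tailed pairs with a shared shock, so that large
# values tend to occur together (all choices here are illustrative).
n = 5000
z = rng.pareto(2.0, n)                  # common heavy-tailed factor
x = z + rng.pareto(2.0, n)
y = z + rng.pareto(2.0, n)

# Rank-transform each margin to an approximate unit-Pareto scale;
# this sidesteps estimating the marginal tail indices.
rank_x = x.argsort().argsort() + 1      # ranks 1..n
rank_y = y.argsort().argsort() + 1
rx = n / (n + 1 - rank_x)
ry = n / (n + 1 - rank_y)

# Keep the k points with the largest L1 radius and record their angles.
k = 200
r = rx + ry
theta = rx / r                          # angle in [0, 1]
angles = theta[np.argsort(r)[-k:]]

# Angular mass near 0 or 1 suggests extremes occur one coordinate at a
# time; mass in the interior suggests dependence of large values.
print(angles.mean())
```

A histogram of `angles` is the usual way to read off the dependence structure of the large values.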


2011 ◽  
Vol 43 (2) ◽  
pp. 504-523 ◽  
Author(s):  
Yann Demichel ◽  
Anne Estrade ◽  
Marie Kratz ◽  
Gennady Samorodnitsky

The modeling of random bi-phasic, or porous, media has been, and still is, under active investigation by mathematicians, physicists, and physicians. In this paper we consider a thresholded random process X as a source of the two phases. The intervals when X is in a given phase, named chords, are the subject of interest. We focus on the study of the tails of the chord length distribution functions. In the literature concerned with real data, different types of tail behavior have been reported, among them exponential-like or power-like decay. We look for the link between the dependence structure of the underlying thresholded process X and the rate of decay of the chord length distribution. When the process X is a stationary Gaussian process, we relate the latter to the rate at which the covariance function of X decays at large lags. We show that exponential, or nearly exponential, decay of the tail of the distribution of the chord lengths is very common, perhaps surprisingly so.
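A minimal simulation of the setup, using a discrete moving-average stand-in for the stationary Gaussian process and an illustrative threshold (neither taken from the paper), shows how chords and their empirical tail can be extracted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete stand-in for a stationary Gaussian process: a Gaussian
# moving average, whose covariance decays over roughly `window` lags.
n, window = 200_000, 20
noise = rng.standard_normal(n + window)
kernel = np.ones(window) / np.sqrt(window)   # keeps unit variance
X = np.convolve(noise, kernel, mode="valid")[:n]

# Threshold at level u; the two phases are {X > u} and {X <= u}.
u = 0.0
phase = X > u

# Chord lengths: run lengths of consecutive samples in one phase.
changes = np.flatnonzero(np.diff(phase.astype(int)) != 0)
run_lengths = np.diff(np.concatenate(([0], changes + 1, [n])))

# Empirical tail of the chord-length distribution; an exponential-like
# tail shows up as roughly linear decay of log P(L > t).
t = np.arange(1, run_lengths.max() + 1)
tail = np.array([(run_lengths > s).mean() for s in t])
```

Plotting `np.log(tail)` against `t` is the quickest visual check for exponential-type versus power-type decay.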


Author(s):  
Stefan Thurner ◽  
Rudolf Hanel ◽  
Peter Klimek

Phenomena, systems, and processes are rarely purely deterministic, but contain stochastic, probabilistic, or random components. For that reason, a probabilistic description of most phenomena is necessary. Probability theory provides us with the tools for this task. Here, we provide a crash course on the most important notions of probability and random processes, such as odds, probability, expectation, variance, and so on. We describe the most elementary stochastic event, the trial, and develop the notion of urn models. We discuss basic facts about random variables and the elementary operations that can be performed on them. We learn how to compose simple stochastic processes from elementary stochastic events, and discuss random processes as temporal sequences of trials, such as Bernoulli and Markov processes. We touch upon the basic logic of Bayesian reasoning. We discuss a number of classical distribution functions, including power laws and other fat- or heavy-tailed distributions.
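As a small illustration of a temporal sequence of trials, the sketch below simulates a two-state Markov process with a made-up transition matrix and compares the long-run state frequencies with the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state Markov chain; row i of P holds the probabilities of moving
# out of state i. The matrix is illustrative, not from the text.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

n = 100_000
u = rng.uniform(size=n)
state = 0
counts = np.zeros(2)
for t in range(n):
    # Move to state 0 with probability P[state, 0], else to state 1.
    state = 0 if u[t] < P[state, 0] else 1
    counts[state] += 1
empirical = counts / n

# The long-run fractions approach the stationary distribution pi,
# which solves pi @ P = pi; for this chain pi = (0.8, 0.2).
pi = np.array([0.8, 0.2])
print(empirical)
```

Setting both rows of `P` equal would make the trials independent, i.e. reduce the Markov process to a Bernoulli process.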


2021 ◽  
Author(s):  
Kai Chen ◽  
Twan van Laarhoven ◽  
Elena Marchiori

Long-term forecasting involves predicting a horizon that is far ahead of the last observation. It is a problem of high practical relevance, for instance for companies deciding on expensive long-term investments. Despite the recent progress and success of Gaussian processes (GPs) based on spectral mixture kernels, long-term forecasting remains a challenging problem for these kernels because they decay exponentially at large horizons. This is mainly due to their use of a mixture of Gaussians to model spectral densities. Characteristics of the signal important for long-term forecasting can be unravelled by investigating the distribution of the Fourier coefficients of (the training part of) the signal, which is non-smooth, heavy-tailed, sparse, and skewed. The heavy tail and skewness of such distributions in the spectral domain make it possible to capture long-range covariance of the signal in the time domain. Motivated by these observations, we propose to model spectral densities using a skewed Laplace spectral mixture (SLSM), owing to the skewness of its peaks, sparsity, non-smoothness, and heavy-tail characteristics. By applying the inverse Fourier transform to this spectral density we obtain a new GP kernel for long-term forecasting. In addition, we adapt the lottery ticket method, originally developed to prune weights of a neural network, to GPs in order to automatically select the number of kernel components. Results of extensive experiments, including a multivariate time series, show the beneficial effect of the proposed SLSM kernel for long-term extrapolation and its robustness to the choice of the number of mixture components.
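The contrast between Gaussian and heavier-tailed spectral densities can be illustrated with the classical Fourier-transform pairs; this shows only the symmetric (non-skewed) Laplace case, not the SLSM kernel itself:

```python
import numpy as np

# By Bochner's theorem, a stationary kernel is the Fourier transform of
# its spectral density. Two standard pairs, with unit scale parameters:
#   Gaussian spectral density  -> Gaussian (squared-exponential) kernel
#   Laplace spectral density   -> Cauchy/Lorentzian-type kernel
tau = np.linspace(0.0, 50.0, 501)        # time lags

k_gauss = np.exp(-0.5 * tau ** 2)        # decays exponentially fast
k_laplace = 1.0 / (1.0 + tau ** 2)       # decays only polynomially

# At a long horizon the Laplace-induced kernel retains far more
# covariance, which is what helps long-term extrapolation.
print(k_gauss[-1], k_laplace[-1])
```

The polynomial decay of `k_laplace` at large lags is the time-domain counterpart of the heavy tail of the Laplace spectral density.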


2016 ◽  
Vol 9 (2) ◽  
Author(s):  
Farrukh Javed ◽  
Krzysztof Podgórski

The APARCH model attempts to capture asymmetric responses of volatility to positive and negative 'news shocks', the phenomenon known as the leverage effect. Despite its potential, the model's properties have not yet been fully investigated. While the capacity to account for leverage is clear from the defining structure, little is known about how the effect is quantified in terms of the model's parameters. The same applies to the quantification of heavy-tailedness and dependence. To fill this void, we study the model in further detail. We study conditions for its existence in different metrics and obtain explicit characteristics: skewness, kurtosis, correlations and leverage. Utilizing these results, we analyze the roles of the parameters and discuss statistical inference. We also propose an extension of the model. Through theoretical results we demonstrate that the model can produce heavy-tailed data. We illustrate these properties using S&P500 data and country indices for dominant European economies.
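A minimal simulation of the APARCH(1,1) recursion, with illustrative (not estimated) parameter values, shows the leverage effect as a negative correlation between past returns and current volatility:

```python
import numpy as np

rng = np.random.default_rng(2)

# APARCH(1,1): sigma_t^delta = omega + alpha*(|e_{t-1}| - gamma*e_{t-1})^delta
#                                    + beta*sigma_{t-1}^delta,  e_t = sigma_t * z_t.
# Parameter values below are illustrative, not estimates from S&P500 data.
omega, alpha, beta, gamma, delta = 0.05, 0.08, 0.90, 0.5, 1.5
n = 10_000

z = rng.standard_normal(n)
sigma_d = np.empty(n)                # sigma_t ** delta
eps = np.empty(n)                    # returns
sigma_d[0] = omega / (1 - beta)
eps[0] = sigma_d[0] ** (1 / delta) * z[0]
for t in range(1, n):
    shock = abs(eps[t - 1]) - gamma * eps[t - 1]   # asymmetric news impact
    sigma_d[t] = omega + alpha * shock ** delta + beta * sigma_d[t - 1]
    eps[t] = sigma_d[t] ** (1 / delta) * z[t]

# Leverage: with gamma > 0, negative shocks raise volatility more, so the
# correlation between eps_{t-1} and sigma_t should be negative.
lev = np.corrcoef(eps[:-1], sigma_d[1:] ** (1 / delta))[0, 1]
print(lev)
```

Setting `gamma = 0` and `delta = 2` recovers a plain GARCH(1,1), which is one way to see which parameters carry the asymmetry.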


1982 ◽  
Vol 19 (A) ◽  
pp. 359-365 ◽  
Author(s):  
David Pollard

The theory of weak convergence has developed into an extensive and useful, but technical, subject. One of its most important applications is in the study of empirical distribution functions: the explication of the asymptotic behavior of the Kolmogorov goodness-of-fit statistic is one of its greatest successes. In this article a simple method for understanding this aspect of the subject is sketched. The starting point is Doob's heuristic approach to the Kolmogorov-Smirnov theorems, and the rigorous justification of that approach offered by Donsker. The ideas can be carried over to other applications of weak convergence theory.
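As a concrete instance of this circle of ideas, the sketch below computes the Kolmogorov-Smirnov statistic for a uniform sample and evaluates the tail of its Brownian-bridge limit from Donsker's theorem (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# KS statistic: sup distance between the empirical CDF of a uniform(0,1)
# sample and the true CDF, scaled by sqrt(n).
n = 2000
u = np.sort(rng.uniform(size=n))
i = np.arange(1, n + 1)
d_n = max(np.max(i / n - u), np.max(u - (i - 1) / n))
ks = np.sqrt(n) * d_n

# Donsker: sqrt(n) * D_n converges to the supremum of a Brownian bridge,
# with P(sup > x) = 2 * sum_{k>=1} (-1)^(k+1) * exp(-2 * k^2 * x^2).
def kolmogorov_tail(x, terms=100):
    k = np.arange(1, terms + 1)
    return float(2.0 * np.sum((-1.0) ** (k + 1) * np.exp(-2.0 * k**2 * x**2)))

p_value = kolmogorov_tail(ks)
print(ks, p_value)
```

The classical 5% critical value 1.36 drops out of the same series: `kolmogorov_tail(1.36)` is approximately 0.049.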


2021 ◽  
Vol 14 (5) ◽  
pp. 202
Author(s):  
Miriam Hägele ◽  
Jaakko Lehtomaa

Modern risk modelling approaches deal with vectors of multiple components. The components could be, for example, returns of financial instruments or losses within an insurance portfolio concerning different lines of business. One of the main problems is to decide if there is any type of dependence between the components of the vector and, if so, what type of dependence structure should be used for accurate modelling. We study a class of heavy-tailed multivariate random vectors under a non-parametric shape constraint on the tail decay rate. This class contains, for instance, elliptical distributions whose tail is in the intermediate heavy-tailed regime, which includes Weibull and lognormal type tails. The study derives asymptotic approximations for tail events of random walks. Consequently, a full large deviations principle is obtained under, essentially, minimal assumptions. As an application, an optimisation method for a large class of Quota Share (QS) risk sharing schemes used in insurance and finance is obtained.
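The flavour of such tail approximations for random walks can be illustrated with the "single big jump" behaviour of subexponential steps; the lognormal parameters and level below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Lognormal steps are subexponential (intermediate heavy-tailed), so for
# large t, P(S_n > t) is close to n * P(X_1 > t): the walk exceeds a high
# level essentially because a single step does.
n_steps, n_paths, t = 5, 200_000, 500.0
X = rng.lognormal(mean=0.0, sigma=2.0, size=(n_paths, n_steps))
S = X.sum(axis=1)

p_sum = (S > t).mean()                   # Monte Carlo estimate of P(S_n > t)
p_jump = n_steps * (X > t).mean()        # n * P(X_1 > t), pooled over all steps
ratio = p_sum / p_jump
print(p_sum, p_jump, ratio)
```

As `t` grows the ratio tends to 1; light-tailed steps (e.g. exponential) behave very differently, which is why the tail regime matters for risk sharing.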


2007 ◽  
Author(s):  
Αριστείδης Νικολουλόπουλος

Studying associations among multivariate outcomes is an interesting problem in statistical science. The dependence between random variables is completely described by their multivariate distribution. When the multivariate distribution has a simple form, standard methods can be used to make inference. On the other hand, one may create multivariate distributions based on particular assumptions, thus limiting their use. Unfortunately, these limitations occur very often when working with multivariate discrete distributions. Some multivariate discrete distributions used in practice have only certain properties; for example, they allow only for positive dependence, or their marginal distributions must be of a given form. Copulas seem to be a promising solution to this problem. Copulas are a currently fashionable way to model multivariate data, as they account for the dependence structure and provide a flexible representation of the multivariate distribution. Furthermore, for copulas the dependence properties can be separated from the marginal properties, and multivariate models with marginal densities of arbitrary form can be constructed, allowing a wide range of possible association structures. In fact, they allow for flexible dependence modelling, different from assuming simple linear correlation structures. However, in the application of copulas to discrete data the marginal parameters also affect the dependence structure, and hence the dependence properties are not fully separated from the marginal properties. Introducing covariates to describe the dependence by modelling the copula parameters is of special interest in this thesis. Covariate information can thus describe the dependence either indirectly, through the marginal parameters, or directly, through the parameters of the copula. We examine the case when the covariates are used in the marginal and/or copula parameters, aiming at a highly flexible model producing very elegant dependence structures.
Furthermore, the literature contains many theoretical results and families of copulas with several properties, but few papers compare the copula families or discuss model selection among candidate copula models. This leaves open the question of which copulas are appropriate and whether, from real data, we are able to select the true copula that generated the data among a series of candidates with, perhaps, very similar dependence properties. We examined a large set of candidate copula families, taking into account properties like concordance and tail dependence. The comparison is made theoretically using Kullback-Leibler distances between them. We selected this distance because of its close relationship with the log-likelihood, so it can provide interesting insight into the likelihood-based procedures used in practice. Furthermore, a goodness-of-fit test based on the Mahalanobis distance, computed through a parametric bootstrap, is provided. Moreover, we adopt a model averaging approach to copula modelling, based on the non-parametric bootstrap. Our intention is not to underestimate variability but to add the variability induced by model selection, making the precision of the estimate unconditional on the selected model. Our estimates are thus synthesized from several different candidate copula models and can have a flexible dependence structure. Given the extensive literature on copulas for multivariate continuous data, we concentrated on fitting copulas to multivariate discrete data. The applications of multivariate copula models for discrete data are limited. Usually one has to trade off between models with limited dependence (e.g. only positive association) and models with flexible dependence but computational intractabilities. For example, the elliptical copulas provide a wide range of flexible dependence, but do not have closed-form cumulative distribution functions.
Thus one needs to evaluate the multivariate copula, and hence a multivariate integral, repeatedly, a large number of times. This can be time consuming and, because of the numerical approach used to evaluate a multivariate integral, may also produce round-off errors. On the other hand, multivariate Archimedean copulas, partially symmetric m-variate copulas with m − 1 dependence parameters, and copulas that are mixtures of max-infinitely divisible bivariate copulas have closed-form cumulative distribution functions, so computations are easy, but they allow only positive dependence among the random variables. A bridge between these two problems might be a copula family that has a simple form for its distribution function while allowing for negative dependence among the variables. We define such a multivariate copula family by exploiting a finite mixture of simple uncorrelated normal distributions. Since the correlation vanishes, the cumulative distribution function is simply the product of univariate normal cumulative distribution functions. The mixing operation introduces dependence. Hence we obtain flexible dependence, including negative dependence.
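The mixing mechanism can be illustrated directly on the mixture distribution itself (the thesis works with the induced copula; the component means and weights below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Mix two independent (zero-correlation) bivariate normals with opposite
# mean shifts. Each component has independent coordinates, yet the
# mixture as a whole is negatively dependent.
n = 50_000
comp = rng.uniform(size=n) < 0.5
mu = np.where(comp, 2.0, -2.0)
x = mu + rng.standard_normal(n)      # component mean +mu in x ...
y = -mu + rng.standard_normal(n)     # ... and -mu in y
rho = np.corrcoef(x, y)[0, 1]
print(rho)
```

Here Cov(x, y) = -Var(mu) = -4 and Var(x) = Var(y) = 5, so the true correlation is -0.8, even though the joint CDF of each component factors into a product of univariate normal CDFs.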


2001 ◽  
Vol 40 (4II) ◽  
pp. 885-897
Author(s):  
Razzaque H. Bhatti

Pak-rupee exchange rates vis-à-vis many currencies of the industrial world have weakened continuously and persistently since Pakistan abandoned fixed exchange rates in April 1982. This proposition is strongly supported by descriptive statistics, as shown in Table 1, such as the mean, standard deviation and coefficient of variation of six Pak-rupee exchange rates (against the U.S. dollar, British pound, German mark, Japanese yen, Swiss franc and French franc) over the period 1982q1-2000q4. Based on these descriptive statistics, it is evident that the Pak rupee has depreciated persistently against all of the industrial-country currencies in question over the period under investigation; for example, since April 1982 it has depreciated by 324.05 percent against the British pound, 406.36 percent against the U.S. dollar, 344.53 percent against the French franc, 498.48 percent against the Swiss franc, 477.78 percent against the German mark and 986.25 percent against the Japanese yen. As evidenced by the coefficient of variation, the Pak rupee has weakened enormously against all of these currencies, most alarmingly against the Japanese yen, Swiss franc and German mark.
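For readers unfamiliar with these descriptive statistics, the sketch below computes a percentage depreciation (measured as the rise in rupees per unit of foreign currency) and a coefficient of variation from made-up numbers, not the paper's data:

```python
import numpy as np

# Hypothetical PKR-per-USD snapshots over the sample period
# (invented for illustration; see Table 1 of the paper for real figures).
rate = np.array([12.7, 18.0, 24.8, 31.6, 46.9, 58.4])

depreciation_pct = 100 * (rate[-1] - rate[0]) / rate[0]
mean = rate.mean()
std = rate.std(ddof=1)
cv = std / mean          # coefficient of variation: dispersion per unit mean

print(round(depreciation_pct, 1), round(cv, 3))
```

Because the coefficient of variation is scale-free, it allows the variability of rates quoted against different currencies to be compared directly.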

