A Boolean-Valued Approach to the Analysis of Conditional Risk

Author(s):  
J.M. Zapata

By means of the techniques of Boolean-valued analysis, we provide a transfer principle between the duality theory of classical convex risk measures and the duality theory of conditional risk measures. Namely, a conditional risk measure can be interpreted as a classical convex risk measure within a suitable set-theoretic model. As a consequence, many properties of a conditional risk measure can be interpreted as basic properties of convex risk measures. This amounts to a method for interpreting a theorem of dual representation of convex risk measures as a new theorem of dual representation of conditional risk measures. As an instance of application, we establish a general robust representation theorem for conditional risk measures and study different particular cases of it.
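As background for this transfer principle, the classical dual representation being lifted can be stated as follows (the standard form due to Föllmer and Schied; the notation here is illustrative, not taken from the article):

```latex
% Robust representation of a convex risk measure \rho on L^\infty;
% \mathcal{M}_1 denotes the probability measures absolutely continuous w.r.t. P.
\rho(X) = \sup_{Q \in \mathcal{M}_1} \bigl( \mathbb{E}_Q[-X] - \alpha(Q) \bigr),
\qquad
\alpha(Q) = \sup_{X \in L^\infty} \bigl( \mathbb{E}_Q[-X] - \rho(X) \bigr),
```

where \alpha is the minimal penalty function. In the conditional setting, the supremum becomes an essential supremum and the expectation a conditional expectation; the transfer principle asserts that the conditional statement is the classical one, read inside a suitable Boolean-valued model.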

2011 ◽  
Vol 14 (01) ◽  
pp. 163-185 ◽  
Author(s):  
Marco Frittelli ◽  
Emanuela Rosazza Gianin

We discuss two issues concerning risk measures: we first point out an alternative interpretation of the penalty function in the dual representation of a risk measure; we then analyze the continuity properties of comonotone convex risk measures. In particular, due to the loss of convexity, local and global continuity are no longer equivalent, and many implications that hold for convex risk measures fail in this setting.
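For reference, the comonotonicity notions involved are standard (stated here for illustration): X and Y are comonotone when they move in the same direction state by state, and a risk measure is comonotone additive when it is additive on such pairs:

```latex
\bigl(X(\omega) - X(\omega')\bigr)\bigl(Y(\omega) - Y(\omega')\bigr) \ge 0
\quad \text{for a.e. pair } (\omega, \omega'),
\qquad
\rho(X + Y) = \rho(X) + \rho(Y) \quad \text{for comonotone } X, Y.
```

In the usual terminology, comonotone convexity requires convexity only along comonotone combinations, which is strictly weaker than full convexity.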


2014 ◽  
Vol 57 (8) ◽  
pp. 1753-1764 ◽  
Author(s):  
TieXin Guo ◽  
ShiEn Zhao ◽  
XiaoLin Zeng

2015 ◽  
Author(s):  
Γεώργιος Παπαγιάννης

The main aim of the present thesis is to investigate the effect of diverging priors concerning model uncertainty on decision making. One of the main issues in the thesis is to assess the effect of different notions of distance on the space of probability measures and their use as loss functionals in the process of identifying the best-suited model among a set of plausible priors. Another issue is that of addressing the problem of "inhomogeneous" sets of priors, i.e. sets of priors in which highly divergent opinions may occur, and the need to treat that case robustly. As high degrees of inhomogeneity may lead the decision maker to distrust the priors, it may be desirable to adopt a particular prior corresponding to the set which in some sense minimizes the "variability" among the models in the set. This leads to the notion of a Fréchet risk measure. Finally, an important problem is the actual calculation of robust risk measures. On account of their variational definition, the problem of calculation leads to the numerical treatment of problems of the calculus of variations, for which reliable and effective algorithms are proposed. The contributions of the thesis are presented in the following three chapters.

In Chapter 2, a statistical learning scheme is introduced for constructing the best model compatible with a set of priors provided by different information sources of varying reliability. As various priors may model well different aspects of the phenomenon, the proposed scheme is a variational scheme based on the minimization of a weighted loss function in the space of probability measures, which in certain cases is shown to be equivalent to weighted quantile averaging schemes. Therefore, in contrast to approaches such as minimax decision theory, in which a particular element of the prior set is chosen, we construct for each prior set a probability measure which is not necessarily an element of it, a fact that, as shown, may lead to a better description of the phenomenon in question. While treating this problem we also address the effect of the choice of distance functional in the space of measures on the problem of model selection. One of the key findings in this respect is that the class of Wasserstein distances seems to have the best performance compared to other distances such as the KL divergence.

In Chapter 3, motivated by the results of Chapter 2, we treat the problem of specifying the risk measure for a particular loss when a set of highly divergent priors concerning the distribution of the loss is available. Starting from the principle that "variability" of opinions is not welcome, a fact for which a strong axiomatic framework is provided (see e.g. Klibanoff (2005) and references therein), we introduce the concept of Fréchet risk measures, which correspond to minimal-variance risk measures. Here we view a set of priors as a discrete measure on the space of probability measures, and by variance we mean the variance of this discrete probability measure; this requires the concept of the Fréchet mean. By different metrizations of the space of probability measures we define a variety of Fréchet risk measures (the Wasserstein, the Hellinger, and the weighted entropic risk measures) and illustrate their use and performance via an example related to the static hedging of derivatives under model uncertainty.

In Chapter 4, we consider the problem of numerical calculation of convex risk measures, applying techniques from the calculus of variations. Regularization schemes are proposed, and the theoretical convergence of the algorithms is considered.
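The weighted quantile averaging mentioned above can be made concrete: for one-dimensional distributions, the 2-Wasserstein barycenter of a set of measures is the distribution whose quantile function is the weighted average of their quantile functions. A minimal sketch, assuming equal-size empirical samples for simplicity (the function name and this simplification are illustrative, not the thesis's implementation):

```python
def wasserstein2_barycenter(samples, weights):
    """Weighted quantile averaging: for one-dimensional distributions,
    the 2-Wasserstein barycenter has quantile function equal to the
    weighted average of the input quantile functions.  For equal-size
    empirical samples this reduces to averaging order statistics."""
    n = len(samples[0])
    assert all(len(s) == n for s in samples)
    assert abs(sum(weights) - 1.0) < 1e-12
    sorted_samples = [sorted(s) for s in samples]
    return [sum(w * s[i] for w, s in zip(weights, sorted_samples))
            for i in range(n)]

# Two "prior" models for the same loss, each given as an empirical sample.
model_a = [1.0, 2.0, 3.0, 4.0]
model_b = [3.0, 5.0, 7.0, 9.0]
print(wasserstein2_barycenter([model_a, model_b], [0.5, 0.5]))
# -> [2.0, 3.5, 5.0, 6.5]
```

Note that the barycenter is generally not one of the input models, matching the point made above about minimax-style selection.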


2022 ◽  
Author(s):  
Zachary J. Smith ◽  
J. Eric Bickel

In Weighted Scoring Rules and Convex Risk Measures, Dr. Zachary J. Smith and Prof. J. Eric Bickel (both at the University of Texas at Austin) present a general connection between weighted proper scoring rules and investment decisions involving the minimization of a convex risk measure. Weighted scoring rules are quantitative tools for evaluating the accuracy of probabilistic forecasts relative to a baseline distribution. In their paper, the authors demonstrate that the connection between convex risk measures and weighted scoring rules aligns closely with previous economic characterizations of weighted scores based on expected utility maximization. As illustrative examples, the authors study two families of weighted scoring rules based on phi-divergences (generalizations of the Weighted Power and Weighted Pseudospherical scoring rules), along with their corresponding risk measures. The paper will be of particular interest to the decision analysis and mathematical finance communities, as well as to those interested in the elicitation and evaluation of subjective probabilistic forecasts.
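To make scoring relative to a baseline concrete, here is a minimal sketch of one classical member of this general family: the logarithmic score taken relative to a baseline distribution, whose expectation under the forecast itself is the KL divergence. The function names are illustrative assumptions, and the paper's phi-divergence families are more general than this single case:

```python
import math

def relative_log_score(p, q, outcome):
    """Logarithmic score of forecast p relative to baseline q for the
    realized outcome: log p(outcome) - log q(outcome)."""
    return math.log(p[outcome]) - math.log(q[outcome])

def expected_relative_log_score(p, q):
    # The expected score under the forecast p itself equals KL(p || q),
    # which is nonnegative and zero only when p == q.
    return sum(p[i] * relative_log_score(p, q, i) for i in range(len(p)))

forecast = [0.7, 0.2, 0.1]
baseline = [1 / 3, 1 / 3, 1 / 3]
print(expected_relative_log_score(forecast, baseline))  # about 0.2968
```

A forecaster maximizing this expected score has no incentive to report anything other than their true belief, which is the propriety requirement underlying the families studied in the paper.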


2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Silvana M. Pesenti ◽  
Pietro Millossovich ◽  
Andreas Tsanakas

One of the key purposes of risk measures is to consistently rank and distinguish between different risk profiles. From a practical perspective, a risk measure should also be robust, that is, insensitive to small perturbations in input assumptions. It is known in the literature [14, 39] that strong assumptions on the risk measure's ability to distinguish between risks may lead to a lack of robustness. We address the trade-off between robustness and consistent risk ranking by specifying the regions in the space of distribution functions where law-invariant convex risk measures are indeed robust. Examples include the set of random variables with bounded second moment and those that are less volatile (in convex order) than random variables in a given uniformly integrable set. Typically, a risk measure is evaluated on the output of an aggregation function defined on a set of random input vectors. Extending the definition of robustness to this setting, we find that law-invariant convex risk measures are robust for any aggregation function that satisfies a linear growth condition in the tail, provided that the set of possible marginals is uniformly integrable. Thus, we obtain that all law-invariant convex risk measures possess the aggregation-robustness property introduced by [26] and further studied by [40]. This is in contrast to the widely used, non-convex risk measure Value-at-Risk, whose robustness in a risk aggregation context requires restricting the possible dependence structures of the input vectors.
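The contrast with Value-at-Risk can be illustrated with empirical estimators: moving a single observation across the quantile boundary can shift VaR by an arbitrary amount, while a tail-averaging estimator of Expected Shortfall (a law-invariant convex risk measure) is unchanged in the same example. A minimal sketch, with illustrative estimator choices:

```python
import math

def empirical_var(losses, alpha):
    """Empirical Value-at-Risk at level alpha: an order statistic."""
    s = sorted(losses)
    return s[math.ceil(alpha * len(s)) - 1]

def empirical_es(losses, alpha):
    """Average of the worst (1 - alpha) fraction of losses: one
    empirical estimator of Expected Shortfall."""
    s = sorted(losses)
    m = max(1, round((1 - alpha) * len(s)))
    return sum(s[-m:]) / m

base = [0.0] * 95 + [100.0] * 5
perturbed = [0.0] * 94 + [100.0] * 6  # a single observation moved

print(empirical_var(base, 0.95), empirical_var(perturbed, 0.95))  # 0.0 100.0
print(empirical_es(base, 0.95), empirical_es(perturbed, 0.95))    # 100.0 100.0
```

Perturbing one observation out of a hundred moves the 95% VaR from 0 to 100, while the tail average is unaffected; this discontinuity is one face of the robustness issue discussed above.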

