Spatial Risk Measures and Rate of Spatial Diversification

Risks ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 52 ◽  
Author(s):  
Erwan Koch

An accurate assessment of the risk of extreme environmental events is of great importance for populations, authorities and the banking/insurance/reinsurance industry. Koch (2017) introduced a notion of spatial risk measure and a corresponding set of axioms which are well suited to analyzing the risk due to events having a spatial extent, such as environmental phenomena. The axiom of asymptotic spatial homogeneity is of particular interest since it allows one to quantify the rate of spatial diversification when the region under consideration becomes large. In this paper, we first investigate the general concepts of spatial risk measures and the corresponding axioms further, and thoroughly explain the usefulness of this theory for both actuarial science and practice. Second, in the case of a general cost field, we give sufficient conditions such that spatial risk measures associated with expectation, variance, value-at-risk and expected shortfall, and induced by this cost field, satisfy the axioms of asymptotic spatial homogeneity of order 0, −2, −1 and −1, respectively. Finally, in the case where the cost field is a function of a max-stable random field, we provide conditions on both the function and the max-stable field ensuring the latter properties. Max-stable random fields are relevant when assessing the risk of extreme events since they appear as a natural extension of multivariate extreme-value theory to the level of random fields. Overall, this paper improves our understanding of spatial risk measures and their properties with respect to the space variable, and generalizes many results obtained in Koch (2017).
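The rates quoted above can be illustrated numerically. Below is a minimal Monte Carlo sketch, not Koch's construction: it assumes an i.i.d. Gaussian cost field on a 2D grid (all function names and parameters are hypothetical). The variance of the spatially averaged loss over a square region whose side is scaled by λ decays like λ⁻², while a tail measure such as expected shortfall deviates from the mean at rate λ⁻¹, matching the orders −2 and −1 cited in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_loss(lam, n_sims=20000, base=5):
    """Spatially averaged cost over a lam-scaled square region.

    The cost field is i.i.d. standard normal on a unit grid, a toy
    stand-in for the general cost fields considered in the paper."""
    side = base * lam                      # linear size of the scaled region
    costs = rng.standard_normal((n_sims, side * side))
    return costs.mean(axis=1)              # loss per unit area

def expected_shortfall(x, alpha=0.95):
    """Mean loss beyond the alpha-quantile."""
    q = np.quantile(x, alpha)
    return x[x >= q].mean()

for lam in (1, 2, 4):
    L = normalized_loss(lam)
    print(lam, L.var(), expected_shortfall(L))
```

Doubling λ quadruples the region's area, so the variance drops by roughly a factor of four (order −2 in λ) while the expected shortfall halves (order −1).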

Author(s):  
RENATO PELESSONI ◽  
PAOLO VICIG

In this paper, the theory of coherent imprecise previsions is applied to risk measurement. We introduce the notion of a coherent risk measure defined on an arbitrary set of risks, showing that it can be considered a special case of a coherent upper prevision. We also prove that our definition generalizes the notion of coherence for risk measures defined on a linear space of random numbers, given in the literature. Consistency properties of Value-at-Risk (VaR), currently one of the most widely used risk measures, are investigated too, showing that it does not necessarily satisfy a weaker notion of consistency called 'avoiding sure loss'. We introduce sufficient conditions for VaR to avoid sure loss and to be coherent. Finally, we discuss ways of modifying incoherent risk measures into coherent ones.
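The incoherence of VaR can be seen in a standard two-bond example (a textbook illustration, not taken from the paper): each defaultable bond alone has zero 95% VaR, yet the diversified portfolio of both has a strictly positive one, violating subadditivity.

```python
import numpy as np

def var(losses, probs, alpha=0.95):
    """Value-at-Risk as the alpha-quantile of a discrete loss distribution."""
    order = np.argsort(losses)
    losses, probs = np.asarray(losses)[order], np.asarray(probs)[order]
    cum = np.cumsum(probs)
    return losses[np.searchsorted(cum, alpha)]

# Two independent bonds: each loses 100 with probability 0.04, else nothing.
single = var([0, 100], [0.96, 0.04])              # -> 0 for each bond alone
joint_losses = [0, 100, 200]
joint_probs = [0.96**2, 2 * 0.96 * 0.04, 0.04**2]
portfolio = var(joint_losses, joint_probs)        # -> 100

print(single, portfolio)   # VaR(X) + VaR(Y) = 0 < VaR(X + Y) = 100
```

Since P(at least one default) = 1 − 0.96² ≈ 0.078 exceeds 5%, the portfolio's 95% quantile jumps to 100 even though each marginal quantile is zero.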


2020 ◽  
Vol 50 (2) ◽  
pp. 647-673
Author(s):  
Haiyan Liu

Abstract We study a weighted comonotonic risk-sharing problem among multiple agents with distortion risk measures under heterogeneous beliefs. The explicit forms of optimal allocations are obtained, which are Pareto-optimal. A necessary and sufficient condition is given to ensure the uniqueness of the optimal allocation, and sufficient conditions are given to obtain an optimal allocation of the form of excess of loss or full insurance. The optimal allocation may satisfy individual rationality depending on the choice of the weight. When the distortion risk measure is value at risk or tail value at risk, an optimal allocation is generally of the excess-of-loss form. The numerical examples suggest that a risk is more likely to be shared among agents with heterogeneous beliefs, and the introduction of the weight enables us to prioritize some agents as part of a group sharing a risk.
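The comonotone additivity that underlies excess-of-loss optimality can be checked numerically. The sketch below is a generic implementation of distortion risk measures on a hypothetical discrete loss, not the paper's model: it evaluates ρ_g(X) = ∫₀^∞ g(P(X > x)) dx and verifies that ρ_g(X) = ρ_g(min(X, d)) + ρ_g((X − d)⁺) for a TVaR distortion, since the retained and ceded layers are comonotone.

```python
import numpy as np

def distortion_rm(losses, probs, g):
    """rho_g(X) = integral of g(P(X > x)) dx for a nonnegative discrete loss."""
    order = np.argsort(losses)
    x = np.asarray(losses, float)[order]
    p = np.asarray(probs, float)[order]
    tail = p[::-1].cumsum()[::-1]          # tail[i] = P(X >= x[i])
    prev = np.concatenate(([0.0], x[:-1]))
    return float(np.sum(g(tail) * (x - prev)))

# TVaR at level 0.9 as a distortion measure: g(u) = min(u / (1 - alpha), 1).
g_tvar = lambda u: np.minimum(u / 0.1, 1.0)

losses = np.array([0.0, 40.0, 100.0])      # hypothetical loss distribution
probs = np.array([0.70, 0.25, 0.05])
d = 40.0                                   # retention level

whole = distortion_rm(losses, probs, g_tvar)
retained = distortion_rm(np.minimum(losses, d), probs, g_tvar)
ceded = distortion_rm(np.maximum(losses - d, 0.0), probs, g_tvar)
print(whole, retained + ceded)   # comonotone additivity: both equal 70.0
```

Splitting the risk at the retention d leaves the distortion risk measure unchanged in total, which is why sharing rules of excess-of-loss form arise naturally for this class of measures.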


Author(s):  
Nicole Bäuerle ◽  
Alexander Glauner

Abstract We study the minimization of a spectral risk measure of the total discounted cost generated by a Markov Decision Process (MDP) over a finite or infinite planning horizon. The MDP is assumed to have Borel state and action spaces and the cost function may be unbounded above. The optimization problem is split into two minimization problems using an infimum representation for spectral risk measures. We show that the inner minimization problem can be solved as an ordinary MDP on an extended state space and give sufficient conditions under which an optimal policy exists. Regarding the infinite dimensional outer minimization problem, we prove the existence of a solution and derive an algorithm for its numerical approximation. Our results include the findings in Bäuerle and Ott (Math Methods Oper Res 74(3):361–379, 2011) in the special case that the risk measure is Expected Shortfall. As an application, we present a dynamic extension of the classical static optimal reinsurance problem, where an insurance company minimizes its cost of capital.
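For Expected Shortfall, the infimum representation exploited here is the familiar Rockafellar–Uryasev form ES_α(X) = inf_t { t + E[(X − t)⁺]/(1 − α) }. The sketch below is an illustration on an empirical sample, not the paper's MDP algorithm: it computes a spectral risk measure as a φ-weighted integral of quantiles and confirms that the flat-tail spectrum reproduces the infimum value.

```python
import numpy as np

def spectral_rm(losses, phi):
    """Empirical spectral risk measure: sum of phi(u_i) * x_(i) / n over the
    order statistics, approximating the integral of phi(u) * q_u(X) du."""
    x = np.sort(np.asarray(losses, float))
    n = len(x)
    u = (np.arange(n) + 0.5) / n           # midpoint of each probability slice
    return float(np.sum(phi(u) * x) / n)

def es_infimum(losses, alpha=0.95):
    """ES via its infimum representation; the infimum is attained at a quantile."""
    x = np.asarray(losses, float)
    ts = np.sort(x)
    vals = ts + np.maximum(x[None, :] - ts[:, None], 0.0).mean(axis=1) / (1 - alpha)
    return float(vals.min())

alpha = 0.95
flat_tail = lambda u: np.where(u >= alpha, 1.0 / (1.0 - alpha), 0.0)
sample = np.arange(100.0)
print(spectral_rm(sample, flat_tail), es_infimum(sample))   # both 97.0
```

The extra scalar t in the infimum is exactly the quantity the authors append to the state space to turn the inner problem into an ordinary MDP.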


2021 ◽  
Vol 14 (5) ◽  
pp. 201
Author(s):  
Yuan Hu ◽  
W. Brent Lindquist ◽  
Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and the risk-measures: maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than that required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
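The comparison criteria in this abstract are all computable from a return series. Below is a minimal sketch of the four risk and performance statistics; the formulas are generic, the Rachev ratio follows one common convention (expected upper-tail gain over expected lower-tail loss), and the tail levels are illustrative rather than the paper's settings.

```python
import numpy as np

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative wealth curve."""
    wealth = np.cumprod(1.0 + np.asarray(returns, float))
    peak = np.maximum.accumulate(wealth)
    return float(np.max(1.0 - wealth / peak))

def sharpe(returns, rf=0.0):
    """Mean excess return per unit of total volatility."""
    r = np.asarray(returns, float) - rf
    return float(r.mean() / r.std(ddof=1))

def sortino(returns, rf=0.0):
    """Mean excess return per unit of downside deviation only."""
    r = np.asarray(returns, float) - rf
    downside = np.sqrt(np.mean(np.minimum(r, 0.0) ** 2))
    return float(r.mean() / downside)

def rachev(returns, alpha=0.95, beta=0.95):
    """Expected gain in the best (1-alpha) tail over expected loss in the
    worst (1-beta) tail; one common convention among several."""
    r = np.sort(np.asarray(returns, float))
    k = max(1, int(np.ceil((1 - alpha) * len(r))))
    m = max(1, int(np.ceil((1 - beta) * len(r))))
    return float(r[-k:].mean() / -r[:m].mean())

daily = [0.01, -0.02, 0.03, -0.01, 0.02]   # hypothetical return series
print(max_drawdown(daily), sharpe(daily), sortino(daily), rachev(daily))
```

A Rachev ratio above one indicates that the favorable tail outweighs the unfavorable one at the chosen levels, which is the sense in which the optimized portfolios are judged against the benchmarks.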


2009 ◽  
Vol 39 (2) ◽  
pp. 591-613 ◽  
Author(s):  
Andreas Kull

Abstract We revisit the relative retention problem originally introduced by de Finetti, using concepts recently developed in risk theory and quantitative risk management. Instead of using the variance as a risk measure, we consider the expected shortfall (tail value-at-risk), include capital costs, and take constraints on risk capital into account. Starting from a risk-based capital allocation, the paper presents an optimization scheme for sharing risk in a multi-risk-class environment. Risk sharing takes place between two portfolios, and the pricing of risk transfer reflects both portfolio structures. This allows us to shed more light on the question of how optimal risk sharing is characterized in a situation where risk transfer takes place between parties employing similar risk and performance measures. Recent developments in the regulatory domain (‘risk-based supervision’) pushing for common, insurance-industry-wide risk measures underline the importance of this question. The paper includes a simple non-life insurance example illustrating optimal risk transfer in terms of retentions of common reinsurance structures.
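A stripped-down version of this kind of optimization can be sketched by scanning retentions of an excess-of-loss treaty. Everything below is hypothetical rather than the paper's calibration: the compound-Poisson loss model, the 20% reinsurance loading and the 6% cost-of-capital rate are illustrative assumptions. The objective charges the expected retained loss, the loaded reinsurance premium, and a capital cost on the Expected Shortfall of the retained book.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_shortfall(x, alpha=0.99):
    """Mean loss beyond the alpha-quantile."""
    q = np.quantile(x, alpha)
    return x[x >= q].mean()

def aggregate_losses(n_sims=20000, freq=3.0, mu=0.0, sigma=1.0):
    """Hypothetical compound-Poisson aggregate loss with lognormal severities."""
    counts = rng.poisson(freq, n_sims)
    return np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])

def total_cost(losses, d, loading=0.2, capital_cost=0.06, alpha=0.99):
    retained = np.minimum(losses, d)            # insurer keeps losses up to d
    ceded = losses - retained                   # reinsurer pays the excess
    premium = (1.0 + loading) * ceded.mean()    # loaded reinsurance premium
    capital = expected_shortfall(retained, alpha) - retained.mean()
    return retained.mean() + premium + capital_cost * capital

losses = aggregate_losses()
grid = np.linspace(1.0, 30.0, 60)
costs = [total_cost(losses, d) for d in grid]
best = grid[int(np.argmin(costs))]
print(best)
```

Low retentions buy expensive reinsurance, high retentions tie up risk capital; the optimal retention balances the two, which is the trade-off the paper formalizes with ES-based capital.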


2021 ◽  
Vol 14 (11) ◽  
pp. 540
Author(s):  
Eyden Samunderu ◽  
Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of using normal distribution assumptions alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper analyses the statistical properties of various risk measures under non-normal distributions and provides a financial blueprint for how to manage risk. Relying on the old assumption of normality alone is inaccurate and has led to models that do not give accurate risk measures. Our empirical design first examined the use of returns in measuring risk and assessed the current financial environment. As an alternative to conventional measures, the paper employs a mosaic of risk techniques, reflecting the fact that there is no single universal risk measure. The next step examined the risk proxy measures currently adopted, such as the Gaussian-based value at risk (VaR). Furthermore, the authors analysed multiple alternative approaches that do not rely on the normality assumption, such as other variations of VaR, as well as econometric models that can be used in risk measurement and forecasting. VaR is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio; arguably, it represents the most important tool for evaluating market risk as one of the several threats to the global financial system. Following an extensive literature review, a data set composed of three main asset classes (bonds, equities and hedge funds) was analysed. The first step was to determine the extent to which returns are not normally distributed. After testing the hypothesis, it was found that the majority of returns are not normally distributed, instead exhibiting skewness and kurtosis greater or less than three.
The study then applied various VaR methods to measure risk in order to determine the most efficient ones. Stressed value at risk was computed over different timelines, showing that the volatility of asset returns was higher during periods of crisis. Subsequent steps examined the relationships among the variables, with correlation tests and time series analysis leading to forecasts of the returns. It was noted that these methods could not be used in isolation. We therefore adopted a mosaic of all the VaR methods, which included studying the behaviour of the assets and their relation to each other. We also examined the environment as a whole and then applied forecasting models to value returns accurately; this gave a much more accurate and relevant risk measure than the initial assumption of normality.
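The gap between Gaussian and distribution-aware VaR can be sketched directly. The code below is a generic illustration, not the paper's data or models: it compares normal, historical, and Cornish-Fisher VaR, where the Cornish-Fisher expansion adjusts the normal quantile for the sample's skewness and kurtosis.

```python
import numpy as np
from statistics import NormalDist

def moments(x):
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    z = (x - m) / s
    return m, s, float((z**3).mean()), float((z**4).mean())  # mean, sd, skew, kurt

def var_normal(x, alpha=0.95):
    """VaR under a fitted normal, reported as a positive loss number."""
    m, s, *_ = moments(x)
    return s * NormalDist().inv_cdf(alpha) - m

def var_historical(x, alpha=0.95):
    """VaR from the empirical lower-tail quantile."""
    return -float(np.quantile(x, 1 - alpha))

def var_cornish_fisher(x, alpha=0.95):
    """VaR from the Cornish-Fisher quantile expansion (skew/kurtosis adjusted)."""
    m, s, skew, kurt = moments(x)
    z = NormalDist().inv_cdf(1 - alpha)          # lower-tail normal quantile
    z_cf = (z + (z**2 - 1) * skew / 6
              + (z**3 - 3 * z) * (kurt - 3) / 24
              - (2 * z**3 - 5 * z) * skew**2 / 36)
    return -(m + s * z_cf)
```

For exactly Gaussian data the three measures coincide; skewness and kurtosis greater or less than three drive them apart, which is the paper's central point.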


2012 ◽  
Vol 3 (1) ◽  
pp. 150-157 ◽  
Author(s):  
Suresh Andrew Sethi ◽  
Mike Dalton

Abstract Traditional measures that quantify variation in natural resource systems include both upside and downside deviations as contributing to variability, such as standard deviation or the coefficient of variation. Here we introduce three risk measures from investment theory, which quantify variability in natural resource systems by analyzing either upside or downside outcomes and typical or extreme outcomes separately: semideviation, conditional value-at-risk, and probability of ruin. Risk measures can be custom tailored to frame variability as a performance measure in terms directly meaningful to specific management objectives, such as presenting risk as harvest expected in an extreme bad year, or by characterizing risk as the probability of fishery escapement falling below a prescribed threshold. In this paper, we present formulae, empirical examples from commercial fisheries, and R code to calculate three risk measures. In addition, we evaluated risk measure performance with simulated data, and we found that risk measures can provide unbiased estimates at small sample sizes. By decomposing complex variability into quantitative metrics, we envision risk measures to be useful across a range of wildlife management scenarios, including policy decision analyses, comparative analyses across systems, and tracking the state of natural resource systems through time.
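A Python counterpart to the R code described above might look as follows; this is a generic sketch under the definitions as summarized in the abstract, and the tail level and the example harvest series are hypothetical.

```python
import numpy as np

def semideviation(x):
    """Downside semideviation: root mean square of shortfalls below the mean."""
    x = np.asarray(x, float)
    return float(np.sqrt(np.mean(np.minimum(x - x.mean(), 0.0) ** 2)))

def cvar(x, alpha=0.1):
    """Mean outcome over the worst alpha fraction of years (lower tail)."""
    x = np.sort(np.asarray(x, float))
    k = max(1, int(np.ceil(alpha * len(x))))
    return float(x[:k].mean())

def prob_of_ruin(x, threshold):
    """Fraction of outcomes falling below a prescribed threshold."""
    x = np.asarray(x, float)
    return float(np.mean(x < threshold))

harvest = [10, 12, 8, 15, 3, 11, 9, 14, 7, 13]   # hypothetical annual harvests
print(semideviation(harvest), cvar(harvest), prob_of_ruin(harvest, 8))
```

Each function frames variability in management terms: the harvest expected in an extreme bad year (cvar), the chance of falling below an escapement threshold (prob_of_ruin), and downside-only spread (semideviation).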


2019 ◽  
Vol 12 (4) ◽  
pp. 159 ◽  
Author(s):  
Yuyang Cheng ◽  
Marcos Escobar-Anel ◽  
Zhenxian Gong

This paper proposes and investigates a multivariate 4/2 Factor Model. The name 4/2 comes from the superposition of a CIR term and a 3/2-model component. Our model goes multidimensional along the lines of a principal component and factor covariance decomposition. We find conditions for well-defined changes of measure and we also find two key characteristic functions in closed-form, which help with pricing and risk measure calculations. In a numerical example, we demonstrate the significant impact of the newly added 3/2 component (parameter b) and the common factor (a), both with respect to changes on the implied volatility surface (up to 100%) and on two risk measures: value at risk and expected shortfall where an increase of up to 29% was detected.
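To see why the 3/2 component matters for tail risk, one can simulate the model and read off the two risk measures. The following is a rough one-factor Euler sketch with illustrative parameters; it is not the authors' closed-form characteristic-function approach, which is the point of their paper. The instantaneous volatility a·sqrt(V) + b/sqrt(V) superposes the CIR (Heston) and 3/2 components.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_42_log_returns(a=0.8, b=0.1, kappa=2.0, theta=0.04, xi=0.3,
                            v0=0.04, T=1.0, steps=100, n_paths=10000):
    """Euler scheme for a one-factor 4/2 model: the asset's volatility is
    a*sqrt(V) + b/sqrt(V), with V a CIR process (full truncation)."""
    dt = T / steps
    v = np.full(n_paths, v0)
    logret = np.zeros(n_paths)
    for _ in range(steps):
        vp = np.maximum(v, 1e-8)                       # keep V positive
        vol = a * np.sqrt(vp) + b / np.sqrt(vp)        # 4/2 volatility
        z1, z2 = rng.standard_normal((2, n_paths))
        logret += -0.5 * vol**2 * dt + vol * np.sqrt(dt) * z1
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return logret

losses = -simulate_42_log_returns()
var95 = np.quantile(losses, 0.95)
es95 = losses[losses >= var95].mean()
print(var95, es95)
```

In this sketch, increasing b inflates volatility when V is low, pushing both VaR and ES upward, qualitatively mirroring the sensitivity the authors report.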


2006 ◽  
Vol 36 (2) ◽  
pp. 375-413
Author(s):  
Gary G. Venter ◽  
John A. Major ◽  
Rodney E. Kreps

The marginal approach to risk and return analysis compares the marginal return from a business decision to the marginal risk imposed. Allocation distributes the total company risk to business units and compares the profit/risk ratio of the units. These approaches coincide when the allocation actually assigns the marginal risk to each business unit, i.e., when the marginal impacts add up to the total risk measure. This is possible for one class of risk measures (scalable measures) under the assumption of homogeneous growth and by a subclass (transformed probability measures) otherwise. For homogeneous growth, the allocation of scalable measures can be accomplished by the directional derivative. The first well known additive marginal allocations were the Myers-Read method from Myers and Read (2001) and co-Tail Value at Risk, discussed in Tasche (2000). Now we see that there are many others, which allows the choice of risk measure to be based on economic meaning rather than the availability of an allocation method. We prefer the term “decomposition” to “allocation” here because of the use of the method of co-measures, which quantifies the component composition of a risk measure rather than allocating it proportionally to something. Risk-adjusted profitability calculations that do not rely on capital allocation still may involve decomposition of risk measures. Such a case is discussed. Calculation issues for directional derivatives are also explored.
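A concrete instance of such a decomposition is co-Tail Value at Risk, where unit i receives E[X_i | S ≥ VaR_α(S)] and the components add up to TVaR_α(S) by construction. The sketch below is a sample-based illustration with a hypothetical three-unit book, not the paper's notation, and checks this additivity.

```python
import numpy as np

rng = np.random.default_rng(3)

def co_tvar_decomposition(unit_losses, alpha=0.99):
    """Additive decomposition of total TVaR into unit co-TVaRs
    E[X_i | S >= VaR_alpha(S)]; the parts sum to TVaR_alpha(S)."""
    X = np.asarray(unit_losses, float)       # shape (units, sims)
    S = X.sum(axis=0)
    tail = S >= np.quantile(S, alpha)        # scenarios in the company's tail
    return X[:, tail].mean(axis=1), S[tail].mean()

# Hypothetical book: three lognormal units driven by one common shock.
n = 100_000
common = rng.standard_normal(n)
X = np.vstack([np.exp(0.6 * common + 0.8 * rng.standard_normal(n))
               for _ in range(3)])
parts, total_tvar = co_tvar_decomposition(X)
print(parts.sum(), total_tvar)    # equal: the decomposition is additive
```

Because each unit's share is its conditional contribution to the same tail event, additivity holds exactly, with no proportional scaling needed.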


2020 ◽  
Vol 23 (03) ◽  
pp. 2050017
Author(s):  
YANHONG CHEN ◽  
YIJUN HU

In this paper, we study how to evaluate the risk of a financial portfolio whose components may be dependent and come from different markets or involve more than one currency, while also taking into consideration uncertainty about the time value of money. Namely, we introduce a new class of risk measures, named set-valued dynamic risk measures, for bounded discrete-time processes that are adapted to a given filtration. The time horizon can be finite or infinite. We investigate representation results for them by making full use of Legendre–Fenchel conjugation theory for set-valued functions. Finally, some examples, such as the set-valued dynamic average value at risk and the entropic risk measure for bounded discrete-time processes, are also given.
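For the scalar (non-set-valued) special case, the dynamic entropic risk measure admits a simple backward recursion, ρ_t = c_t + (1/γ) log E[exp(γ ρ_{t+1}) | F_t]. The sketch below evaluates it on a hypothetical binomial tree of per-period losses; it is a toy illustration of the dynamic, time-consistent structure, far simpler than the paper's set-valued, multi-currency setting.

```python
import numpy as np

def dynamic_entropic(tree_losses, gamma=1.0, p=0.5):
    """Backward recursion for the dynamic entropic risk measure on a
    recombining binomial tree. tree_losses[t] holds the t+1 per-period
    losses at the nodes of period t; terminal risk is zero."""
    rho = np.zeros(len(tree_losses[-1]) + 1)
    for losses in reversed(tree_losses):
        up, down = rho[1:], rho[:-1]                 # successor-node values
        cont = np.log(p * np.exp(gamma * up)
                      + (1 - p) * np.exp(gamma * down)) / gamma
        rho = np.asarray(losses, float) + cont       # certainty equivalent + cost
    return float(rho[0])

# Deterministic unit losses: the dynamic risk is just the total cost, 2.
print(dynamic_entropic([[1.0], [1.0, 1.0]]))
# A risky second period: the entropic value exceeds the expected loss of 1.
print(dynamic_entropic([[0.0], [0.0, 2.0]]))
```

Because the same recursion is applied at every node, the resulting risk assessment is time-consistent: a position judged acceptable tomorrow in every state is acceptable today.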

