WEIGHTED COMONOTONIC RISK SHARING UNDER HETEROGENEOUS BELIEFS

2020 ◽  
Vol 50 (2) ◽  
pp. 647-673
Author(s):  
Haiyan Liu

Abstract: We study a weighted comonotonic risk-sharing problem among multiple agents with distortion risk measures under heterogeneous beliefs. The explicit forms of optimal allocations are obtained, which are Pareto-optimal. A necessary and sufficient condition is given to ensure the uniqueness of the optimal allocation, and sufficient conditions are given to obtain an optimal allocation of the form of excess of loss or full insurance. The optimal allocation may satisfy individual rationality depending on the choice of the weight. When the distortion risk measure is value at risk or tail value at risk, an optimal allocation is generally of the excess-of-loss form. The numerical examples suggest that a risk is more likely to be shared among agents with heterogeneous beliefs, and the introduction of the weight enables us to prioritize some agents as part of a group sharing a risk.

2009 ◽  
Vol 39 (2) ◽  
pp. 591-613 ◽  
Author(s):  
Andreas Kull

Abstract: We revisit the relative retention problem originally introduced by de Finetti, using concepts recently developed in risk theory and quantitative risk management. Instead of using the variance as a risk measure, we consider the Expected Shortfall (Tail-Value-at-Risk), include capital costs, and take constraints on risk capital into account. Starting from a risk-based capital allocation, the paper presents an optimization scheme for sharing risk in a multi-risk-class environment. Risk sharing takes place between two portfolios, and the pricing of risk transfer reflects both portfolio structures. This allows us to shed more light on the question of how optimal risk sharing is characterized in a situation where risk transfer takes place between parties employing similar risk and performance measures. Recent developments in the regulatory domain ('risk-based supervision') pushing for common, insurance-industry-wide risk measures underline the importance of this question. The paper includes a simple non-life insurance example illustrating optimal risk transfer in terms of retentions of common reinsurance structures.


2021 ◽  
Vol 14 (11) ◽  
pp. 540
Author(s):  
Eyden Samunderu ◽  
Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of relying on normal-distribution assumptions alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper analyses the statistical properties of various risk measures under non-normal distributions and provides a financial blueprint for managing risk. We argue that relying on the normality assumption alone is inaccurate and has led to the use of models that do not give accurate risk measures. Our empirical design first examined an overview of the use of returns in measuring risk and an assessment of the current financial environment. As an alternative to conventional measures, our paper employs a mosaic of risk techniques, reflecting the fact that there is no one universal risk measure. The next step involved looking at the current risk proxy measures adopted, such as the Gaussian-based value at risk (VaR) measure. Furthermore, the authors analysed multiple alternative approaches that do not rely on the normality assumption, such as other variations of VaR, as well as econometric models that can be used in risk measurement and forecasting. VaR is a widely used measure of financial risk which provides a way of quantifying and managing the risk of a portfolio; arguably, it represents the most important tool for evaluating market risk, one of the several threats to the global financial system. After an extensive literature review, a data set composed of three main asset classes was assembled: bonds, equities and hedge funds. The first step was to determine to what extent returns are not normally distributed. After testing this hypothesis, it was found that the majority of returns are not normally distributed, instead exhibiting skewness and kurtosis greater or less than three.
The study then applied various VaR methods to measure risk in order to determine the most efficient ones. Different timelines were used to compute stressed value at risk, and it was seen that during periods of crisis the volatility of asset returns was higher. The steps that followed examined the relationships among the variables; correlation tests and time series analyses were conducted, leading to forecasts of the returns. It was noted that these methods could not be used in isolation. We therefore adopted a mosaic of all the VaR methods, which included studying the behaviour of the assets and their relations to each other. Furthermore, we examined the environment as a whole and then applied forecasting models to value returns accurately; this gave a much more accurate and relevant risk measure than the initial normality assumption.
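
The study's first empirical step, checking whether returns are Gaussian before trusting a Gaussian VaR, can be sketched as follows. The two-regime return series is an illustrative stand-in for the bond/equity/hedge-fund data, and the function names are the sketch's own:

```python
import math
import random

Z = {0.95: 1.6449, 0.99: 2.3263}  # standard normal quantiles

def sample_moments(returns):
    """Mean, standard deviation, skewness and kurtosis of a return series."""
    n = len(returns)
    mu = sum(returns) / n
    var = sum((r - mu) ** 2 for r in returns) / n
    sd = math.sqrt(var)
    skew = sum((r - mu) ** 3 for r in returns) / (n * sd ** 3)
    kurt = sum((r - mu) ** 4 for r in returns) / (n * var ** 2)
    return mu, sd, skew, kurt

def historical_var(returns, level):
    """Historical VaR: empirical quantile of the loss distribution."""
    losses = sorted(-r for r in returns)
    return losses[min(int(level * len(losses)), len(losses) - 1)]

def gaussian_var(returns, level):
    """Variance-covariance (Gaussian) VaR: -mu + z * sigma."""
    mu, sd, _, _ = sample_moments(returns)
    return -mu + Z[level] * sd

random.seed(0)
# Fat-tailed toy returns: a two-regime normal mixture (calm 90% / volatile 10%)
rets = [random.gauss(0.0, 0.01 if random.random() < 0.9 else 0.04)
        for _ in range(100_000)]
_, _, skew, kurt = sample_moments(rets)
print(f"skewness={skew:.2f}, kurtosis={kurt:.2f}  (Gaussian: 0 and 3)")
for level in (0.95, 0.99):
    print(f"{level:.0%}: historical VaR={historical_var(rets, level):.4f}, "
          f"Gaussian VaR={gaussian_var(rets, level):.4f}")
```

On this mixture the kurtosis is far above three, and the Gaussian VaR understates the historical VaR at the 99% level, which is precisely the failure mode the paper attributes to the normality assumption.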


2018 ◽  
Vol 21 (03) ◽  
pp. 1850010 ◽  
Author(s):  
LAKSHITHE WAGALATH ◽  
JORGE P. ZUBELLI

This paper proposes an intuitive and flexible framework to quantify liquidation risk for financial institutions. We develop a model where the "fundamental" dynamics of assets are modified by price impacts from fund liquidations. We characterize mathematically the liquidation schedule of financial institutions and study in detail the fire sales resulting endogenously from margin constraints when a financial institution trades through an exchange. Our study enables us to obtain tractable formulas for the value at risk and expected shortfall of a financial institution in the presence of fund liquidation. In particular, we find an additive decomposition for liquidation-adjusted risk measures: such a measure can be expressed as a "fundamental" risk measure plus a liquidation risk adjustment that is proportional to the size of fund positions as a fraction of asset market depths. Our results can be used by risk managers in financial institutions to better tackle liquidity events arising from fund liquidations and to adjust their portfolio allocations to liquidation risk more accurately.
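
The additive decomposition can be sketched in a few lines; the function name, the linear price-impact form and the `impact_coeff` scale are illustrative assumptions, not the paper's exact formulas:

```python
def liquidation_adjusted_var(fundamental_var, positions, depths, impact_coeff):
    """Sketch of the additive decomposition: a 'fundamental' risk measure plus a
    liquidation adjustment proportional to fund positions as fractions of asset
    market depths. The linear form and impact_coeff are illustrative assumptions."""
    adjustment = impact_coeff * sum(p / d for p, d in zip(positions, depths))
    return fundamental_var + adjustment

# A fund holding 10 and 20 units in markets with depths of 1,000 and 2,000 units
print(liquidation_adjusted_var(100.0, [10, 20], [1000, 2000], impact_coeff=50.0))  # → 101.0
```

The point of the decomposition is that the adjustment term depends only on position sizes relative to market depths, so it can be monitored separately from the fundamental risk measure.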


2021 ◽  
Vol 17 (3) ◽  
pp. 370-380
Author(s):  
Ervin Indarwati ◽  
Rosita Kusumawati

Portfolio risk measures the deviation of portfolio returns from their expected value. Value at Risk (VaR) is one method for determining the maximum loss of a portfolio or an asset at a given probability and over a given time horizon. There are three methods to estimate VaR: variance-covariance, historical simulation, and Monte Carlo simulation. One disadvantage of VaR is that it is not coherent, because it lacks the sub-additivity property. Conditional Value at Risk (CVaR) is a coherent risk measure: it is sub-additive, meaning the loss on a portfolio is smaller than or equal to the sum of the losses on the individual assets, and it provides information about losses beyond the VaR level. This study estimates portfolio risk via CVaR using Monte Carlo simulation, applied to PT. Bank Negara Indonesia (Persero) Tbk (BBNI.JK) and PT. Bank Tabungan Negara (Persero) Tbk (BBTN.JK). The daily closing prices of BBNI and BBTN shares from 6 January 2019 to 30 December 2019 are used to measure the CVaR of the two banks' stock portfolio with this Monte Carlo simulation. The steps are: determining asset returns, testing the returns for normality, estimating the risk measures of the normally distributed asset returns that form the portfolio, simulating asset returns with Monte Carlo, calculating portfolio weights, computing portfolio returns, taking the relevant quantile of the portfolio return as the VaR, and calculating the average loss beyond the VaR as the CVaR. The estimated portfolio CVaR for PT. Bank Negara Indonesia (Persero) Tbk and PT. Bank Tabungan Negara (Persero) Tbk at confidence levels of 90%, 95%, and 99% is 5.82%, 6.39%, and 7.1%, with standard errors of 0.58%, 0.59%, and 0.59%.
If the initial funds invested in this portfolio are illustrated as Rp 100,000,000, this can be interpreted to mean that the maximum risk investors will bear in the future will not exceed Rp 5,820,000, Rp 6,390,000 and Rp 7,100,000 at the 90%, 95%, and 99% confidence levels, respectively.
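
The simulation pipeline described above (simulate asset returns, weight them into a portfolio, read VaR off a quantile, average the tail losses for CVaR) can be sketched as follows; the means, volatilities and correlation are illustrative placeholders, not the estimates for BBNI/BBTN:

```python
import math
import random

random.seed(42)

# Illustrative parameters, not the fitted values for BBNI.JK/BBTN.JK
mu = [0.0005, 0.0004]     # daily mean returns
sigma = [0.015, 0.018]    # daily volatilities
rho = 0.6                 # return correlation
w = [0.5, 0.5]            # portfolio weights

port_returns = []
for _ in range(100_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    e2 = rho * z1 + math.sqrt(1 - rho ** 2) * z2   # correlate drivers (2-d Cholesky step)
    r1 = mu[0] + sigma[0] * z1
    r2 = mu[1] + sigma[1] * e2
    port_returns.append(w[0] * r1 + w[1] * r2)

def var_cvar(returns, level):
    """VaR = quantile of the loss distribution; CVaR = mean loss beyond the VaR."""
    losses = sorted(-r for r in returns)
    k = int(level * len(losses))
    tail = losses[k:]
    return losses[k], sum(tail) / len(tail)

for level in (0.90, 0.95, 0.99):
    var, cvar = var_cvar(port_returns, level)
    print(f"{level:.0%}: VaR={var:.4f}, CVaR={cvar:.4f}")
```

By construction CVaR exceeds VaR at every confidence level, which is the "loss information above the maximum loss" the abstract refers to.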


Author(s):  
Khreshna Syuhada

In the financial and insurance industries, risks may come from several sources. It is therefore important to predict future risk using the concept of aggregate risk. Risk measure prediction plays an important role in allocating capital as well as in controlling (and avoiding) worse risk. In this paper, we consider several risk measures: Value-at-Risk (VaR), Tail VaR (TVaR) and its extension, the Adjusted TVaR (Adj-TVaR). Specifically, we derive an upper bound for such risk measures applied to aggregate risk models. The concepts and properties of comonotonicity and convex order are utilized to obtain this upper bound.
Keywords: coherent property, comonotonic random variables, convex order, tail property, Value-at-Risk (VaR).
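
The comonotonic upper bound can be illustrated empirically: TVaR preserves convex order and is additive for comonotonic risks, so the TVaR of an aggregate is bounded above by the sum of the marginal TVaRs. A sketch with two exponential risks (illustrative marginals, not from the paper):

```python
import random

def tvar(sample, level=0.95):
    """Empirical Tail Value-at-Risk: mean of the losses beyond the VaR quantile."""
    losses = sorted(sample)
    tail = losses[int(level * len(losses)):]
    return sum(tail) / len(tail)

random.seed(1)
n = 100_000
x = [random.expovariate(1.0) for _ in range(n)]   # risk X, mean 1
y = [random.expovariate(0.5) for _ in range(n)]   # risk Y, mean 2

s_indep = [a + b for a, b in zip(x, y)]                   # aggregate, independent coupling
s_comon = [a + b for a, b in zip(sorted(x), sorted(y))]   # aggregate, comonotonic coupling

print(f"TVaR(X + Y), independent : {tvar(s_indep):.3f}")
print(f"TVaR(X + Y), comonotonic : {tvar(s_comon):.3f}")
print(f"TVaR(X) + TVaR(Y)        : {tvar(x) + tvar(y):.3f}")
```

The comonotonic aggregate attains the bound exactly (TVaR is comonotonic-additive), while the independent aggregate stays strictly below it, illustrating the convex-order argument.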


2018 ◽  
Vol 15 (4) ◽  
pp. 17-34 ◽  
Author(s):  
Tom Burdorf ◽  
Gary van Vuuren

As a risk measure, Value at Risk (VaR) is neither sub-additive nor coherent. These drawbacks have coerced regulatory authorities to introduce and mandate Expected Shortfall (ES) as a mainstream regulatory risk management metric. VaR is, however, still needed to estimate the tail conditional expectation (the ES): the average of the losses that are greater than the VaR at a given significance level. These two risk measures behave quite differently during growth and recession periods in developed and emerging economies. Using equity portfolios assembled from securities of the banking and retail sectors in the UK and South Africa, the historical, variance-covariance and Monte Carlo approaches are used to determine VaR (and hence ES). The results are back-tested and compared, and normality assumptions are tested. Key findings are that the results of the variance-covariance and Monte Carlo approaches are more consistent across all environments than the historical outcomes, regardless of the equity portfolio considered. The industries and periods analysed influenced the accuracy of the risk measures; the different economies did not.


2020 ◽  
Author(s):  
Denisa Banulescu-Radu ◽  
Christophe Hurlin ◽  
Jérémy Leymarie ◽  
Olivier Scaillet

This paper proposes an original approach for backtesting systemic risk measures. This backtesting approach makes it possible to assess the systemic risk measure forecasts used to identify the financial institutions that contribute the most to the overall risk in the financial system. Our procedure is based on simple tests similar to those generally used to backtest standard market risk measures such as value-at-risk or expected shortfall. We introduce a concept of violation associated with the marginal expected shortfall (MES), and we define unconditional coverage and independence tests for these violations. We can generalize these tests to any MES-based systemic risk measure such as the systemic expected shortfall (SES), the systemic risk measure (SRISK), or the delta conditional value-at-risk (ΔCoVaR). We study their asymptotic properties in the presence of estimation risk and investigate their finite-sample performance via Monte Carlo simulations. An empirical application to a panel of U.S. financial institutions is conducted to assess the validity of MES, SRISK, and ΔCoVaR forecasts issued from a bivariate GARCH model with a dynamic conditional correlation structure. Our results show that this model provides valid forecasts for MES and SRISK when considering a medium-term horizon. Finally, we propose an early warning system indicator for future systemic crises deduced from these backtests. Our indicator quantifies the measurement error of a systemic risk forecast at a given point in time, which can serve for the early detection of global market reversals. This paper was accepted by Kay Giesecke, finance.
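
For standard market risk measures, the unconditional coverage idea reduces to the classic Kupiec likelihood-ratio test on VaR violations; the MES-based tests above follow the same logic at the systemic level. A sketch on simulated data (the numbers are illustrative, not the paper's):

```python
import math
import random

def kupiec_lr(n_obs, n_viol, alpha):
    """Kupiec unconditional-coverage likelihood ratio: under a correct model,
    violations occur with probability alpha and the LR is chi-squared(1)."""
    p_hat = n_viol / n_obs
    if p_hat in (0.0, 1.0):
        return float("inf")
    ll_null = n_viol * math.log(alpha) + (n_obs - n_viol) * math.log(1 - alpha)
    ll_alt = n_viol * math.log(p_hat) + (n_obs - n_viol) * math.log(1 - p_hat)
    return -2.0 * (ll_null - ll_alt)

random.seed(7)
alpha, n = 0.05, 1000
var_forecast = 1.6449   # true 95% quantile of the N(0,1) loss distribution
violations = sum(1 for _ in range(n) if random.gauss(0, 1) > var_forecast)
lr = kupiec_lr(n, violations, alpha)
# 5% critical value of chi-squared(1) is 3.841
print(f"violations={violations}/{n}, LR={lr:.3f}, reject={lr > 3.841}")
```

Because the forecast here equals the true quantile, the violation rate should be close to alpha and the test should rarely reject; a misspecified forecast inflates (or deflates) the violation count and drives the LR statistic up.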


2017 ◽  
Vol 12 (2) ◽  
pp. 433-454 ◽  
Author(s):  
Michel Dacorogna ◽  
Laila Elbahtouri ◽  
Marie Kratz

Abstract: Validation of risk models is required by regulators and demanded by management and shareholders. In practice, those models rely heavily on Monte Carlo (MC) simulations. Given their complexity, the convergence of the MC algorithm is difficult to prove mathematically. To circumvent this problem and nevertheless explore the conditions of convergence, we suggest an analytical approach. Considering standard models, we compute, via mixing techniques, closed-form formulas for risk measures such as Value-at-Risk (VaR) or Tail Value-at-Risk (TVaR) on a portfolio of risks, and consequently for the associated diversification benefit. The numerical convergence of MC simulations of those various quantities is then tested against their analytical evaluations. The speed of convergence appears to depend on the fatness of the tail of the marginal distributions: the higher the tail index, the faster the convergence. We also explore the behaviour of the diversification benefit with various dependence structures and marginals (heavy and light tails). As expected, it varies heavily with the type of dependence between aggregated risks. The diversification benefit is also studied as a function of the risk measure, VaR or TVaR.
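
The paper's test design, comparing an MC estimate against a closed-form benchmark, can be sketched with a single Pareto risk, whose VaR is available in closed form (a toy example, not the paper's mixing-technique formulas):

```python
import random

def pareto_var_analytical(alpha, level):
    """Closed-form VaR of a Pareto(alpha) risk with scale 1: (1 - p) ** (-1/alpha)."""
    return (1.0 - level) ** (-1.0 / alpha)

def pareto_var_mc(alpha, level, n, rng):
    """Monte Carlo VaR: inverse-transform sampling, then an empirical quantile."""
    sample = sorted((1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n))
    return sample[int(level * n)]

rng = random.Random(3)
alpha, level = 3.0, 0.99   # tail index 3: heavy-tailed but finite variance
exact = pareto_var_analytical(alpha, level)
for n in (1_000, 10_000, 100_000):
    mc = pareto_var_mc(alpha, level, n, rng)
    print(f"n={n:>7}: MC VaR={mc:.4f}  exact VaR={exact:.4f}  "
          f"rel. error={abs(mc - exact) / exact:.3%}")
```

Rerunning the loop with a smaller tail index (fatter tail) shows the slower convergence the paper describes: the empirical quantile fluctuates more around the analytical value for a given sample size.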


Risks ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 52 ◽  
Author(s):  
Erwan Koch

An accurate assessment of the risk of extreme environmental events is of great importance for populations, authorities and the banking/insurance/reinsurance industry. Koch (2017) introduced a notion of spatial risk measure and a corresponding set of axioms which are well suited to analyzing the risk due to events having a spatial extent, such as environmental phenomena. The axiom of asymptotic spatial homogeneity is of particular interest since it allows one to quantify the rate of spatial diversification when the region under consideration becomes large. In this paper, we first investigate the general concepts of spatial risk measures and corresponding axioms further and thoroughly explain the usefulness of this theory for both actuarial science and practice. Second, in the case of a general cost field, we give sufficient conditions such that spatial risk measures associated with expectation, variance, value-at-risk as well as expected shortfall and induced by this cost field satisfy the axioms of asymptotic spatial homogeneity of order 0, −2, −1 and −1, respectively. Last but not least, in the case where the cost field is a function of a max-stable random field, we provide conditions on both the function and the max-stable field ensuring the latter properties. Max-stable random fields are relevant when assessing the risk of extreme events since they appear as a natural extension of multivariate extreme-value theory to the level of random fields. Overall, this paper improves our understanding of spatial risk measures as well as of their properties with respect to the space variable and generalizes many results obtained in Koch (2017).


2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
Yu Feng ◽  
Yichuan Dong ◽  
Jia-Bao Liu

We propose a new set-valued risk measure, which is called the set-valued Haezendonck-Goovaerts risk measure. First, we construct the set-valued Haezendonck-Goovaerts risk measure and then provide an equivalent representation. The properties of the set-valued Haezendonck-Goovaerts risk measure are investigated, showing that it is coherent. Finally, an example of a set-valued Haezendonck-Goovaerts risk measure is given, demonstrating that the set-valued average value at risk is a particular case of the set-valued Haezendonck-Goovaerts risk measures.

