Estimating the Variance of Bootstrapped Risk Measures

2009 ◽  
Vol 39 (1) ◽  
pp. 199-223 ◽  
Author(s):  
Joseph H.T. Kim ◽  
Mary R. Hardy

In Kim and Hardy (2007) the exact bootstrap was used to estimate certain risk measures, including Value at Risk and the Conditional Tail Expectation. In this paper we continue this work by deriving the influence function of the exact-bootstrapped quantile risk measure. We can use the influence function to estimate the variance of the exact-bootstrap risk measure. We then extend the result to the L-estimator class, which includes the Conditional Tail Expectation risk measure. The resulting formula provides an alternative way to estimate the variance of bootstrapped risk measures, and indeed of the whole L-estimator class, in analytic form. A simulation study shows that this new method is comparable to the ordinary resampling-based bootstrap method, with the advantages of an analytic approach.
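
The exact-bootstrap derivation itself is in the paper; as a rough reference point, the classical influence function of the sample p-quantile yields a plug-in variance estimate of the same analytic flavour. The sketch below assumes that form, with a kernel density estimate standing in for the unknown density at the quantile, and compares it against a resampling bootstrap of the kind the paper benchmarks against.

```python
# A minimal sketch, assuming the classical influence function of the
# sample p-quantile: IF(x) = (p - 1{x <= q_p}) / f(q_p), which yields
# the plug-in variance estimate Var ~ mean(IF(X_i)^2) / n.
import numpy as np
from scipy.stats import gaussian_kde

def quantile_var_if(x, p):
    """Influence-function estimate of the variance of the sample p-quantile."""
    x = np.asarray(x)
    q = np.quantile(x, p)
    f_q = gaussian_kde(x)(q)[0]        # kernel density estimate at the quantile
    infl = (p - (x <= q)) / f_q        # influence values at each observation
    return np.mean(infl**2) / len(x)

rng = np.random.default_rng(0)
sample = rng.lognormal(size=500)
print(quantile_var_if(sample, 0.95))   # analytic-style estimate

# Resampling-bootstrap benchmark of the same variance.
boot = [np.quantile(rng.choice(sample, size=sample.size), 0.95)
        for _ in range(2000)]
print(np.var(boot))
```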

2021 ◽  
Vol 14 (5) ◽  
pp. 201
Author(s):  
Yuan Hu ◽  
W. Brent Lindquist ◽  
Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and of the risk measures maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than those required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
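
A standard way to cast CVaR minimization over return scenarios is the Rockafellar-Uryasev linear program; the sketch below assumes that formulation, with a plain long-only budget constraint standing in for the paper's AA/SE attribution constraints, whose exact form the abstract does not give.

```python
# A sketch of the Rockafellar-Uryasev LP for minimizing CVaR_alpha of
# portfolio losses over T return scenarios (rows of R) on N assets.
# A long-only budget constraint stands in for the paper's AA/SE
# attribution constraints.
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(R, alpha=0.95):
    T, N = R.shape
    # Decision variables: [w (N), zeta (1), u (T)]; loss_t = -r_t . w.
    c = np.concatenate([np.zeros(N), [1.0], np.full(T, 1.0 / ((1 - alpha) * T))])
    # u_t >= -r_t . w - zeta  rewritten as  -R w - zeta - u <= 0.
    A_ub = np.hstack([-R, -np.ones((T, 1)), -np.eye(T)])
    b_ub = np.zeros(T)
    A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(T)]).reshape(1, -1)
    bounds = [(0, 1)] * N + [(None, None)] + [(0, None)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:N]

rng = np.random.default_rng(1)
scenarios = rng.normal(0.0005, 0.01, size=(250, 5))  # toy daily returns
print(min_cvar_weights(scenarios).round(3))
```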


2009 ◽  
Vol 39 (2) ◽  
pp. 591-613 ◽  
Author(s):  
Andreas Kull

We revisit the relative retention problem originally introduced by de Finetti using concepts recently developed in risk theory and quantitative risk management. Instead of the variance, we use Expected Shortfall (Tail Value at Risk) as the risk measure, include capital costs, and take constraints on risk capital into account. Starting from a risk-based capital allocation, the paper presents an optimization scheme for sharing risk in a multi-risk-class environment. Risk sharing takes place between two portfolios, and the pricing of risk transfer reflects both portfolio structures. This allows us to shed more light on the question of how optimal risk sharing is characterized in a situation where risk transfer takes place between parties employing similar risk and performance measures. Recent developments in the regulatory domain ('risk-based supervision') pushing for common, insurance-industry-wide risk measures underline the importance of this question. The paper includes a simple non-life insurance example illustrating optimal risk transfer in terms of retentions of common reinsurance structures.
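
As a toy illustration of the objects involved (not the paper's full optimization with capital costs and risk-capital constraints), the sketch below computes the empirical Expected Shortfall of a retained loss min(X, d) plus a hypothetical proportional reinsurance premium for the ceded layer, scanned over retention levels d; the loading parameter is made up.

```python
# A toy illustration, not the paper's optimization scheme: empirical
# Expected Shortfall of the retained loss min(X, d) plus a hypothetical
# proportional premium for the ceded layer, scanned over retentions d.
import numpy as np

def expected_shortfall(losses, alpha=0.99):
    """Mean loss beyond the empirical alpha-quantile (Tail Value at Risk)."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(2)
gross = rng.pareto(2.5, size=100_000) * 10.0     # heavy-tailed gross losses
loading = 1.3                                    # made-up premium loading

for d in (20.0, 50.0, 100.0, 200.0):             # candidate retentions
    retained = np.minimum(gross, d)
    ceded_premium = loading * np.mean(np.maximum(gross - d, 0.0))
    print(d, expected_shortfall(retained) + ceded_premium)
```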


2021 ◽  
Vol 14 (11) ◽  
pp. 540
Author(s):  
Eyden Samunderu ◽  
Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of relying on normal distribution assumptions alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper analyses the statistical properties of various risk measures under non-normal distributions and provides a financial blueprint for managing risk. We argue that relying on the old assumption of normality alone is inaccurate and has led to the use of models that do not give accurate risk measures. Our empirical design first examined an overview of the use of returns in measuring risk and an assessment of the current financial environment. As an alternative to conventional measures, our paper employs a mosaic of risk techniques, reflecting the fact that there is no one universal risk measure. The next step involved examining current risk proxy measures, such as the Gaussian-based value at risk (VaR) measure. Furthermore, the authors analysed multiple alternative approaches that do not rely on the normality assumption, such as other variations of VaR, as well as econometric models that can be used in risk measurement and forecasting. VaR is a widely used measure of financial risk, which provides a way of quantifying and managing the risk of a portfolio; arguably, it represents the most important tool for evaluating market risk, one of several threats to the global financial system. Following an extensive literature review, a data set composed of three main asset classes was applied: bonds, equities and hedge funds. The first step was to determine to what extent returns are not normally distributed. After testing the hypothesis, it was found that the majority of returns are not normally distributed, instead exhibiting skewness and kurtosis greater or less than three. The study then applied various VaR methods to measure risk and determine the most efficient ones. Different timelines were used to carry out stressed value-at-risk calculations, which showed that during periods of crisis the volatility of asset returns was higher. Subsequent steps examined the relationships among the variables through correlation tests and time series analysis, leading to forecasts of the returns. It was noted that these methods could not be used in isolation; we therefore adopted a mosaic of all the VaR methods, studying the behaviour of assets and their relations to each other, examining the environment as a whole, and then applying forecasting models to value returns accurately. This gave a much more accurate and relevant risk measure compared with the initial assumption of normality.
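
Two of the steps described, testing returns for normality and comparing a Gaussian VaR with a historical VaR on the same series, can be sketched as follows; the simulated Student-t returns are a stand-in for the bond, equity and hedge fund series used in the study.

```python
# A sketch of two steps from the study: a Jarque-Bera normality test and
# a comparison of Gaussian vs. historical VaR. Simulated Student-t
# returns stand in for the bond/equity/hedge-fund series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
returns = stats.t.rvs(df=4, size=2500, random_state=rng) * 0.01  # fat tails

jb_stat, jb_p = stats.jarque_bera(returns)
print(f"skew={stats.skew(returns):.2f}, "
      f"kurtosis={stats.kurtosis(returns, fisher=False):.2f}, "  # normal = 3
      f"JB p-value={jb_p:.3g}")

alpha = 0.99
gauss_var = -(returns.mean() + returns.std() * stats.norm.ppf(1 - alpha))
hist_var = -np.quantile(returns, 1 - alpha)
print(f"Gaussian VaR={gauss_var:.4f}, historical VaR={hist_var:.4f}")
```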


2012 ◽  
Vol 3 (1) ◽  
pp. 150-157 ◽  
Author(s):  
Suresh Andrew Sethi ◽  
Mike Dalton

Traditional measures that quantify variation in natural resource systems count both upside and downside deviations as contributing to variability, as with the standard deviation or the coefficient of variation. Here we introduce three risk measures from investment theory that quantify variability by analyzing either upside or downside outcomes, and typical or extreme outcomes, separately: semideviation, conditional value-at-risk, and probability of ruin. Risk measures can be custom tailored to frame variability as a performance measure in terms directly meaningful to specific management objectives, such as presenting risk as the harvest expected in an extreme bad year, or characterizing risk as the probability of fishery escapement falling below a prescribed threshold. In this paper, we present formulae, empirical examples from commercial fisheries, and R code to calculate the three risk measures. In addition, we evaluated risk measure performance with simulated data and found that risk measures can provide unbiased estimates at small sample sizes. By decomposing complex variability into quantitative metrics, we expect risk measures to be useful across a range of wildlife management scenarios, including policy decision analyses, comparative analyses across systems, and tracking the state of natural resource systems through time.
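
The paper itself supplies R code; the following Python sketch gives plausible empirical versions of the three measures, with a made-up harvest series for illustration.

```python
# A Python rendition (the paper's own code is in R) of plausible
# empirical versions of the three measures; the harvest series is made up.
import numpy as np

def semideviation(x):
    """Downside semideviation: RMS of shortfalls below the mean."""
    x = np.asarray(x, dtype=float)
    below = np.minimum(x - x.mean(), 0.0)
    return np.sqrt(np.mean(below**2))

def cvar(x, alpha=0.10):
    """Mean of the worst alpha-fraction of outcomes, e.g. harvests."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, int(np.ceil(alpha * x.size)))
    return x[:k].mean()

def prob_ruin(x, threshold):
    """Empirical probability of falling below a prescribed threshold."""
    return np.mean(np.asarray(x) < threshold)

harvests = [120, 95, 130, 40, 110, 88, 150, 60, 105, 98]
print(semideviation(harvests), cvar(harvests), prob_ruin(harvests, 50))
```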


2019 ◽  
Vol 12 (4) ◽  
pp. 159 ◽  
Author(s):  
Yuyang Cheng ◽  
Marcos Escobar-Anel ◽  
Zhenxian Gong

This paper proposes and investigates a multivariate 4/2 Factor Model; the name 4/2 comes from the superposition of a CIR term and a 3/2-model component. Our model goes multidimensional along the lines of a principal component and factor covariance decomposition. We find conditions for well-defined changes of measure, and we derive two key characteristic functions in closed form, which help with pricing and risk measure calculations. In a numerical example, we demonstrate the significant impact of the newly added 3/2 component (parameter b) and the common factor (a), both with respect to changes in the implied volatility surface (up to 100%) and with respect to two risk measures, value at risk and expected shortfall, where an increase of up to 29% was detected.
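
For orientation, the usual one-factor 4/2 dynamics read as follows (an assumption here; the paper develops the multivariate factor extension):

```latex
% One-factor 4/2 dynamics in the usual form (an assumption here; the
% paper develops the multivariate factor extension):
\begin{align*}
  \frac{dS_t}{S_t} &= r\,dt + \Big( a\sqrt{v_t} + \frac{b}{\sqrt{v_t}} \Big)\,dW_t, \\
  dv_t &= \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dZ_t,
  \qquad d\langle W, Z\rangle_t = \rho\,dt.
\end{align*}
% Squaring the volatility gives a^2 v_t + 2ab + b^2/v_t: a CIR (Heston)
% variance term plus a 3/2 term, the superposition behind the name 4/2.
```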


2006 ◽  
Vol 36 (2) ◽  
pp. 375-413
Author(s):  
Gary G. Venter ◽  
John A. Major ◽  
Rodney E. Kreps

The marginal approach to risk and return analysis compares the marginal return from a business decision to the marginal risk imposed. Allocation distributes the total company risk to business units and compares the profit/risk ratio of the units. These approaches coincide when the allocation actually assigns the marginal risk to each business unit, i.e., when the marginal impacts add up to the total risk measure. This is possible for one class of risk measures (scalable measures) under the assumption of homogeneous growth, and for a subclass (transformed probability measures) otherwise. For homogeneous growth, the allocation of scalable measures can be accomplished by the directional derivative. The first well-known additive marginal allocations were the Myers-Read method from Myers and Read (2001) and co-Tail Value at Risk, discussed in Tasche (2000). Now we see that there are many others, which allows the choice of risk measure to be based on economic meaning rather than on the availability of an allocation method. We prefer the term "decomposition" to "allocation" here because of the use of the method of co-measures, which quantifies the component composition of a risk measure rather than allocating it proportionally to something.

Risk-adjusted profitability calculations that do not rely on capital allocation may still involve decomposition of risk measures. Such a case is discussed. Calculation issues for directional derivatives are also explored.
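
The co-measure idea can be illustrated numerically with co-Tail Value at Risk, whose unit components E[X_i | S > VaR_alpha(S)] add up to the total TVaR by construction; a minimal simulation sketch with illustrative figures:

```python
# A numerical sketch of one co-measure: co-Tail-Value-at-Risk, whose
# unit components E[X_i | S > VaR_alpha(S)] add up to the total TVaR
# by construction. Figures are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
# Two correlated business-unit losses (toy parameters).
cov = [[1.0, 0.6], [0.6, 2.0]]
X = rng.multivariate_normal([10.0, 20.0], cov, size=100_000)
S = X.sum(axis=1)                                 # total company loss

alpha = 0.99
tail = S > np.quantile(S, alpha)                  # tail event of the total
co_tvar = X[tail].mean(axis=0)                    # per-unit decomposition
print(co_tvar, co_tvar.sum(), S[tail].mean())     # components sum to TVaR
```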


2020 ◽  
Vol 23 (03) ◽  
pp. 2050017
Author(s):  
Yanhong Chen ◽  
Yijun Hu

In this paper, we study how to evaluate the risk of a financial portfolio whose components may be dependent, come from different markets, or involve more than one currency, while also taking into account uncertainty about the time value of money. To this end, we introduce a new class of risk measures, named set-valued dynamic risk measures, for bounded discrete-time processes that are adapted to a given filtration. The time horizon can be finite or infinite. We investigate representation results for them by making full use of Legendre–Fenchel conjugation theory for set-valued functions. Finally, examples such as the set-valued dynamic average value at risk and the entropic risk measure for bounded discrete-time processes are also given.
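
For orientation, one common scalar, static convention for the average value at risk that the set-valued dynamic construction generalizes is:

```latex
% Scalar, static analogue (one common convention) of the average value
% at risk that the set-valued dynamic construction generalizes:
\[
  \mathrm{AV@R}_{\lambda}(X) \;=\; \frac{1}{\lambda}\int_{0}^{\lambda}\mathrm{V@R}_{s}(X)\,ds,
  \qquad \lambda\in(0,1],
\]
% where V@R_s(X) = \inf\{ m : P(X + m < 0) \le s \} is the value at risk
% at level s.
```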


2014 ◽  
Vol 44 (3) ◽  
pp. 613-633 ◽  
Author(s):  
Werner Hürlimann

We consider the multivariate Value-at-Risk (VaR) and Conditional-Tail-Expectation (CTE) risk measures introduced in Cousin and Di Bernardino (Cousin, A. and Di Bernardino, E. (2013) Journal of Multivariate Analysis, 119, 32–46; Cousin, A. and Di Bernardino, E. (2014) Insurance: Mathematics and Economics, 55(C), 272–282). For absolutely continuous Archimedean copulas, we derive integral formulas for the multivariate VaR and CTE Archimedean risk measures. We show that each component of the multivariate VaR and CTE functional vectors is an integral transform of the corresponding univariate VaR measure. For the class of Archimedean copulas, the marginal components of the CTE vector satisfy the following properties: positive homogeneity (PH), translation invariance (TI), monotonicity (MO), safety loading (SL) and VaR inequality (VIA). If the marginal risks satisfy the subadditivity (MSA) property, the marginal CTE components are also sub-additive, and hence coherent risk measures in the usual sense. Moreover, the increasing risk (IR), or stop-loss order preserving, property of the marginal CTE components holds for the class of bivariate Archimedean copulas. A counterexample to the (IR) property for the trivariate Clayton copula is included.
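
Written out for a generic risk measure ρ acting on a loss X (sign conventions vary across authors), the marginal properties listed above are:

```latex
% The marginal properties named above, for a generic risk measure \rho
% acting on a loss X (sign conventions vary across authors):
\begin{align*}
  \text{(PH)}  &\quad \rho(cX) = c\,\rho(X) \text{ for } c > 0, \\
  \text{(TI)}  &\quad \rho(X + c) = \rho(X) + c \text{ for } c \in \mathbb{R}, \\
  \text{(MO)}  &\quad X \le Y \ \Rightarrow\ \rho(X) \le \rho(Y), \\
  \text{(SL)}  &\quad \rho(X) \ge \mathbb{E}[X], \\
  \text{(MSA)} &\quad \rho(X + Y) \le \rho(X) + \rho(Y).
\end{align*}
```

With (MSA) added to (PH), (TI) and (MO), a risk measure is coherent in the usual sense, which is the coherence statement made above.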


2020 ◽  
Vol 21 (5) ◽  
pp. 543-557
Author(s):  
Modisane Bennett Seitshiro ◽  
Hopolang Phillip Mashele

Purpose
The purpose of this paper is to propose the parametric bootstrap method for the valuation of over-the-counter derivative (OTCD) initial margin (IM) in a financial market with low outstanding notional amounts, that is, an aggregate outstanding gross notional amount of OTC derivative instruments not exceeding R20bn.

Design/methodology/approach
The OTCD market is assumed to follow a Gaussian probability distribution with mean and standard deviation parameters. The bootstrap value at risk model is applied as a risk measure that generates bootstrap initial margins (BIM).

Findings
The proposed parametric bootstrap method favours the BIM amounts for both the simulated and the real data sets. These BIM amounts reasonably exceed the IM amounts as the significance level increases.

Research limitations/implications
This paper assumed only that OTCD returns come from a normal probability distribution.

Practical implications
The OTCD IM requirement for transactions done by counterparties may affect all financial market participants in uncleared OTC derivatives, while reducing systemic risk and spillover effects by ensuring that collateral (IM) is available to offset losses caused by the default of an OTCD counterparty.

Originality/value
This paper contributes to the literature by presenting a valuation of IM for a financial market with low outstanding notional amounts using the parametric bootstrap method.
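
A hedged sketch of the procedure as described: fit Gaussian parameters to the return series, resample parametrically, compute a VaR per resample, and aggregate into a bootstrap initial margin. Aggregating by the mean of the bootstrap VaRs is an assumption here; the abstract does not give the paper's exact aggregation rule.

```python
# A sketch of the described procedure: fit Gaussian parameters, resample
# parametrically, compute a VaR per resample, and aggregate into a BIM.
# Taking the mean of the bootstrap VaRs as the BIM is an assumption.
import numpy as np

def bootstrap_initial_margin(returns, alpha=0.99, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = returns.mean(), returns.std(ddof=1)  # fitted Gaussian parameters
    n = returns.size
    var_b = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.normal(mu, sigma, size=n)      # parametric resample
        var_b[b] = -np.quantile(resample, 1 - alpha)  # VaR of the resample (loss sign)
    return var_b.mean()

rng = np.random.default_rng(5)
toy_returns = rng.normal(0.0, 0.02, size=750)         # toy OTCD return series
print(bootstrap_initial_margin(toy_returns))
```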


2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Silvana M. Pesenti ◽  
Pietro Millossovich ◽  
Andreas Tsanakas

One of risk measures' key purposes is to consistently rank and distinguish between different risk profiles. From a practical perspective, a risk measure should also be robust, that is, insensitive to small perturbations in input assumptions. It is known in the literature [14, 39] that strong assumptions on the risk measure's ability to distinguish between risks may lead to a lack of robustness. We address the trade-off between robustness and consistent risk ranking by specifying the regions in the space of distribution functions where law-invariant convex risk measures are indeed robust. Examples include the set of random variables with bounded second moment and those that are less volatile (in convex order) than random variables in a given uniformly integrable set. Typically, a risk measure is evaluated on the output of an aggregation function defined on a set of random input vectors. Extending the definition of robustness to this setting, we find that law-invariant convex risk measures are robust for any aggregation function that satisfies a linear growth condition in the tail, provided that the set of possible marginals is uniformly integrable. Thus, we obtain that all law-invariant convex risk measures possess the aggregation-robustness property introduced by [26] and further studied by [40]. This is in contrast to the widely used, non-convex risk measure Value-at-Risk, whose robustness in a risk aggregation context requires restricting the possible dependence structures of the input vectors.

