Generalized Systematic Risk

2016, Vol 8 (2), pp. 86-127
Author(s): Ohad Kadan, Fang Liu, Suying Liu

We generalize the concept of “systematic risk” to a broad class of risk measures potentially accounting for high distribution moments, downside risk, rare disasters, as well as other risk attributes. We offer two different approaches. First is an equilibrium framework generalizing the Capital Asset Pricing Model, two-fund separation, and the security market line. Second is an axiomatic approach resulting in a systematic risk measure as the unique solution to a risk allocation problem. Both approaches lead to similar results extending the traditional beta to capture multiple dimensions of risk. The results lend themselves naturally to empirical investigation. (JEL D81, G11, G12)
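The traditional beta that the paper generalizes is the slope of asset returns on market returns. A minimal sketch using simulated returns in place of real data (the paper's generalized measures, which add higher moments and downside risk, are not reproduced here):

```python
import numpy as np

# Hypothetical weekly asset and market return series; true beta is about 1.2.
rng = np.random.default_rng(0)
market = rng.normal(0.005, 0.04, 520)
asset = 1.2 * market + rng.normal(0.0, 0.02, 520)

# Traditional systematic risk: beta = Cov(asset, market) / Var(market).
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
print(f"estimated beta: {beta:.3f}")
```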

2019, Vol 34 (2), pp. 297-315
Author(s): Linxiao Wei, Yijun Hu

Abstract: Capital allocation is of central importance in portfolio management and risk-based performance measurement. Capital allocations for univariate risk measures have been extensively studied in the finance literature; in contrast, few papers have dealt with capital allocations for multivariate risk measures. In this paper, we propose an axiom system for capital allocation with multivariate risk measures. We first recall the class of positively homogeneous and subadditive multivariate risk measures and provide the corresponding representation results. We then show that for a given positively homogeneous and subadditive multivariate risk measure, there exists a capital allocation principle, and we characterize the uniqueness of that principle. Finally, examples are given to derive explicit capital allocation principles for multivariate risk measures based on mean and standard deviation, including the multivariate mean-standard-deviation risk measures.
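For intuition, the classical univariate case that this multivariate theory extends admits a well-known gradient (Euler) allocation. A sketch under assumed Gaussian losses for two business lines, with a mean-standard-deviation risk measure (an illustration only, not the authors' axiomatic construction):

```python
import numpy as np

# Two correlated loss lines; rho(S) = E[S] + c * sd(S) is positively
# homogeneous and subadditive, so the Euler allocation sums to rho(S).
rng = np.random.default_rng(1)
losses = rng.multivariate_normal([0, 0], [[4, 1], [1, 9]], size=100_000)
total = losses.sum(axis=1)

c = 1.5
sd_total = total.std(ddof=1)
rho_total = total.mean() + c * sd_total  # risk of the aggregate position

# Euler/gradient allocation: contribution_i = E[X_i] + c * Cov(X_i, S) / sd(S).
contrib = losses.mean(axis=0) + c * np.array(
    [np.cov(losses[:, i], total)[0, 1] for i in range(2)]
) / sd_total
print(rho_total, contrib.sum())  # the two agree (full allocation)
```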


Mathematics, 2021, Vol 9 (2), pp. 111
Author(s): Hyungbin Park

This paper proposes modified mean-variance risk measures for long-term investment portfolios. Two types of portfolios are considered: constant proportion portfolios and increasing amount portfolios. Both are widely used in finance for investing in assets and developing derivative securities. We compare the long-term behavior of the conventional mean-variance risk measure and the modified one for the two types of portfolios, and we discuss the benefits of the modified measure. Subsequently, an optimal long-term investment strategy is derived. We show that the modified risk measure reflects the investor’s risk aversion in the optimal long-term investment strategy, whereas the conventional one does not. Several factor models are discussed as concrete examples: the Black–Scholes model, Kim–Omberg model, Heston model, and 3/2 stochastic volatility model.
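As a rough illustration of the objects involved, a simulation sketch of a constant proportion portfolio under assumed Black–Scholes dynamics, evaluated with the conventional mean-variance criterion (the paper's modified measure is not reproduced here; all parameters are hypothetical):

```python
import numpy as np

# Constant proportion pi in the stock under Black-Scholes gives log-normal
# terminal wealth: dW/W = (r + pi*(mu - r)) dt + pi*sigma dB.
rng = np.random.default_rng(2)
mu, sigma, r = 0.08, 0.2, 0.02          # drift, volatility, risk-free rate
pi, a, T, n = 0.6, 2.0, 30, 50_000      # proportion, risk aversion, years, paths

drift = r + pi * (mu - r) - 0.5 * (pi * sigma) ** 2
WT = np.exp(drift * T + pi * sigma * np.sqrt(T) * rng.standard_normal(n))

# Conventional mean-variance value; the paper's modified measure rescales
# these statistics for the long horizon, which this toy example does not do.
mv = WT.mean() - 0.5 * a * WT.var(ddof=1)
print(f"E[W_T]={WT.mean():.2f}, Var={WT.var(ddof=1):.2f}, mean-variance={mv:.2f}")
```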


Author(s): Nicole Bäuerle, Alexander Glauner

Abstract: We study the minimization of a spectral risk measure of the total discounted cost generated by a Markov Decision Process (MDP) over a finite or infinite planning horizon. The MDP is assumed to have Borel state and action spaces and the cost function may be unbounded above. The optimization problem is split into two minimization problems using an infimum representation for spectral risk measures. We show that the inner minimization problem can be solved as an ordinary MDP on an extended state space and give sufficient conditions under which an optimal policy exists. Regarding the infinite dimensional outer minimization problem, we prove the existence of a solution and derive an algorithm for its numerical approximation. Our results include the findings in Bäuerle and Ott (Math Methods Oper Res 74(3):361–379, 2011) in the special case that the risk measure is Expected Shortfall. As an application, we present a dynamic extension of the classical static optimal reinsurance problem, where an insurance company minimizes its cost of capital.
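For concreteness, a spectral risk measure weights the quantiles of the loss by a spectrum function, and Expected Shortfall is the special case with a flat spectrum on the upper tail. A small numerical sketch, with simulated losses standing in for the MDP cost distribution:

```python
import numpy as np

# Spectral risk measure rho(X) = integral_0^1 phi(u) q_X(u) du, estimated
# by a midpoint rule on the empirical quantile function.
def spectral_risk(losses, phi, m=10_000):
    u = (np.arange(m) + 0.5) / m            # midpoints of the unit interval
    return np.mean(phi(u) * np.quantile(losses, u))

# Expected Shortfall at level alpha: flat spectrum 1/(1-alpha) on (alpha, 1].
alpha = 0.95
es_spectrum = lambda u: np.where(u > alpha, 1.0 / (1.0 - alpha), 0.0)

rng = np.random.default_rng(3)
losses = rng.lognormal(0.0, 1.0, 200_000)   # hypothetical loss sample
print("spectral (ES) estimate:", spectral_risk(losses, es_spectrum))
```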


2021, Vol 14 (5), pp. 201
Author(s): Yuan Hu, W. Brent Lindquist, Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and the risk measures: maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than those required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
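The CVaR minimization underlying these optimizations can be illustrated with the standard Rockafellar–Uryasev linear program. A sketch on simulated scenario returns, omitting the paper's AA and SE constraints (asset count, scenario count and return parameters are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Rockafellar-Uryasev: min over (w, alpha, u) of
#   alpha + sum(u) / ((1 - beta) * N),  u_i >= -R_i.w - alpha,  u_i >= 0,
# with fully-invested long-only weights w.
rng = np.random.default_rng(4)
N, n, beta = 500, 5, 0.95
R = rng.normal(0.001, 0.02, (N, n))      # scenario returns for n assets

c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])   # -R.w - alpha - u <= 0
b_ub = np.zeros(N)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)
b_eq = [1.0]                                            # weights sum to one
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("weights:", np.round(res.x[:n], 3), "min CVaR:", round(res.fun, 5))
```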


Risks, 2018, Vol 6 (3), pp. 85
Author(s): Mohamed Lkabous, Jean-François Renaud

In this short paper, we study a VaR-type risk measure introduced by Guérin and Renaud, which is based on cumulative Parisian ruin. We derive some properties of this risk measure and compare it to the risk measures of Trufin et al. and of Loisel and Trufin.
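Cumulative Parisian ruin declares ruin once the total time the surplus spends below zero exceeds a fixed delay. A Monte Carlo sketch for a compound Poisson surplus process with hypothetical parameters (the paper treats the resulting VaR-type measure analytically rather than by simulation):

```python
import numpy as np

# Surplus: initial capital + premiums - compound Poisson claims, on a grid.
# Ruin is declared once cumulative time below zero exceeds the delay r.
rng = np.random.default_rng(5)
x0, premium, lam, claim_mean = 5.0, 1.2, 1.0, 1.0
T, dt, r, n_paths = 50.0, 0.01, 1.0, 500

steps, ruined = int(T / dt), 0
for _ in range(n_paths):
    level, occupation = x0, 0.0
    for _ in range(steps):
        level += premium * dt
        if rng.random() < lam * dt:          # claim arrival in this step
            level -= rng.exponential(claim_mean)
        if level < 0:
            occupation += dt                 # accumulate time in the red
            if occupation > r:
                ruined += 1
                break
print("estimated cumulative Parisian ruin probability:", ruined / n_paths)
```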


2009, Vol 39 (2), pp. 591-613
Author(s): Andreas Kull

Abstract: We revisit the relative retention problem originally introduced by de Finetti using concepts recently developed in risk theory and quantitative risk management. Instead of using the variance as a risk measure, we consider the Expected Shortfall (Tail-Value-at-Risk), include capital costs, and take constraints on risk capital into account. Starting from a risk-based capital allocation, the paper presents an optimization scheme for sharing risk in a multi-risk-class environment. Risk sharing takes place between two portfolios, and the pricing of risk transfer reflects both portfolio structures. This allows us to shed more light on the question of how optimal risk sharing is characterized in a situation where risk transfer takes place between parties employing similar risk and performance measures. Recent developments in the regulatory domain (‘risk-based supervision’) pushing for common, insurance-industry-wide risk measures underline the importance of this question. The paper includes a simple non-life insurance example illustrating optimal risk transfer in terms of retentions of common reinsurance structures.
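A toy version of the retention problem conveys the ingredients: Expected Shortfall of the retained loss plus a cost-of-capital loading on the reinsurance premium, minimized over a stop-loss retention. This sketch is far simpler than the paper's two-portfolio scheme, and all parameters are hypothetical:

```python
import numpy as np

# Hypothetical heavy-tailed portfolio losses.
rng = np.random.default_rng(6)
losses = rng.lognormal(mean=1.0, sigma=1.2, size=200_000)

def expected_shortfall(x, alpha=0.99):
    var = np.quantile(x, alpha)
    return x[x >= var].mean()

# Stop-loss with retention M: retain min(L, M), cede max(L - M, 0).
# Objective: ES of retained loss + (1 + cost-of-capital) * ceded premium.
coc = 0.06
best = min(
    (expected_shortfall(np.minimum(losses, M))
     + (1 + coc) * np.maximum(losses - M, 0).mean(), M)
    for M in np.linspace(1, 50, 100)
)
print(f"objective {best[0]:.3f} at retention M = {best[1]:.2f}")
```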


2021, Vol 14 (11), pp. 540
Author(s): Eyden Samunderu, Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of using the normal distribution assumption alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper analyses the statistical properties of various risk measures under non-normal distributions and provides a financial blueprint for how to manage risk. The premise is that relying on the old assumption of normality alone is inaccurate and has led to models that do not give accurate risk measures. Our empirical design first examined an overview of the use of returns in measuring risk and an assessment of the current financial environment. As an alternative to conventional measures, the paper employs a mosaic of risk techniques to establish that there is no single universal risk measure. The next step examined the current risk proxy measures in use, such as the Gaussian-based value at risk (VaR) measure. Furthermore, the authors analysed multiple alternative approaches that do not rely on the normality assumption, such as other variations of VaR, as well as econometric models that can be used in risk measurement and forecasting. Value at risk (VaR) is a widely used measure of financial risk which provides a way of quantifying and managing the risk of a portfolio; arguably, it represents the most important tool for evaluating market risk as one of the several threats to the global financial system. Following an extensive literature review, a data set composed of three main asset classes was used: bonds, equities and hedge funds. The first step was to determine to what extent returns are not normally distributed. After testing the hypothesis, it was found that the majority of returns are not normally distributed, instead exhibiting skewness and kurtosis greater or less than three. The study then applied various VaR methods to measure risk in order to determine the most efficient ones. Different timelines were used to carry out stressed value-at-risk calculations, and it was seen that during periods of crisis the volatility of asset returns was higher. The steps that followed examined the relationships among the variables, with correlation tests and time series analysis leading to forecasts of the returns. It was noted that these methods could not be used in isolation, so we adopted a mosaic of all the VaR measures, which included studying the behaviour of assets and their relationships with each other. Furthermore, we also examined the environment as a whole and then applied forecasting models to value returns accurately; this gave a much more accurate and relevant risk measure compared with the initial assumption of normality.
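Two of the steps described, testing returns for normality and comparing Gaussian VaR with historical-simulation VaR, can be sketched as follows; fat-tailed Student-t draws stand in for the real bond, equity and hedge-fund series used in the paper:

```python
import numpy as np
from scipy import stats

# Simulated fat-tailed daily returns (Student-t, df=4) as a stand-in.
rng = np.random.default_rng(7)
returns = stats.t.rvs(df=4, scale=0.01, size=2_500, random_state=rng)

# Jarque-Bera tests normality via skewness and excess kurtosis.
jb_stat, jb_p = stats.jarque_bera(returns)
print(f"Jarque-Bera p-value: {jb_p:.2e}")   # small p => reject normality

# Gaussian (parametric) VaR vs. historical-simulation VaR at 99%.
alpha = 0.99
var_gauss = -(returns.mean() + returns.std(ddof=1) * stats.norm.ppf(1 - alpha))
var_hist = -np.quantile(returns, 1 - alpha)
print(f"99% Gaussian VaR: {var_gauss:.4f}, historical VaR: {var_hist:.4f}")
```

Under fat tails the historical estimate typically exceeds the Gaussian one, which is the understatement of risk the paper warns about.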


2016, Vol 17 (3), pp. 99
Author(s): A An Arief Jusuf

Beta has been debated, both conceptually and empirically. In the 1960s, many practitioners exploited the computational advantages offered by CAPM theory to invest in assets with high beta. Much empirical research in later years rejected the existence of the CAPM security market line, and many practitioners and academics subsequently declared the death of the CAPM. The linear regression method can be used for decision making only if it satisfies the criteria for the Best Linear Unbiased Estimator. A prediction model is a statistical test that aims to establish whether there is a relationship or effect between the researched variables; nonparametric methods are an alternative when the research model does not satisfy the normality assumption. By using weekly data, this research avoids technical trading problems in predicting systematic risk. ASII, HRUM, and TLKM stock returns are affected more by other factors, so systematic risk does not significantly affect those stocks. Another result shows that the banking stocks included in the LQ45 index have higher systematic risk.
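The beta estimation at issue is an ordinary least squares regression of weekly stock returns on index returns. A sketch with simulated series standing in for the LQ45 data (all coefficients are hypothetical):

```python
import numpy as np

# Five years of weekly returns; the stock's true beta is set to 1.4.
rng = np.random.default_rng(8)
weeks = 260
index = rng.normal(0.002, 0.03, weeks)
stock = 0.001 + 1.4 * index + rng.normal(0.0, 0.04, weeks)

# OLS with an intercept: stock_t = alpha + beta * index_t + error_t.
X = np.column_stack([np.ones(weeks), index])
alpha_hat, beta_hat = np.linalg.lstsq(X, stock, rcond=None)[0]
print(f"alpha: {alpha_hat:.5f}, beta: {beta_hat:.3f}")
```

In practice the residuals would also be checked against the BLUE conditions the paper mentions (normality, homoscedasticity, no autocorrelation) before the beta estimate is used for decisions.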


2015
Author(s): Γεώργιος Παπαγιάννης

The main aim of the present thesis is to investigate the effect of diverging priors concerning model uncertainty on decision making. One of the main issues in the thesis is to assess the effect of different notions of distance in the space of probability measures and their use as loss functionals in the process of identifying the best-suited model among a set of plausible priors. Another issue is that of addressing the problem of “inhomogeneous” sets of priors, i.e. sets of priors in which highly divergent opinions may occur, and the need to treat that case robustly. As high degrees of inhomogeneity may lead the decision maker to distrust the priors, it may be desirable to adopt a particular prior corresponding to the set which somehow minimizes the “variability” among the models in the set. This leads to the notion of a Fréchet risk measure. Finally, an important problem is the actual calculation of robust risk measures. On account of their variational definition, the problem of calculation leads to the numerical treatment of problems of the calculus of variations, for which reliable and effective algorithms are proposed. The contributions of the thesis are presented in the following three chapters.

In Chapter 2, a statistical learning scheme is introduced for constructing the best model compatible with a set of priors provided by different information sources of varying reliability. As various priors may model well different aspects of the phenomenon, the proposed scheme is a variational scheme based on the minimization of a weighted loss function in the space of probability measures, which in certain cases is shown to be equivalent to weighted quantile averaging schemes. Therefore, in contrast to approaches such as minimax decision theory, in which a particular element of the prior set is chosen, we construct for each prior set a probability measure which is not necessarily an element of it, a fact that, as shown, may lead to a better description of the phenomenon in question. While treating this problem we also address the effect of the choice of distance functional in the space of measures on the problem of model selection. One of the key findings in this respect is that the class of Wasserstein distances seems to have the best performance as compared to other distances such as the KL-divergence.

In Chapter 3, motivated by the results of Chapter 2, we treat the problem of specifying the risk measure for a particular loss when a set of highly divergent priors concerning the distribution of the loss is available. Starting from the principle that “variability” of opinions is not welcome, a position for which a strong axiomatic framework is provided (see e.g. Klibanoff (2005) and references therein), we introduce the concept of Fréchet risk measures, which corresponds to a minimal-variance risk measure. Here we view a set of priors as a discrete measure on the space of probability measures, and by variance we mean the variance of this discrete probability measure; this requires the concept of the Fréchet mean. By different metrizations of the space of probability measures we define a variety of Fréchet risk measures (the Wasserstein, the Hellinger and the weighted entropic risk measure) and illustrate their use and performance via an example related to the static hedging of derivatives under model uncertainty.

In Chapter 4, we consider the problem of numerical calculation of convex risk measures, applying techniques from the calculus of variations. Regularization schemes are proposed and the theoretical convergence of the algorithms is considered.
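Two ingredients of the thesis can be sketched numerically in one dimension: the Wasserstein distance between two divergent priors, and the weighted quantile averaging to which the variational scheme reduces in the 1-D Wasserstein case (the weights below are hypothetical source reliabilities):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two divergent empirical priors for the same loss variable.
rng = np.random.default_rng(9)
prior_a = rng.normal(0.0, 1.0, 10_000)
prior_b = rng.normal(2.0, 0.5, 10_000)

print("W1 distance:", wasserstein_distance(prior_a, prior_b))

# In 1-D the Wasserstein barycenter is the weighted average of quantile
# functions, i.e. a weighted quantile averaging scheme.
weights = np.array([0.7, 0.3])
u = np.linspace(0.001, 0.999, 999)
barycenter_q = weights[0] * np.quantile(prior_a, u) + weights[1] * np.quantile(prior_b, u)
print("barycenter median:", barycenter_q[len(u) // 2])
```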


2009, Vol 84 (6), pp. 1983-2011
Author(s): Alexander Nekrasov, Pervin K. Shroff

Abstract: We propose a methodology to incorporate risk measures based on economic fundamentals directly into the valuation model. Fundamentals-based risk adjustment in the residual income valuation model is captured by the covariance of ROE with market-wide factors. We demonstrate a method of estimating covariance risk out of sample based on the accounting beta and betas of size and book-to-market factors in earnings. We show how the covariance risk estimate can be transformed to obtain the fundamentals-based cost of equity. Our empirical analysis shows that value estimates based on fundamental risk adjustment produce significantly smaller deviations from price relative to the CAPM or the Fama-French three-factor model. We further find that our single-factor risk measure, based on the accounting beta alone, captures aspects of risk that are indicated by the book-to-market factor, largely accounting for the “mispricing” of value and growth stocks. Our study highlights the usefulness of accounting numbers in pricing risk beyond their role as trackers of returns-based measures of risk.
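A minimal sketch of the single-factor construction, assuming hypothetical ROE series and market premium: regress firm ROE on market-wide ROE to obtain an accounting beta, then map it to a cost of equity via a CAPM-style relation (an illustration only, not the authors' full out-of-sample estimation):

```python
import numpy as np

# Hypothetical annual ROE series for the firm and the market.
rng = np.random.default_rng(10)
years = 25
market_roe = rng.normal(0.12, 0.05, years)
firm_roe = 0.02 + 0.9 * market_roe + rng.normal(0.0, 0.03, years)

# Accounting beta: slope of firm ROE on market-wide ROE.
X = np.column_stack([np.ones(years), market_roe])
_, beta_acc = np.linalg.lstsq(X, firm_roe, rcond=None)[0]

# Fundamentals-based cost of equity via a CAPM-style relation
# (risk-free rate and equity premium are assumed values).
rf, premium = 0.03, 0.05
print(f"accounting beta: {beta_acc:.2f}, cost of equity: {rf + beta_acc * premium:.3f}")
```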

