Portfolio optimization with optimal expected utility risk measures

Author(s):  
S. Geissel ◽  
H. Graf ◽  
J. Herbinger ◽  
F. T. Seifried

The purpose of this article is to evaluate optimal expected utility (OEU) risk measures in a risk-constrained portfolio optimization context where the expected portfolio return is maximized. We compare portfolio optimization with an OEU constraint to a portfolio selection model using value at risk as the constraint. OEU is a coherent risk measure for utility functions with constant relative risk aversion and allows the investor's individual risk attitude and time preference to be specified. In a case study with three indices, we investigate how these theoretical differences influence the performance of the portfolio selection strategies. A copula approach with univariate ARMA-GARCH models is used in a rolling forecast to simulate monthly future returns and calculate the derived measures for the optimization. The results of this study illustrate that both optimization strategies perform considerably better than an equally weighted portfolio and a buy-and-hold portfolio. Moreover, our results illustrate that portfolio optimization with an OEU constraint produces individualized effects; e.g., less risk-averse investors lose more portfolio value in financial crises but outperform their more risk-averse counterparts in bull markets.
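As a small illustration of the value-at-risk constraint used as the benchmark above, the sketch below estimates a monthly 95% VaR from simulated returns. Plain i.i.d. normal draws stand in for the paper's copula/ARMA-GARCH simulations, and all parameters are illustrative assumptions:

```python
import numpy as np

def value_at_risk(returns, alpha=0.95):
    """Empirical value at risk: the loss threshold exceeded
    with probability at most 1 - alpha."""
    losses = -np.asarray(returns)
    return np.quantile(losses, alpha)

# Stand-in for the paper's copula/ARMA-GARCH simulation of monthly
# returns: plain i.i.d. normal draws with illustrative parameters.
rng = np.random.default_rng(0)
simulated = rng.normal(loc=0.005, scale=0.04, size=10_000)

var_95 = value_at_risk(simulated, alpha=0.95)  # monthly 95% VaR estimate
```

In the risk-constrained optimization, a value like `var_95` would bound the admissible portfolios rather than serve as the objective itself.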

2021 ◽  
Vol 14 (5) ◽  
pp. 201
Author(s):  
Yuan Hu ◽  
W. Brent Lindquist ◽  
Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and the risk measures: maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than those required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
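The two performance attributes can be illustrated with a single-period Brinson–Fachler-style calculation. This is one common convention; the paper's exact AA/SE definitions, benchmarks, and weights may differ, and the numbers below are purely illustrative:

```python
import numpy as np

def brinson_attribution(w_p, w_b, r_p, r_b):
    """Single-period Brinson-Fachler attribution (one common convention;
    a paper's exact AA/SE definitions may differ in detail)."""
    r_bench = np.dot(w_b, r_b)               # total benchmark return
    aa = (w_p - w_b) * (r_b - r_bench)       # asset-allocation effect
    se = w_b * (r_p - r_b)                   # selection effect
    return aa.sum(), se.sum()

# Illustrative two-asset weights and returns.
w_p = np.array([0.6, 0.4])    # portfolio weights
w_b = np.array([0.5, 0.5])    # benchmark weights
r_p = np.array([0.04, 0.01])  # portfolio asset returns
r_b = np.array([0.03, 0.02])  # benchmark asset returns

aa, se = brinson_attribution(w_p, w_b, r_p, r_b)
```

Used as constraints, thresholds on `aa` and `se` restrict the feasible weight vectors in the CVaR minimization.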


2010 ◽  
Vol 13 (03) ◽  
pp. 425-437 ◽  
Author(s):  
IMRE KONDOR ◽  
ISTVÁN VARGA-HASZONITS

It is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. This instability was first discovered in the special example of Expected Shortfall, which is used here both as an illustration and as a springboard for generalization.
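A minimal numerical sketch of this instability: when one return series dominates another in every sample point, the in-sample Expected Shortfall of the long–short position decreases without bound as the position is scaled up, so no finite optimum exists. The data here are synthetic and purely illustrative:

```python
import numpy as np

def expected_shortfall(returns, alpha=0.95):
    """Historical expected shortfall: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(-np.asarray(returns))[::-1]
    k = max(1, int(np.ceil(len(losses) * (1 - alpha))))
    return losses[:k].mean()

rng = np.random.default_rng(1)
b = rng.normal(0.0, 0.02, size=500)
a = b + 0.001   # a dominates b in every sample point

# Scaling up a long-a / short-b position drives the in-sample ES
# down without bound, so no finite optimum exists on this sample.
es_small = expected_shortfall(10 * (a - b))
es_large = expected_shortfall(1000 * (a - b))
```

Since ES is positively homogeneous, the in-sample risk of the spread position is linear in the scale, so it can be made arbitrarily negative.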


Author(s):  
Jamie Fairbrother ◽  
Amanda Turner ◽  
Stein W. Wallace

Scenario generation is the construction of a discrete random vector to represent the uncertain parameters of a stochastic program. Most approaches to scenario generation are distribution-driven; that is, they attempt to construct a random vector which captures the uncertainty well in a probabilistic sense. A problem-driven approach, on the other hand, may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. In this paper, we propose an analytic approach to problem-driven scenario generation. This approach applies to stochastic programs where a tail risk measure, such as conditional value-at-risk, is applied to a loss function. Since tail risk measures depend only on the upper tail of a distribution, standard methods of scenario generation, which typically spread their scenarios evenly across the support of the random vector, struggle to represent tail risk adequately. Our scenario generation approach works by targeting the construction of scenarios in the areas of the distribution corresponding to the tails of the loss distributions. We provide conditions under which our approach is consistent with sampling, and as a proof of concept we demonstrate how it could be applied to two classes of problem, namely network design and portfolio selection. Numerical tests on the portfolio selection problem demonstrate that our approach yields better and more stable solutions than standard Monte Carlo sampling.


2016 ◽  
Vol 19 (05) ◽  
pp. 1650035 ◽  
Author(s):  
FABIO CACCIOLI ◽  
IMRE KONDOR ◽  
MATTEO MARSILI ◽  
SUSANNE STILL

We show that including a term which accounts for finite liquidity in portfolio optimization naturally mitigates the instabilities that arise in the estimation of coherent risk measures on finite samples. This is because taking into account the impact of trading in the market is mathematically equivalent to introducing a regularization on the risk measure. We show here that the impact function determines which regularizer is to be used. We also show that any regularizer based on the norm Lp with p > 1 makes the sensitivity of coherent risk measures to estimation error disappear, while regularizers with p < 1 do not. The L1 norm represents a border case: its “soft” implementation does not remove the instability, but rather shifts its locus, whereas its “hard” implementation (including hard limits or a ban on short selling) eliminates it. We demonstrate these effects on the important special case of expected shortfall (ES), which has recently become the global regulatory market risk measure.
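The stabilizing role of a regularizer can be seen in a toy computation: without a penalty, the in-sample ES of a scaled dominating position keeps decreasing, while adding an L2 (ridge) term restores a finite minimizer. The penalty weight and scenario generator are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def expected_shortfall(returns, alpha=0.95):
    """Historical expected shortfall: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(-np.asarray(returns))[::-1]
    k = max(1, int(np.ceil(len(losses) * (1 - alpha))))
    return losses[:k].mean()

rng = np.random.default_rng(3)
# A position whose returns dominate in the sample (illustrative numbers).
spread = rng.normal(0.003, 0.0005, size=400)

scales = np.linspace(0.0, 200.0, 2001)
es_vals = np.array([expected_shortfall(s * spread) for s in scales])

unregularized = es_vals                   # keeps decreasing as the scale grows
ridge = es_vals + 0.001 * scales ** 2     # L2 penalty restores a finite optimum
```

The unregularized objective attains its minimum at the edge of the grid (and would keep falling beyond it), whereas the ridge-penalized objective has an interior minimizer.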


Author(s):  
Kei Nakagawa ◽  
Shuhei Noma ◽  
Masaya Abe

The problem of finding the optimal portfolio for an investor is called the portfolio optimization problem. This problem mainly concerns the expectation and variability of return (i.e., mean and variance). Although variance is the most fundamental risk measure to be minimized, it has several drawbacks. Conditional Value-at-Risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of the well-known variance-related risk measures and, because of its computational efficiency, has gained popularity. CVaR is defined as the expected value of the loss that occurs beyond a certain probability level (β). However, portfolio optimization problems that use CVaR as a risk measure are formulated with a single β and may output significantly different portfolios depending on how β is selected. We confirm that even small changes in β can result in large changes in the whole portfolio structure. To address this problem, we propose RM-CVaR: Regularized Multiple β-CVaR Portfolio. We perform experiments on well-known benchmarks to evaluate the proposed portfolio. Compared with various portfolios, RM-CVaR demonstrates superior performance, having both higher risk-adjusted returns and lower maximum drawdown.
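For concreteness, single-β CVaR portfolio optimization is commonly cast as the Rockafellar–Uryasev linear program. The sketch below is the plain single-β long-only formulation, not the RM-CVaR model; the scenario matrix and β are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(R, beta=0.95):
    """Rockafellar-Uryasev LP: long-only weights minimizing CVaR_beta of the
    portfolio loss, given an (N scenarios x n assets) return matrix R."""
    N, n = R.shape
    # Decision vector x = (w_1..w_n, zeta, u_1..u_N).
    c = np.concatenate([np.zeros(n), [1.0], np.ones(N) / ((1 - beta) * N)])
    # u_i >= -R_i.w - zeta  rewritten as  -R_i.w - zeta - u_i <= 0
    A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])
    b_ub = np.zeros(N)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)])[None, :]  # sum w = 1
    bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * N
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(2)
# Toy scenarios: asset 0 is low-risk, asset 1 is high-risk.
R = np.column_stack([rng.normal(0.003, 0.01, 200),
                     rng.normal(0.006, 0.06, 200)])
w = min_cvar_weights(R, beta=0.95)  # most weight lands on the low-risk asset
```

Re-running this with a slightly different `beta` can shift the weights noticeably, which is the sensitivity the RM-CVaR approach is designed to regularize.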


2016 ◽  
Vol 33 (1-2) ◽  
Author(s):  
Edgars Jakobsons

The statistical functional expectile has recently attracted the attention of researchers in the area of risk management because it is the only risk measure that is both coherent and elicitable. In this article, we consider the portfolio optimization problem with an expectile objective. Portfolio optimization problems corresponding to other risk measures are often solved by formulating a linear program (LP) based on a sample of asset returns. We derive three different LP formulations for the portfolio expectile optimization problem, which can be considered counterparts to the LP formulations for the Conditional Value-at-Risk (CVaR) objective in the works of Rockafellar and Uryasev.
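For reference, the sample expectile itself can be computed directly as the minimizer of an asymmetric squared loss. The article's LP formulations concern the full portfolio problem; this scalar sketch only illustrates the risk functional, on illustrative data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expectile(x, tau=0.8):
    """Sample tau-expectile: the minimizer of the asymmetric squared loss
    tau*(x-e)_+^2 + (1-tau)*(e-x)_+^2 averaged over the sample."""
    x = np.asarray(x, dtype=float)

    def loss(e):
        d = x - e
        return np.mean(tau * np.maximum(d, 0.0) ** 2
                       + (1.0 - tau) * np.maximum(-d, 0.0) ** 2)

    return minimize_scalar(loss, bounds=(x.min(), x.max()), method="bounded").x

data = np.array([-0.03, -0.01, 0.0, 0.01, 0.02, 0.05])  # illustrative returns
e_half = expectile(data, tau=0.5)   # tau = 1/2 recovers the sample mean
e_high = expectile(data, tau=0.9)   # larger tau weights the upper tail
```

The loss is convex in `e`, which is what makes LP (and simple scalar) formulations of expectile optimization tractable.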


Author(s):  
RENATO PELESSONI ◽  
PAOLO VICIG

In this paper, the theory of coherent imprecise previsions is applied to risk measurement. We introduce the notion of a coherent risk measure defined on an arbitrary set of risks, showing that it can be considered a special case of a coherent upper prevision. We also prove that our definition generalizes the notion of coherence for risk measures defined on a linear space of random numbers given in the literature. Consistency properties of Value-at-Risk (VaR), currently one of the most widely used risk measures, are investigated too, showing that it does not necessarily satisfy a weaker notion of consistency called 'avoiding sure loss'. We introduce sufficient conditions for VaR to avoid sure loss and to be coherent. Finally, we discuss ways of modifying incoherent risk measures into coherent ones.


Filomat ◽  
2018 ◽  
Vol 32 (3) ◽  
pp. 991-1001
Author(s):  
Shokoofeh Banihashemi ◽  
Ali Azarpour ◽  
Marziye Kaveh

This paper addresses the portfolio-selection problem using a multi-objective model with four parameters: expected return, downside beta coefficient, semivariance, and conditional value at risk at a specified confidence level. Multi-period models can be defined as stochastic models. Early studies on portfolio selection used variance as the risk measure; however, theory and practice revealed that variance, given its drawbacks, is not a desirable risk measure. To increase accuracy and overcome the negative aspects of variance, downside risk measures such as semivariance, downside beta, value at risk, and conditional value at risk replaced it in later models. These risk measures all have advantages over variance, and previous works using these parameters have shown improvements in selecting the best portfolio. The proposed models are solved using a genetic algorithm, and to complete the treatment, a numerical example and plots measuring the performance of the model in four dimensions are provided.


2018 ◽  
Vol 43 (2) ◽  
pp. 554-579 ◽  
Author(s):  
Daniel R. Jiang ◽  
Warren B. Powell

In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose data-driven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage. The online appendix is available at https://doi.org/10.1287/moor.2017.0872.


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 138
Author(s):  
Wei Liu ◽  
Yang Liu

Tail risk management is of great significance in the investment process. As an extension of the asymmetric tail risk measure Conditional Value at Risk (CVaR), higher moment coherent risk (HMCR) is compatible with the higher-moment information (skewness and kurtosis) of the probability distribution of asset returns and captures distributional asymmetry. To overcome the difficulties arising from the asymmetry and ambiguity of the underlying distribution, we propose a Wasserstein distributionally robust mean-HMCR portfolio optimization model based on the kernel smoothing method and optimal transport, where the ambiguity set is defined as a Wasserstein “ball” around the empirical distribution in the weighted kernel density estimation (KDE) distribution function family. Leveraging Fenchel's duality theory, we obtain computationally tractable DCP (difference-of-convex programming) reformulations and show that the ambiguity version preserves the asymmetry of the HMCR measure. Preliminary empirical test results for portfolio selection demonstrate the efficiency of the proposed model.

