CONIC PORTFOLIO THEORY

2016 ◽  
Vol 19 (03) ◽  
pp. 1650019 ◽  
Author(s):  
DILIP B. MADAN

Portfolios are designed to maximize a conservative market value or bid price for the portfolio. Theoretically, this bid price is modeled as reflecting a convex cone of acceptable risks supporting an arbitrage-free equilibrium of a two-price economy. When risk acceptability is completely defined by the risk distribution function and bid prices are additive for comonotone risks, these prices may be evaluated by a distorted expectation. The concavity of the distortion calibrates market risk attitudes. Procedures are outlined for observing the economic magnitudes of the diversification benefits reflected in conservative valuation schemes. Optimal portfolios are formed for long-only, long-short, and volatility-constrained portfolios. Comparison with mean-variance portfolios shows lower concentration in conic portfolios, which have comparable out-of-sample upside performance coupled with higher downside outcomes. Additionally, the optimization problems are robust, employing directionally sensitive risk measures that are in the same units as the rewards. A further contribution is the ability to construct volatility-constrained portfolios that attractively combine other dimensions of risk with rewards.
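The distorted expectation behind the bid price can be illustrated by applying a concave distortion to an empirical distribution; a minimal sketch, assuming the MINMAXVAR distortion family from this literature and a hypothetical stress level `gamma` (not calibrated to any market):

```python
def minmaxvar(u, gamma):
    """MINMAXVAR distortion: concave on [0, 1] for gamma > 0, identity at gamma = 0."""
    return 1.0 - (1.0 - u ** (1.0 / (1.0 + gamma))) ** (1.0 + gamma)

def bid_price(outcomes, gamma):
    """Distorted expectation of an equally likely outcome list.

    Sorting ascending puts the worst outcomes first; a concave distortion
    assigns them the largest probability increments, yielding a conservative
    (bid) valuation that lies below the ordinary mean.
    """
    xs = sorted(outcomes)
    n = len(xs)
    return sum(x * (minmaxvar((i + 1) / n, gamma) - minmaxvar(i / n, gamma))
               for i, x in enumerate(xs))
```

At `gamma = 0` the distortion is the identity and the bid price collapses to the sample mean; raising `gamma` lowers the bid, which is how the distortion's concavity expresses market risk aversion.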

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yu Shi ◽  
Xia Zhao ◽  
Fengwei Jiang ◽  
Yipin Zhu

This paper studies stable portfolios under mean-variance-CVaR criteria for high-dimensional data. Combining different estimators of the covariance matrix, computational methods for CVaR, and regularization methods, we construct five progressively refined optimization problems with short selling allowed. The impacts of the different methods on the out-of-sample performance of the portfolios are compared. Results show that the optimization model combining a well-conditioned and sparse covariance estimator, the quantile-regression computational method for CVaR, and a reweighted L1 norm performs best: it stabilizes the out-of-sample performance of the solution and also encourages a sparse portfolio.
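For reference, the simplest of the CVaR computations being compared is the plain empirical estimator, which averages the worst (1 − α) fraction of sample losses; a sketch of that sample-average form (not the quantile-regression estimator the paper favors):

```python
def empirical_cvar(losses, alpha=0.95):
    """Empirical CVaR: average of the worst (1 - alpha) fraction of losses."""
    xs = sorted(losses, reverse=True)            # largest losses first
    k = max(1, round((1.0 - alpha) * len(xs)))   # size of the tail sample
    return sum(xs[:k]) / k
```

With 100 loss observations and α = 0.95 this averages the five largest losses, which is why CVaR estimates in high dimensions rest on very few observations and benefit from the stabilization methods the paper studies.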


2019 ◽  
Vol 5 (1) ◽  
pp. 9-20
Author(s):  
Denis Dolinar ◽  
Davor Zoričić ◽  
Zrinka Lovretin Golubić

In the field of portfolio management, the focus has been on the out-of-sample estimation of the covariance matrix, mainly because the estimation of expected return is much more challenging. However, recent research efforts have tried not only to improve the estimation of risk parameters by expanding the analysis beyond the mean-variance setting, but also to test whether risk measures can be used as proxies for the expected return in the stock market. In this research, we test the standard deviation (a measure of total volatility) and the semi-deviation (a measure of downside risk) as proxies for the expected market return in the illiquid and undeveloped Croatian stock market in the period from January 2005 until November 2017. In such an environment, the application of the proposed methodology yielded poor results, which helps explain the failure of the out-of-sample estimation of the maximum Sharpe ratio portfolio in earlier research on the Croatian equity market.
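The two candidate proxies can be computed directly from a return series; a minimal sketch of the below-mean semi-deviation alongside the standard deviation (the paper's exact estimation windows and market data are not reproduced):

```python
import math
from statistics import mean, pstdev

def semi_deviation(returns):
    """Downside risk: square root of the average squared below-mean deviation."""
    mu = mean(returns)
    downside = [(r - mu) ** 2 for r in returns if r < mu]
    return math.sqrt(sum(downside) / len(returns))
```

For a symmetric return series the semi-deviation is simply the standard deviation divided by the square root of two; it is skewness in the returns that makes the two risk proxies diverge and gives the downside measure independent content.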


2019 ◽  
Vol 25 (3) ◽  
pp. 282-291
Author(s):  
Indana Lazulfa

Portfolio optimization is the process of allocating capital among a universe of assets to achieve a better risk–return trade-off. It offers investors a way to make returns as large as possible while keeping risk as small as possible. Due to the dynamic nature of financial markets, the portfolio needs to be rebalanced to retain the desired risk-return characteristics. This study proposes a multi-objective portfolio optimization model with risk and return as the objective functions, using the mean-variance model as the risk measure. The resulting optimization problems are solved with the Firefly Algorithm (FA).
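A minimal sketch of searching a scalarized mean-variance objective with the Firefly Algorithm; the toy expected returns, covariance matrix, and FA parameters below are illustrative assumptions, not the study's data:

```python
import math
import random

random.seed(0)

# Illustrative inputs: expected returns and covariance for three assets
MU = [0.08, 0.12, 0.10]
COV = [[0.04, 0.01, 0.00],
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.06]]
LAM = 3.0  # risk-aversion weight scalarizing the two objectives

def normalize(w):
    """Project onto the long-only simplex: clip negatives, rescale to sum one."""
    w = [max(x, 0.0) for x in w]
    s = sum(w)
    if s == 0.0:
        return [1.0 / len(w)] * len(w)  # fall back to equal weights
    return [x / s for x in w]

def objective(w):
    """Scalarized mean-variance objective: minimize LAM * variance - return."""
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * COV[i][j] * w[j]
              for i in range(len(w)) for j in range(len(w)))
    return LAM * var - ret

def firefly(n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.1):
    """Firefly Algorithm: dimmer fireflies move toward brighter (lower-cost) ones."""
    pop = [normalize([random.random() for _ in MU]) for _ in range(n)]
    for _ in range(iters):
        fit = [objective(w) for w in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # j is brighter: attract i toward it
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = normalize([
                        a + beta * (b - a) + alpha * (random.random() - 0.5)
                        for a, b in zip(pop[i], pop[j])])
                    fit[i] = objective(pop[i])
    return min(pop, key=objective)
```

The attractiveness term `beta0 * exp(-gamma * r2)` decays with distance, so fireflies are pulled mostly by nearby brighter solutions, while the `alpha` noise term keeps the swarm exploring.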


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 111
Author(s):  
Hyungbin Park

This paper proposes modified mean-variance risk measures for long-term investment portfolios. Two types of portfolios are considered: constant proportion portfolios and increasing amount portfolios. Both are widely used in finance for investing in assets and developing derivative securities. We compare the long-term behavior of the conventional mean-variance risk measure with that of the modified one for the two types of portfolios, and we discuss the benefits of the modified measure. Subsequently, an optimal long-term investment strategy is derived. We show that the modified risk measure reflects the investor's risk aversion in the optimal long-term investment strategy, whereas the conventional one does not. Several factor models are discussed as concrete examples: the Black–Scholes model, the Kim–Omberg model, the Heston model, and the 3/2 stochastic volatility model.
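A constant proportion portfolio under the Black–Scholes model can be simulated directly; a sketch with arbitrary illustrative drift, volatility, rate, and horizon (the paper's modified risk measure itself is not reproduced):

```python
import math
import random

mu, sigma, r = 0.08, 0.20, 0.02   # assumed risky drift, volatility, risk-free rate
dt, T = 1.0 / 252.0, 10.0         # daily steps over a 10-year horizon
steps = int(T / dt)

def constant_proportion_wealth(pi, w0=1.0, seed=1):
    """Terminal wealth holding a constant fraction pi in the risky asset.

    Each step the portfolio is rebalanced so a fraction pi of wealth earns the
    risky return and the remainder earns the risk-free rate.
    """
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        risky = mu * dt + sigma * math.sqrt(dt) * z
        w *= 1.0 + pi * risky + (1.0 - pi) * r * dt
    return w
```

Averaging many such paths gives the terminal mean and variance whose long-horizon behavior the conventional and modified measures summarize differently; with `pi = 0` the simulation reduces to deterministic risk-free compounding.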


2015 ◽  
Vol 50 (6) ◽  
pp. 1415-1441 ◽  
Author(s):  
Shingo Goto ◽  
Yan Xu

In portfolio risk minimization, the inverse covariance matrix prescribes the hedge trades in which a stock is hedged by all the other stocks in the portfolio. In practice with finite samples, however, multicollinearity makes the hedge trades too unstable and unreliable. By shrinking trade sizes and reducing the number of stocks in each hedge trade, we propose a "sparse" estimator of the inverse covariance matrix. Comparing favorably with other methods (equal weighting, shrunk covariance matrix, industry factor model, nonnegativity constraints), a portfolio formed on the proposed estimator achieves significant out-of-sample risk reduction and improves certainty-equivalent returns after transaction costs.
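The role the inverse covariance matrix plays here can be seen in the unconstrained minimum-variance weights, which are proportional to the inverse covariance matrix applied to a vector of ones; a stdlib-only sketch via Gaussian elimination (the paper's L1-sparsified estimator of the inverse is not reproduced):

```python
def solve(a, b):
    """Solve a x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(m[row][col]))
        m[col], m[piv] = m[piv], m[col]
        for row in range(n):
            if row != col:
                f = m[row][col] / m[col][col]
                m[row] = [x - f * y for x, y in zip(m[row], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def min_variance_weights(cov):
    """Unconstrained minimum-variance portfolio: w proportional to cov^-1 times 1."""
    x = solve(cov, [1.0] * len(cov))
    s = sum(x)
    return [xi / s for xi in x]
```

Row i of the inverse encodes the regression of stock i on all the others, which is exactly the hedge trade the abstract describes; when the sample covariance is nearly collinear those rows explode, motivating the sparse estimator.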


2021 ◽  
Author(s):  
Paul Embrechts ◽  
Alexander Schied ◽  
Ruodu Wang

We study issues of robustness in the context of Quantitative Risk Management and Optimization. We develop a general methodology for determining whether a given risk-measurement-related optimization problem is robust, which we call "robustness against optimization." The new notion is studied for various classes of risk measures and expected utility and loss functions. Motivated by practical issues from financial regulation, special attention is given to the two most widely used risk measures in the industry, Value-at-Risk (VaR) and Expected Shortfall (ES). We establish that for a class of general optimization problems, VaR leads to nonrobust optimizers, whereas convex risk measures generally lead to robust ones. Our results offer additional insight into the ongoing discussion about the comparative advantages of VaR and ES in banking and insurance regulation. Our notion of robustness is conceptually different from that of the field of robust optimization, to which some interesting links are derived.


2009 ◽  
Vol 84 (6) ◽  
pp. 1983-2011 ◽  
Author(s):  
Alexander Nekrasov ◽  
Pervin K. Shroff

We propose a methodology to incorporate risk measures based on economic fundamentals directly into the valuation model. Fundamentals-based risk adjustment in the residual income valuation model is captured by the covariance of ROE with market-wide factors. We demonstrate a method of estimating covariance risk out of sample based on the accounting beta and betas of size and book-to-market factors in earnings. We show how the covariance risk estimate can be transformed to obtain the fundamentals-based cost of equity. Our empirical analysis shows that value estimates based on fundamental risk adjustment produce significantly smaller deviations from price relative to the CAPM or the Fama-French three-factor model. We further find that our single-factor risk measure, based on the accounting beta alone, captures aspects of risk that are indicated by the book-to-market factor, largely accounting for the "mispricing" of value and growth stocks. Our study highlights the usefulness of accounting numbers in pricing risk beyond their role as trackers of returns-based measures of risk.
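The valuation framework the risk adjustment feeds into is the residual income model; a minimal sketch assuming clean-surplus accounting with full earnings retention (the fundamentals-based cost of equity the paper derives would be supplied as `r` below):

```python
def residual_income_value(book0, roe_forecasts, r):
    """Residual income valuation: book value plus discounted residual income.

    Residual income in year t is (ROE_t - r) times beginning book value; book
    value rolls forward under clean surplus with all earnings retained.
    """
    value, book = book0, book0
    for t, roe in enumerate(roe_forecasts, start=1):
        value += (roe - r) * book / (1.0 + r) ** t
        book *= 1.0 + roe
    return value
```

When forecast ROE equals the cost of equity, residual income is zero and the firm is worth exactly its book value; a fundamentals-based risk adjustment changes `r` and hence the entire discount of future residual income.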


2021 ◽  
Author(s):  
Behnam Malakooti ◽  
Mohamed Komaki ◽  
Camelia Al-Najjar

Many studies have spotlighted significant applications of expected utility theory (EUT), cumulative prospect theory (CPT), and mean-variance in assessing risks. We illustrate that these models and their extensions are unable to predict risk behaviors accurately in out-of-sample empirical studies. EUT uses a nonlinear value (utility) function of consequences but is linear in probabilities, which has been criticized as its primary weakness. Although mean-variance is nonlinear in probabilities, it is symmetric, contradicts first-order stochastic dominance, and uses the same standard deviation for both risk aversion and risk proneness. In this paper, we explore a special case of geometric dispersion theory (GDT) that is simultaneously nonlinear in both consequences and probabilities. It complies with first-order stochastic dominance and is asymmetric, representing the mixed risk-averse and risk-prone behaviors of decision makers. GDT is a triad model that uses expected value, risk-averse dispersion, and risk-prone dispersion. GDT uses only two parameters, z and zX; these constants remain the same regardless of the scale of the risk problem. We compare GDT to several other risk dispersion models that are based on EUT and/or mean-variance, and identify verified risk paradoxes that contradict EUT, CPT, and mean-variance but are easily explainable by GDT. We demonstrate that GDT predicts out-of-sample empirical risk behaviors far more accurately than EUT, CPT, mean-variance, and other risk dispersion models. We also discuss the underlying assumptions, meanings, and perspectives of GDT and how it reflects risk relativity and the risk triad. This paper covers basic GDT, which is a special case of the general GDT of Malakooti [Malakooti (2020) Geometric dispersion theory of decision making under risk: Generalizing EUT, RDEU, & CPT with out-of-sample empirical studies. Working paper, Case Western Reserve University, Cleveland.].


Risks ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Andrea Rigamonti

Mean-variance portfolio optimization is more popular than optimization procedures that employ downside risk measures such as the semivariance, despite the latter being more in line with the preferences of a rational investor. We describe strengths and weaknesses of semivariance and how to minimize it for asset allocation decisions. We then apply this approach to a variety of simulated and real data and show that the traditional approach based on the variance generally outperforms it. The results hold even if the CVaR is used, because all downside risk measures are difficult to estimate. The popularity of variance as a measure of risk appears therefore to be rationally justified.
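The semivariance minimization being compared against the variance can be sketched for a two-asset, long-only case with a simple grid search (the return series below are illustrative; real applications use numerical optimizers and estimated inputs):

```python
def semivariance(returns, target=0.0):
    """Average squared shortfall of returns below the target."""
    return sum(min(r - target, 0.0) ** 2 for r in returns) / len(returns)

def min_semivariance_weight(r1, r2, grid=101):
    """Weight on asset 1 (two assets, long-only) minimizing portfolio semivariance."""
    candidates = [g / (grid - 1) for g in range(grid)]
    return min(candidates, key=lambda w: semivariance(
        [w * a + (1.0 - w) * b for a, b in zip(r1, r2)]))
```

Because only below-target observations enter the objective, the effective sample behind a semivariance estimate is much smaller than for the variance, which is exactly the estimation difficulty the paper identifies for all downside risk measures.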


2020 ◽  
Vol 52 (1) ◽  
pp. 61-101
Author(s):  
Daniel Lacker

This work is devoted to a vast extension of Sanov’s theorem, in Laplace principle form, based on alternatives to the classical convex dual pair of relative entropy and cumulant generating functional. The abstract results give rise to a number of probabilistic limit theorems and asymptotics. For instance, widely applicable non-exponential large deviation upper bounds are derived for empirical distributions and averages of independent and identically distributed samples under minimal integrability assumptions, notably accommodating heavy-tailed distributions. Other interesting manifestations of the abstract results include new results on the rate of convergence of empirical measures in Wasserstein distance, uniform large deviation bounds, and variational problems involving optimal transport costs, as well as an application to error estimates for approximate solutions of stochastic optimization problems. The proofs build on the Dupuis–Ellis weak convergence approach to large deviations as well as the duality theory for convex risk measures.
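For orientation, the classical result being extended can be stated as follows (standard statement of Sanov's theorem in Laplace principle form; notation is the usual one, not the paper's):

$$\lim_{n\to\infty} \frac{1}{n}\,\log \mathbb{E}\!\left[e^{\,n F(L_n)}\right] \;=\; \sup_{\nu}\,\bigl\{\,F(\nu) - H(\nu \,\|\, \mu)\,\bigr\},$$

where $L_n$ is the empirical measure of $n$ i.i.d. samples from $\mu$, $H(\cdot\,\|\,\mu)$ is the relative entropy, and $F$ is a bounded continuous function of measures. The extension replaces the convex dual pair formed by the relative entropy and the cumulant generating functional with alternative pairs, which is what yields the non-exponential bounds.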

