Optimizing Expected Shortfall under an ℓ1 Constraint—An Analytic Approach

Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 523
Author(s):  
Gábor Papp ◽  
Imre Kondor ◽  
Fabio Caccioli

Expected Shortfall (ES), the average loss above a high quantile, is the current financial regulatory market risk measure. Its estimation and optimization are highly unstable against sample fluctuations and become impossible above a critical ratio r=N/T, where N is the number of different assets in the portfolio and T is the length of the available time series. The critical ratio depends on the confidence level α, which means we have a line of critical points on the α−r plane. The large fluctuations in the estimation of ES can be attenuated by the application of regularizers. In this paper, we calculate ES analytically under an ℓ1 regularizer by the method of replicas borrowed from the statistical physics of random systems. The ban on short selling, i.e., a constraint rendering all the portfolio weights non-negative, is a special case of an asymmetric ℓ1 regularizer. Results are presented for the out-of-sample and the in-sample estimators of the regularized ES, the estimation error, the distribution of the optimal portfolio weights, and the density of the assets eliminated from the portfolio by the regularizer. It is shown that the no-short constraint acts as a high-volatility cutoff, in the sense that it sets the weights of the high-volatility elements to zero with higher probability than those of the low-volatility items. This cutoff renormalizes the aspect ratio r=N/T, thereby extending the range of the feasibility of optimization. We find that there is a nontrivial mapping between the regularized and unregularized problems, corresponding to a renormalization of the order parameters.
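
As a rough illustration of the optimization problem being studied (though not of the paper's replica-method solution), the sketch below minimizes the sample ES with an ℓ1 penalty by recasting it as the standard Rockafellar-Uryasev linear program and handing it to scipy; the synthetic returns, the confidence level and the penalty strength are illustrative assumptions.

```python
# A minimal sketch: l1-regularized Expected Shortfall minimization written as
# the Rockafellar-Uryasev linear program.  Not the paper's analytic (replica)
# calculation; data, alpha and lambda are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, T = 10, 250                                # assets, observations (aspect ratio r = N/T)
R = rng.normal(0.0005, 0.01, size=(T, N))     # synthetic return scenarios
alpha, lam = 0.975, 0.01                      # ES confidence level, l1 penalty strength

# variables x = [w_plus (N), w_minus (N), zeta (1), u (T)],  w = w_plus - w_minus
c = np.concatenate([lam * np.ones(2 * N), [1.0], np.ones(T) / ((1 - alpha) * T)])

# scenario constraints:  -R_t.w - zeta - u_t <= 0
A_ub = np.hstack([-R, R, -np.ones((T, 1)), -np.eye(T)])
b_ub = np.zeros(T)

# budget constraint:  sum(w) = 1
A_eq = np.concatenate([np.ones(N), -np.ones(N), [0.0], np.zeros(T)])[None, :]
b_eq = np.array([1.0])

bounds = [(0, None)] * (2 * N) + [(None, None)] + [(0, None)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
assert res.success, res.message

w = res.x[:N] - res.x[N:2 * N]
print("optimal weights:", np.round(w, 4))
print("in-sample ES of the regularized portfolio:", res.fun - lam * np.abs(w).sum())
```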

Author(s):  
Imre Kondor ◽  
Fabio Caccioli ◽  
Gabor Papp ◽  
Matteo Marsili

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245904
Author(s):  
Viviane Naimy ◽  
Omar Haddad ◽  
Gema Fernández-Avilés ◽  
Rim El Khoury

This paper provides a thorough overview and further clarification of the volatility behavior of the six major cryptocurrencies (Bitcoin, Ripple, Litecoin, Monero, Dash and Dogecoin) with respect to world currencies (Euro, British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and the Japanese Yen), the relative performance of diverse GARCH-type specifications, namely the SGARCH, IGARCH (1,1), EGARCH (1,1), GJR-GARCH (1,1), APARCH (1,1), TGARCH (1,1) and CGARCH (1,1), and the forecasting performance of the Value at Risk measure. The sampled period extends from October 13th 2015 to November 18th 2019. The findings evidenced the superiority of the IGARCH model, in both the in-sample and the out-of-sample contexts, in forecasting the volatility of world currencies, namely the British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and the Japanese Yen. The CGARCH alternative modeled the Euro almost perfectly during both periods. Advanced GARCH models better depicted asymmetries in cryptocurrencies’ volatility and revealed persistence and “intensifying” levels in their volatility. The IGARCH was the best performing model for Monero. As for the remaining cryptocurrencies, the GJR-GARCH model proved to be superior during the in-sample period, while the CGARCH and TGARCH specifications were the optimal ones in the out-of-sample interval. The VaR forecasting performance is enhanced with the use of the asymmetric GARCH models. The VaR results provided a very accurate measure of the level of downside risk to which the selected exchange currencies are exposed at all confidence levels. However, the outcomes were far from uniform for the selected cryptocurrencies: convincing for Dash and Dogecoin, acceptable for Litecoin and Monero, and unconvincing for Bitcoin and Ripple, for which the (optimal) model was not rejected only at the 99% confidence level.
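
For readers who want to see the mechanics, the following sketch filters a GJR-GARCH(1,1) conditional variance and converts the one-step-ahead volatility forecast into a Value-at-Risk figure. It is a minimal stand-in for the paper's estimation pipeline; the fixed parameter values, the normal quantile and the synthetic returns are assumptions.

```python
# A minimal sketch, not the paper's estimation pipeline: GJR-GARCH(1,1)
# variance filtering with fixed (assumed) parameters, turned into a
# one-step-ahead Value-at-Risk number.
import numpy as np
from scipy.stats import norm

def gjr_garch_var(returns, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.88,
                  level=0.99):
    """One-step-ahead VaR from a GJR-GARCH(1,1) with fixed parameters."""
    eps = returns - returns.mean()
    sigma2 = np.empty(len(eps) + 1)
    sigma2[0] = eps.var()                     # initialize at the sample variance
    for t in range(len(eps)):
        neg = 1.0 if eps[t] < 0 else 0.0      # leverage term reacts to bad news only
        sigma2[t + 1] = (omega + (alpha + gamma * neg) * eps[t] ** 2
                         + beta * sigma2[t])
    sigma_next = np.sqrt(sigma2[-1])          # one-step-ahead volatility forecast
    return returns.mean() + sigma_next * norm.ppf(1 - level)  # VaR as a return level

rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=1000) * 0.01    # synthetic fat-tailed daily returns
print("99% one-day VaR:", gjr_garch_var(r))
```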


2009 ◽  
Vol 84 (6) ◽  
pp. 1983-2011 ◽  
Author(s):  
Alexander Nekrasov ◽  
Pervin K. Shroff

ABSTRACT: We propose a methodology to incorporate risk measures based on economic fundamentals directly into the valuation model. Fundamentals-based risk adjustment in the residual income valuation model is captured by the covariance of ROE with market-wide factors. We demonstrate a method of estimating covariance risk out of sample based on the accounting beta and betas of size and book-to-market factors in earnings. We show how the covariance risk estimate can be transformed to obtain the fundamentals-based cost of equity. Our empirical analysis shows that value estimates based on fundamental risk adjustment produce significantly smaller deviations from price relative to the CAPM or the Fama-French three-factor model. We further find that our single-factor risk measure, based on the accounting beta alone, captures aspects of risk that are indicated by the book-to-market factor, largely accounting for the “mispricing” of value and growth stocks. Our study highlights the usefulness of accounting numbers in pricing risk beyond their role as trackers of returns-based measures of risk.
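
A minimal sketch of the idea of an accounting beta is given below: the firm's ROE is regressed on a market-wide ROE factor, and the slope is then mapped into a cost of equity using an assumed risk premium. The data and the single-factor mapping are illustrative assumptions rather than the paper's exact estimator.

```python
# A minimal sketch of an "accounting beta": OLS regression of firm ROE on a
# market-wide ROE factor.  The data and the cost-of-equity mapping are
# illustrative assumptions, not the paper's estimator.
import numpy as np

rng = np.random.default_rng(2)
T = 40                                        # periods of accounting data
roe_mkt = rng.normal(0.12, 0.04, T)           # market-wide ROE factor
roe_firm = 0.02 + 1.3 * roe_mkt + rng.normal(0, 0.03, T)   # synthetic firm ROE

X = np.column_stack([np.ones(T), roe_mkt])    # intercept + market factor
coef, *_ = np.linalg.lstsq(X, roe_firm, rcond=None)
accounting_beta = coef[1]

# a stylized fundamentals-based cost of equity: risk-free rate plus the
# accounting beta times an assumed market risk premium
risk_free, premium = 0.03, 0.05
print("accounting beta:", round(accounting_beta, 3))
print("cost of equity :", round(risk_free + accounting_beta * premium, 4))
```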


Author(s):  
Markus Haas ◽  
Ji-Chun Liu

We consider a multivariate Markov-switching GARCH model which allows for regime-specific volatility dynamics, leverage effects, and correlation structures. Conditions for stationarity and expressions for the moments of the process are derived. A Lagrange multiplier test against misspecification of the within-regime correlation dynamics is proposed, and a simple recursion for multi-step-ahead conditional covariance matrices is deduced. We use this methodology to model the dynamics of the joint distribution of global stock market and real estate equity returns. The empirical analysis highlights the importance of the conditional distribution in Markov-switching time series models. Specifications with Student’s t innovations dominate their Gaussian counterparts both in- and out-of-sample. The dominating specification appears to be a two-regime Student’s t process with correlations that are higher in the turbulent (high-volatility) regime.
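
The multi-step-ahead covariance recursion mentioned above can be illustrated in a stripped-down univariate form. The sketch below propagates regime probabilities through the transition matrix and updates each regime's GARCH(1,1) variance on the common innovation, in the spirit of Haas-type Markov-switching GARCH; all parameter values and starting states are illustrative assumptions.

```python
# A minimal univariate sketch of a multi-step-ahead variance recursion in a
# two-regime Markov-switching GARCH(1,1) where each regime keeps its own
# variance recursion on the common innovation.  Parameters are assumptions.
import numpy as np

P = np.array([[0.95, 0.05],                  # regime transition matrix (rows sum to 1)
              [0.10, 0.90]])
omega = np.array([1e-6, 5e-6])               # calm regime, turbulent regime
alpha = np.array([0.04, 0.10])
beta  = np.array([0.93, 0.85])

p = np.array([0.8, 0.2])                     # current regime probabilities
sig2 = np.array([1e-4, 4e-4])                # current regime-specific variances

horizon, forecasts = 10, []
for h in range(1, horizon + 1):
    e2 = p @ sig2                            # expected squared innovation, mixed over regimes
    sig2 = omega + alpha * e2 + beta * sig2  # regime-wise variance recursion
    p = p @ P                                # propagate regime probabilities one step
    forecasts.append(p @ sig2)               # h-step-ahead conditional variance

print(np.round(np.sqrt(forecasts), 5))       # forecast volatility term structure
```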


2016 ◽  
Vol 48 (2) ◽  
pp. 148-172 ◽  
Author(s):  
KUNLAPATH SUKCHAROEN ◽  
DAVID LEATHAM

One of the most popular risk management strategies for wheat producers is varietal diversification. Previous studies proposed a mean-variance model as a tool for optimally selecting wheat varieties. However, this study suggests that the mean-expected shortfall (ES) model (which is based on a downside risk measure) may be a better tool, because variance is not a correct risk measure when the distribution of wheat variety yields is multivariate nonnormal. Results based on data from the Texas Blacklands confirm our conjecture that the mean-ES framework performs better in terms of selecting wheat varieties than the mean-variance method.
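
A minimal sketch of the downside risk measure underlying the mean-ES comparison, assuming synthetic (skewed) yield scenarios and two hypothetical variety mixes:

```python
# A minimal sketch of sample Expected Shortfall as the downside risk measure
# in a mean-ES comparison of variety mixes.  Yield scenarios and the two
# candidate weight vectors are illustrative assumptions.
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Average loss in the worst (1 - alpha) fraction of scenarios."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(3)
yields = rng.gamma(shape=6.0, scale=0.5, size=(500, 3))   # skewed, non-normal yields
mixes = {"diversified": np.array([1/3, 1/3, 1/3]),
         "concentrated": np.array([0.8, 0.1, 0.1])}

for name, w in mixes.items():
    portfolio_yield = yields @ w
    losses = -portfolio_yield                # a shortfall in yield is treated as a loss
    print(name, "| mean yield:", round(portfolio_yield.mean(), 3),
          "| ES(95%):", round(expected_shortfall(losses), 3))
```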


2017 ◽  
Vol 18 (8) ◽  
pp. 1295-1313 ◽  
Author(s):  
Fabio Caccioli ◽  
Imre Kondor ◽  
Gábor Papp

2010 ◽  
Vol 13 (03) ◽  
pp. 425-437 ◽  
Author(s):  
IMRE KONDOR ◽  
ISTVÁN VARGA-HASZONITS

It is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), then there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. This instability was first discovered in the special example of Expected Shortfall which is used here both as an illustration and as a springboard for generalization.
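
A small numerical illustration of this instability, assuming a synthetic sample in which asset A dominates asset B in every observation: increasing the long-A/short-B leverage drives the in-sample ES down without bound.

```python
# A minimal illustration (with assumed synthetic data) of the instability
# described above: when A dominates B in the sample, a levered long-A/short-B
# position drives the sample Expected Shortfall toward minus infinity.
import numpy as np

def sample_es(losses, alpha=0.95):
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(4)
B = rng.normal(0.0, 0.01, size=200)
A = B + np.abs(rng.normal(0.0, 0.002, size=200))    # A >= B in every observation

for leverage in (1, 10, 100, 1000):
    w = np.array([leverage, 1.0 - leverage])         # long A, short B, weights sum to 1
    losses = -(w[0] * A + w[1] * B)
    print(f"leverage {leverage:>5}: in-sample ES = {sample_es(losses):.4f}")
```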


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Ziting Pei ◽  
Xuhui Wang ◽  
Xingye Yue

G-expected shortfall (G-ES), a new type of worst-case expected shortfall (ES), is defined to measure risk under the infinitely many distributions induced by volatility uncertainty. Compared with extant notions of the worst-case ES, the G-ES can be computed using an explicit formula at low computational cost. We also conduct backtests for the G-ES. The empirical analysis demonstrates that the G-ES is a reliable risk measure.
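
The paper's explicit G-ES formula is not reproduced here, but the generic idea of a worst-case ES under volatility uncertainty can be sketched under a normality assumption: ES scales linearly with volatility, so the worst case sits at the upper end of the uncertainty band. The band and the confidence level below are illustrative assumptions.

```python
# A minimal sketch of a worst-case ES under volatility uncertainty, assuming
# zero-mean normal losses (not the paper's G-ES formula).
from scipy.stats import norm

def normal_es(sigma, alpha=0.975):
    """ES of a N(0, sigma^2) loss at confidence level alpha."""
    z = norm.ppf(alpha)
    return sigma * norm.pdf(z) / (1.0 - alpha)

s_lo, s_hi = 0.01, 0.03          # assumed volatility uncertainty band
print("ES at lower volatility :", round(normal_es(s_lo), 5))
print("worst-case ES (= upper):", round(normal_es(s_hi), 5))
```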


2010 ◽  
Vol 8 (2) ◽  
pp. 141 ◽  
Author(s):  
André Alves Portela Santos

Robust optimization has received increased attention in recent years because it makes it possible to account for estimation error in the portfolio optimization problem. A question addressed so far by very few works is whether this approach is able to outperform traditional portfolio optimization techniques in terms of out-of-sample performance. Moreover, it is important to know whether this approach is able to deliver stable portfolio compositions over time, thus reducing management costs and facilitating practical implementation. We provide empirical evidence by assessing the out-of-sample performance and the stability of optimal portfolio compositions obtained with robust optimization and with traditional optimization techniques. The results indicated that, for simulated data, robust optimization performed better (both in terms of Sharpe ratios and portfolio turnover) than Markowitz's mean-variance portfolios and similarly to minimum-variance portfolios. The results for real market data indicated that the differences in risk-adjusted performance were not statistically significant, but the portfolio compositions associated with robust optimization were more stable over time than those obtained with traditional portfolio selection techniques.
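
A minimal sketch of the kind of stability comparison reported here, assuming synthetic returns, closed-form (unconstrained) weights, and a simple box-uncertainty robust variant standing in for the paper's actual robust formulation; turnover between two estimation windows is used as the stability measure. Minimum-variance weights, which ignore the noisy mean estimates, typically come out the most stable in this stylized setting.

```python
# A minimal sketch (assumed data, simplified closed-form weights): compare the
# stability of mean-variance, a simple robust variant (worst-case means under
# box uncertainty), and minimum-variance portfolios via turnover.
import numpy as np

def normalize(w):
    return w / w.sum()

def weights(window, kappa=1.0):
    mu, cov = window.mean(axis=0), np.cov(window, rowvar=False)
    se = window.std(axis=0, ddof=1) / np.sqrt(len(window))    # estimation error in means
    mv   = normalize(np.linalg.solve(cov, mu))                 # mean-variance
    rob  = normalize(np.linalg.solve(cov, mu - kappa * se))    # robust: worst-case means
    minv = normalize(np.linalg.solve(cov, np.ones(len(mu))))   # minimum-variance
    return mv, rob, minv

rng = np.random.default_rng(5)
means = np.array([0.006, 0.008, 0.010, 0.012, 0.014])          # monthly-like expected returns
returns = rng.normal(loc=means, scale=0.04, size=(500, 5))
first, second = returns[:250], returns[250:]

labels = ["mean-variance", "robust", "min-variance"]
for name, w1, w2 in zip(labels, weights(first), weights(second)):
    print(f"{name:>13} turnover: {np.abs(w2 - w1).sum():.3f}")
```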

