A study of risk-adjusted stock selection models using genetic algorithms

2014 ◽  
Vol 31 (8) ◽  
pp. 1720-1731 ◽  
Author(s):  
Chien-Feng Huang ◽  
Tsung-Nan Hsieh ◽  
Bao Rong Chang ◽  
Chih-Hsiang Chang

Purpose – Stock selection has long been identified as a challenging task. This line of research is highly contingent upon reliable stock ranking for successful portfolio construction. The purpose of this paper is to employ methods from computational intelligence (CI) to solve this problem more effectively. Design/methodology/approach – The authors develop a risk-adjusted strategy to improve upon previous stock selection models using two main risk measures – downside risk and variation in returns. Moreover, the authors employ the genetic algorithm to optimize model parameters and select input variables simultaneously. Findings – It is found that the proposed risk-adjusted methodology via maximum drawdown significantly outperforms the benchmark and improves upon the previous model's stock selection performance. Research limitations/implications – Future work will consider an extensive study of the risk-adjusted model using other risk measures such as Value at Risk, Block Maxima, etc. The authors also intend to use financial data from other countries, if available, in order to assess whether the method is generally applicable and robust across different environments. Practical implications – The authors expect this risk-adjusted model to advance CI research for financial engineering and provide a promising solution to stock selection in practice. Originality/value – The originality of this work is that maximum drawdown is successfully incorporated into the CI-based stock selection model, and the model's effectiveness is validated with strong statistical evidence.
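
The risk-adjustment measure at the core of this abstract, maximum drawdown, has a simple definition: the largest peak-to-trough relative decline of a price or wealth series. A minimal sketch (illustrative only, not the authors' model):

```python
# Maximum drawdown: the deepest peak-to-trough relative decline
# observed along a price (or cumulative wealth) series.
def max_drawdown(prices):
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)                 # running maximum so far
        mdd = max(mdd, (peak - p) / peak)   # deepest relative decline so far
    return mdd

# Example: a series that peaks at 120, falls to 90, then recovers.
print(max_drawdown([100, 110, 120, 95, 90, 105]))  # 0.25
```

In a risk-adjusted selection model this quantity would penalize candidate stocks or portfolios whose historical paths suffered deep declines, complementing return-based ranking criteria.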

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Xue Deng ◽  
Weimin Li

Purpose This paper aims to propose two portfolio selection models with hesitant value-at-risk (HVaR) – the HVaR fuzzy portfolio selection model (HVaR-FPSM) and the HVaR-score fuzzy portfolio selection model (HVaR-S-FPSM) – to help investors answer the question of how bad a portfolio can be under a probabilistic hesitant fuzzy environment. Design/methodology/approach It is strictly proved that the higher the probability threshold, the higher the HVaR in the HVaR-S-FPSM. Numerical examples and a case study are used to illustrate the steps of building the proposed models and the importance of the HVaR and score constraint. In the case study, the authors conduct a sensitivity analysis and compare the proposed models with decision-making models and hesitant fuzzy portfolio models. Findings The score constraint can ensure that the selected portfolio is profitable without causing the HVaR to decrease dramatically. The investment proportions of stocks are mainly affected by their HVaRs, which is consistent with the fact that a stock with good performance is usually desirable in portfolio selection. The HVaR-S-FPSM can find portfolios with higher HVaR than any single stock while sacrificing little in extreme returns. Originality/value This paper fulfills a need to construct portfolio selection models with HVaR under a probabilistic hesitant fuzzy environment. As a downside risk measure, the HVaR is more consistent with investors' intuitions about risk. Moreover, the score constraint ensures that undesirable portfolios will not be selected.
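
The abstract does not define HVaR in full, but one plausible reading (an assumption for illustration, not the paper's formal definition) treats a probabilistic hesitant fuzzy element as a set of (loss, probability) pairs and reads HVaR at a probability threshold as the corresponding loss quantile. Under that reading, the monotonicity result quoted above (higher threshold, higher HVaR) follows from quantile monotonicity:

```python
# Hypothetical sketch: HVaR at threshold theta read as the theta-quantile
# of loss over a probabilistic hesitant fuzzy element given as
# (loss, probability) pairs. This is an assumed simplification.
def hvar(loss_prob_pairs, theta):
    pairs = sorted(loss_prob_pairs)          # ascending by loss
    cum = 0.0
    for loss, prob in pairs:
        cum += prob
        if cum >= theta - 1e-12:             # first loss covering threshold
            return loss
    return pairs[-1][0]

element = [(-0.02, 0.3), (0.01, 0.4), (0.05, 0.3)]  # hypothetical losses
print(hvar(element, 0.90))  # 0.05
print(hvar(element, 0.60))  # 0.01
```

Raising the threshold from 0.60 to 0.90 moves the answer further into the loss tail, matching the monotonicity property the paper proves for the HVaR-S-FPSM.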


2018 ◽  
Vol 19 (2) ◽  
pp. 127-136 ◽  
Author(s):  
Stavros Stavroyiannis

Purpose The purpose of this paper is to examine the value-at-risk and related measures for Bitcoin and to compare the findings with the Standard & Poor's S&P 500 index and the gold spot price time series. Design/methodology/approach A GJR-GARCH model has been implemented, in which the residuals follow the standardized Pearson type-IV distribution. A large variety of value-at-risk measures and backtesting criteria are implemented. Findings Bitcoin is a highly volatile currency that violates the value-at-risk measures more often than the other assets. With respect to the Basel Committee on Banking Supervision Accords, a Bitcoin investor is subject to higher capital requirements and a higher capital allocation ratio. Practical implications The risk of an investor holding Bitcoins is measured and quantified via regulatory framework practices. Originality/value This paper is the first comprehensive approach to the risk properties of Bitcoin.
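
The raw ingredient of the backtesting criteria mentioned above is a count of VaR violations: days on which the realized return falls below the VaR forecast. A hedged sketch using plain historical simulation on synthetic data (not the paper's GJR-GARCH/Pearson type-IV model):

```python
import random

# Hedged sketch: rolling historical-simulation VaR backtest.
# A violation is a return worse than the previous day's VaR forecast;
# Basel-style traffic-light criteria are built on this violation count.
def var_violations(returns, alpha=0.99, window=250):
    violations, tests = 0, 0
    for t in range(window, len(returns)):
        past = sorted(returns[t - window:t])
        var = past[int((1 - alpha) * window)]   # empirical (1-alpha)-quantile
        tests += 1
        if returns[t] < var:                    # loss exceeded the forecast
            violations += 1
    return violations, tests

random.seed(0)
rets = [random.gauss(0, 0.02) for _ in range(1000)]  # synthetic returns
v, n = var_violations(rets)
print(v / n)   # for well-behaved data this sits near 1 - alpha = 0.01
```

A highly volatile, fat-tailed asset such as Bitcoin would show a violation rate well above the nominal level under a model with understated tails, which is what drives the higher capital requirements.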


2020 ◽  
Vol 12 (1) ◽  
pp. 69-87 ◽  
Author(s):  
Farrukh Naveed ◽  
Idrees Khawaja ◽  
Lubna Maroof

Purpose This study aims to comparatively analyze the systematic, idiosyncratic and downside risk exposure of both Islamic and conventional funds in Pakistan to see which of the funds has higher risk exposure. Design/methodology/approach The study analyzes the different types of risk involved in both Islamic and conventional funds for the period from 2009 to 2016 by using different risk measures. For systematic and idiosyncratic risk, the single-factor CAPM and multifactor models such as the Fama–French three-factor model and the Carhart four-factor model are used. For the downside risk analysis, different measures such as downside beta, relative beta, value at risk and expected shortfall are used. Findings The study finds that Islamic funds have lower risk exposure (including total, systematic, idiosyncratic and downside risk) than their conventional counterparts in most of the sample years, hence making them more attractive for investment, especially for Sharīʿah-compliant investors with low risk preferences. Practical implications As this study shows that Islamic mutual funds exhibit lower risk exposure than their conventional counterparts, investors with lower risk preferences can invest in these kinds of funds. In this way, this research provides input to individual investors (especially Sharīʿah-compliant investors who want to avoid interest-based investment) to help them with their investment decisions, as they can build a more diversified portfolio by considering Islamic funds as a means of reducing risk exposure. Originality/value To the best of the authors' knowledge, this study is the first attempt worldwide to provide a comparative analysis of the various types of risk – systematic, idiosyncratic and downside – for both Islamic and conventional funds, and thus provides a significant contribution to the literature on mutual funds.
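
Two of the downside measures named above can be sketched compactly (illustrative code, not the authors' estimation procedure): downside beta conditions the fund's market sensitivity on below-average market returns, and expected shortfall averages losses beyond the VaR level.

```python
# Hedged sketch of downside beta and expected shortfall.
def _cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def downside_beta(fund, market):
    mu_m = sum(market) / len(market)
    down = [(f, m) for f, m in zip(fund, market) if m < mu_m]  # bad markets only
    f_d = [f for f, _ in down]
    m_d = [m for _, m in down]
    return _cov(f_d, m_d) / _cov(m_d, m_d)

def expected_shortfall(returns, alpha=0.95):
    srt = sorted(returns)                    # ascending: worst losses first
    k = max(1, int(len(srt) * (1 - alpha)))  # number of tail observations
    return -sum(srt[:k]) / k                 # average tail loss, positive

market = [0.01, -0.02, 0.03, -0.01, 0.02, -0.03]
fund = [2 * m for m in market]               # a fund twice as market-sensitive
print(downside_beta(fund, market))           # 2.0: double the downside exposure
```

A fund with a lower downside beta and smaller expected shortfall than a comparison fund is what the study means by lower downside risk exposure.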


2021 ◽  
Vol 14 (11) ◽  
pp. 542
Author(s):  
Jaehyung Choi

We empirically test the predictability of asset prices using stock selection rules based on maximum drawdown and its consecutive recovery. In various equity markets, monthly momentum- and weekly contrarian-style portfolios constructed from these alternative selection criteria are superior not only in forecasting the direction of asset prices but also in capturing cross-sectional return differentials. In monthly periods, the alternative portfolios ranked by maximum drawdown measures outperform other alternative momentum portfolios, including traditional cumulative return-based momentum portfolios. On weekly time scales, recovery-related stock selection rules are the best ranking criteria for detecting mean reversion. For the alternative portfolios and their ranking baskets, improved risk profiles in various reward-risk measures also imply more consistent prediction of the direction of asset prices in the future. Moreover, the turnover rates of these momentum/contrarian portfolios are reduced relative to the benchmark portfolios. In the Carhart four-factor analysis, higher factor-neutral intercepts for the alternative strategies are further evidence of robust prediction by the alternative stock selection rules.
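
The alternative selection rule can be sketched as ranking a universe by maximum drawdown over a formation window and forming a basket from the shallowest-drawdown names (which side of the ranking is held long is an assumption here for illustration; the paper studies both momentum and contrarian styles):

```python
# Hedged sketch: rank assets by formation-window maximum drawdown and
# select a basket of the smallest-drawdown names.
def max_drawdown(prices):
    peak, mdd = prices[0], 0.0
    for p in prices:
        peak = max(peak, p)
        mdd = max(mdd, (peak - p) / peak)
    return mdd

def rank_by_mdd(price_histories, top_n=2):
    ranked = sorted(price_histories, key=lambda kv: max_drawdown(kv[1]))
    return [name for name, _ in ranked[:top_n]]

universe = [("A", [100, 105, 103, 110]),   # shallow drawdown
            ("B", [100, 90, 80, 95]),      # deep drawdown
            ("C", [100, 102, 101, 104])]   # shallowest drawdown
print(rank_by_mdd(universe))  # ['C', 'A']
```

Replacing the drawdown key with a recovery measure would give the weekly contrarian variant the abstract describes.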


2021 ◽  
Vol 9 (4) ◽  
pp. 910-941
Author(s):  
Abd-Elmonem A. M. Teamah ◽  
Ahmed A. Elbanna ◽  
Ahmed M. Gemeay

Heavy-tailed distributions play an important role in the study of risk data sets. Statisticians often search for new, or relatively new, statistical models to fit data sets in different fields. This article introduces a relatively new heavy-tailed statistical model, obtained by applying the alpha power transformation to the exponentiated log-logistic distribution, called the alpha power exponentiated log-logistic distribution. Its statistical properties are derived mathematically, including moments, the moment generating function, the quantile function, entropy, inequality curves and order statistics. Five estimation methods are introduced mathematically, and the behaviour of the proposed model's parameter estimates is checked on randomly generated data sets using these estimation methods. Some actuarial measures are also deduced mathematically, such as value at risk, tail value at risk, tail variance and tail variance premium. Numerical values for these measures are computed and show that the proposed distribution has a heavier tail than the other compared models. Finally, three real data sets from different fields are used to show that the proposed model fits these data sets better than many other well-known and related models.
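
The actuarial measures named above are quantile-based and easy to illustrate. A hedged numerical sketch using the plain log-logistic distribution (the base of the proposed family, not the full alpha power exponentiated model; parameter names are illustrative): value at risk is the quantile function, and tail value at risk averages the quantiles beyond that level.

```python
# Log-logistic quantile: Q(p) = scale * (p / (1 - p))**(1 / shape).
def ll_quantile(p, scale=1.0, shape=3.0):
    return scale * (p / (1 - p)) ** (1 / shape)

def value_at_risk(p, scale=1.0, shape=3.0):
    return ll_quantile(p, scale, shape)        # VaR_p is just the p-quantile

def tail_value_at_risk(p, scale=1.0, shape=3.0, steps=10000):
    # TVaR_p: average of the quantiles above level p (midpoint rule)
    us = [p + (1 - p) * (i + 0.5) / steps for i in range(steps)]
    return sum(ll_quantile(u, scale, shape) for u in us) / steps

print(value_at_risk(0.95))       # 95% VaR
print(tail_value_at_risk(0.95))  # always exceeds VaR at the same level
```

The heavier the tail (smaller shape parameter), the larger TVaR grows relative to VaR, which is the kind of comparison the article's numerical tables make across competing models.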


2018 ◽  
Vol 34 (2) ◽  
pp. 217-222
Author(s):  
Soo-Hyun Kim

Measuring risk is a key component in many asset pricing models. Although volatility is the most widely used measure of risk, Value at Risk (VaR) and Maximum Drawdown (MDD) are also considered as alternative risk measures. This article questions whether VaR and MDD contain information additional to volatility in the equity market. The empirical analysis is conducted using the stocks listed on the Korean stock market. By constructing portfolios in accordance with the three risk measures, cross-sectional predictability is tested. The primary findings are as follows: (1) the return patterns are bell-shaped in all measures, and (2) VaR and MDD do not capture additional risk factors after conditioning on volatility.
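
The three ranking variables the article compares can all be computed from one return series. A sketch on illustrative data (not the Korean sample):

```python
import statistics

# Volatility, historical VaR and maximum drawdown from one return series.
def risk_measures(returns, alpha=0.95):
    vol = statistics.pstdev(returns)            # volatility
    srt = sorted(returns)
    var = -srt[int(len(srt) * (1 - alpha))]     # historical VaR as a loss
    wealth, peak, mdd = 1.0, 1.0, 0.0           # wealth path for drawdown
    for r in returns:
        wealth *= 1 + r
        peak = max(peak, wealth)
        mdd = max(mdd, (peak - wealth) / peak)
    return vol, var, mdd

rets = [0.01, -0.02, 0.015, -0.01, 0.03, -0.04, 0.02, 0.01, -0.005, 0.025]
vol, var, mdd = risk_measures(rets)
print(vol, var, mdd)
```

Sorting stocks into portfolios on each of these three numbers in turn is what the article's cross-sectional predictability test does; its finding is that the VaR and MDD sorts add nothing once the volatility sort is controlled for.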


2017 ◽  
Vol 18 (1) ◽  
pp. 76-87 ◽  
Author(s):  
Ngoc Quynh Anh Nguyen ◽  
Thi Ngoc Trang Nguyen

Purpose The purpose of this paper is to present a method for the efficient computation of risk measures using the Fourier transform technique. Another objective is to demonstrate that this technique enables an efficient computation of risk measures beyond value-at-risk and expected shortfall. Finally, this paper highlights the importance of validating the assumptions behind the risk model and describes its application in the affine model framework. Design/methodology/approach The proposed method is based on Fourier transform methods for computing risk measures. The authors obtain the loss distribution by fitting a cubic spline through the points where Fourier inversion of the characteristic function is applied. From the loss distribution, the authors calculate value-at-risk and expected shortfall. The calculation of the entropic value-at-risk involves the moment generating function, which is closely related to the characteristic function. The expectile risk measure is calculated based on call and put option prices, which are available in semi-closed form by Fourier inversion of the characteristic function. The authors also consider mean loss, standard deviation and semivariance, which are calculated in a similar manner. Findings The study offers practical insights into the efficient computation of risk measures as well as the validation of risk models. It also provides a detailed description of algorithms to compute each of the risk measures considered. While the main focus of the paper is on portfolio-level risk metrics, all algorithms are also applicable to single instruments. Practical implications The algorithms presented in this paper require little computational effort, which makes them very suitable for real-world applications. In addition, the mathematical setup adopted in this paper provides a natural framework for risk model validation, which makes the approach particularly appealing in practice.
Originality/value This is the first study to consider the computation of the entropic value-at-risk, semivariance and expectile risk measures using the Fourier transform method.
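
The entropic value-at-risk mentioned above is defined through the moment generating function as EVaR_a(X) = inf over z > 0 of (ln M(z) - ln a) / z. A hedged sketch (not the paper's affine-model implementation) using the normal MGF, so the grid-search result can be checked against the known closed form mu + sigma * sqrt(2 ln(1/a)):

```python
import math

# EVaR from the MGF: minimize (ln M(z) - ln alpha) / z over z > 0.
# Here M is the MGF of a normal loss, ln M(z) = mu*z + sigma^2 z^2 / 2.
def evar_normal(mu, sigma, alpha, z_grid=20000, z_max=50.0):
    def objective(z):
        log_mgf = mu * z + 0.5 * sigma ** 2 * z ** 2
        return (log_mgf - math.log(alpha)) / z
    zs = [z_max * (i + 1) / z_grid for i in range(z_grid)]
    return min(objective(z) for z in zs)

alpha = 0.05
approx = evar_normal(0.0, 1.0, alpha)
exact = math.sqrt(2 * math.log(1 / alpha))   # closed form for N(0, 1)
print(round(approx, 4), round(exact, 4))
```

In the paper's setting the log-MGF would come from the affine model's characteristic function rather than a normal closed form, but the one-dimensional minimization step is the same.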


2021 ◽  
pp. 1-30
Author(s):  
Hansjörg Albrecher ◽  
José Carlos Araujo-Acuna ◽  
Jan Beirlant

Abstract In various applications of heavy-tail modelling, the assumed Pareto behaviour is ultimately tempered in the range of the largest data. In insurance applications, claim payments are influenced by claim management and claims may, for instance, be subject to a higher level of inspection at the highest damage levels, leading to weaker tails than apparent from modal claims. Generalizing earlier results of Meerschaert et al. (2012) and Raschke (2020), in this paper we consider tempering of a Pareto-type distribution with a general Weibull distribution in a peaks-over-threshold approach. This requires modulating the tempering parameters as a function of the chosen threshold. Modelling such a tempering effect is important in order to avoid overestimation of risk measures such as the value-at-risk at high quantiles. We use a pseudo maximum likelihood approach to estimate the model parameters and consider the estimation of extreme quantiles. We derive basic asymptotic results for the estimators, give illustrations with simulation experiments and apply the developed techniques to fire and liability insurance data, providing insight into the relevance of the tempering component in heavy-tail modelling.
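
A hedged illustration of the tempering effect (parameter values are hypothetical): a Weibull-tempered Pareto variable can be sampled as the minimum of independent Pareto and Weibull variables, since the survival function of the minimum is the product of the two survival functions. The Hill estimator then shows how tempering dampens the apparent Pareto tail, which is why ignoring it leads to overestimated high-quantile VaR.

```python
import math, random

# Hill estimator of the extreme value index from the k largest observations.
def hill(sample, k):
    srt = sorted(sample, reverse=True)
    return sum(math.log(srt[i] / srt[k]) for i in range(k)) / k

random.seed(1)
alpha, lam, tau, n = 1.5, 0.05, 1.0, 20000
pareto = [random.random() ** (-1 / alpha) for _ in range(n)]          # S(x) = x^-alpha
weibull = [(-math.log(random.random())) ** (1 / tau) / lam for _ in range(n)]
tempered = [min(p, w) for p, w in zip(pareto, weibull)]               # product of survivals

print(hill(pareto, 500))    # near 1/alpha = 0.667 for the pure Pareto
print(hill(tempered, 500))  # smaller: tempering lightens the apparent tail
```

The paper's point is that the tempering parameters must be re-estimated as functions of the peaks-over-threshold level rather than treated as fixed, which this toy experiment does not attempt.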


Author(s):  
Sheri Markose ◽  
Simone Giansante ◽  
Nicolas A. Eterovic ◽  
Mateusz Gatkowski

Abstract We analyse systemic risk in the core global banking system using a new network-based spectral eigen-pair method, which treats network failure as a dynamical system stability problem. This is compared with market price-based systemic risk indexes (SRIs), viz. Marginal Expected Shortfall, Delta Conditional Value-at-Risk, and the Conditional Capital Shortfall Measure of Systemic Risk, in a cross-border setting. Unlike the paradoxical market price-based risk measures, which underestimate risk during periods of asset price booms, the eigen-pair method based on bilateral balance sheet data gives early warning of instability in terms of a tipping point that is analogous to the R number in epidemic models; for this, regulatory capital thresholds are used. Furthermore, network centrality measures identify systemically important and vulnerable banking systems. Market price-based SRIs are contemporaneous with the crisis and are found to covary with risk measures such as VaR and betas.
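
The stability criterion behind an eigen-pair method of this kind can be sketched as follows (matrix values are hypothetical, and the scaling by capital buffers is an assumed simplification): contagion dies out when the dominant eigenvalue of the bilateral-exposure matrix stays below a threshold, analogous to keeping R below 1 in epidemic models. Power iteration recovers both the eigenvalue (tipping point) and the eigenvector (a centrality measure).

```python
# Power iteration for the dominant eigen-pair of a nonnegative matrix.
def dominant_eigenpair(matrix, iters=200):
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)          # current eigenvalue estimate
        v = [x / lam for x in w]              # renormalize the eigenvector
    return lam, v

exposures = [[0.0, 0.4, 0.1],    # hypothetical exposure of bank i to bank j,
             [0.3, 0.0, 0.2],    # scaled by bank i's capital buffer
             [0.2, 0.5, 0.0]]
lam, centrality = dominant_eigenpair(exposures)
print(lam < 1.0)  # True: below the tipping point, the system is stable
```

The eigenvector `centrality` ranks the banking systems by how much they amplify shocks, which is how systemically important and vulnerable nodes are identified.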


2017 ◽  
Vol 37 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Haluk Ay ◽  
Anthony Luscher ◽  
Carolyn Sommerich

Purpose The purpose of this study is to design and develop a testing device to simulate interaction between human hand–arm dynamics, right-angle (RA) computer-controlled power torque tools and joint-tightening task-related variables. Design/methodology/approach The testing rig can simulate a variety of tools, tasks and operator conditions. The device includes custom data-acquisition electronics and graphical user interface-based software. The simulation of the human hand–arm dynamics relies on the rig's four-bar-mechanism design and mechanical components that provide adjustable stiffness (via a pneumatic cylinder) and mass (via plates) and non-adjustable damping. The stiffness and mass values used are based on an experimentally validated hand–arm model that includes a database of model parameters, organized by gender and working posture and corresponding to experienced tool operators from a prior study. Findings The rig measures tool handle force and displacement responses simultaneously. Peak force and displacement coefficients of determination (R2) between rig estimations and human testing measurements were 0.98 and 0.85, respectively, for the same set of tools, tasks and operator conditions. The rig also provides predicted tool operator acceptability ratings, using a data set from a prior study of discomfort in experienced operators during torque tool use. Research limitations/implications Deviations from linearity may influence handle force and displacement measurements. Stiction (Coulomb friction) in the overall rig, as well as in the air cylinder piston, is neglected. The rig's mechanical damping is not adjustable, despite the fact that human hand–arm damping varies with respect to gender and working posture. Deviations from these assumptions may affect the correlation of the handle force and displacement measurements with those of human testing for the same tool, task and operator conditions.
Practical implications This test rig will allow rapid assessment of the ergonomic performance of DC torque tools, saving considerable time in lineside applications and reducing the risk of worker injury. DC torque tools are an extremely effective way of increasing production rate and improving torque accuracy. However, because each is a complex dynamic system, the performance of DC torque tools varies in each application. Changes in worker mass, damping and stiffness, as well as joint stiffness and tool program, make each application unique. This test rig models all of these factors and allows quick assessment. Social implications The use of this tool test rig will help to identify and understand risk factors that contribute to musculoskeletal disorders (MSDs) associated with the use of torque tools. Tool operators are subjected to large impulsive handle reaction forces as joint torque builds up while tightening a fastener. Repeated exposure to such forces is associated with muscle soreness, fatigue and physical stress, which are also risk factors for upper extremity injuries (MSDs; e.g. tendinosis, myofascial pain). Eccentric exercise exertions are known to cause damage to muscle tissue in untrained individuals and affect subsequent performance. Originality/value The rig provides a novel means for quantitative, repeatable dynamic evaluation of RA powered torque tools and objective selection of tightening programs. Compared to current static tool assessment methods, dynamic testing provides a more realistic tool assessment relative to the tool operator's experience. This may lead to improvements in tool or controller design and a reduction in associated musculoskeletal discomfort in operators.
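
The dynamics the rig emulates reduce, in the simplest reading, to a one-degree-of-freedom mass-spring-damper driven by the impulsive torque-reaction force. A hedged sketch with entirely hypothetical parameter values (the rig's actual model is the validated multi-parameter hand–arm database described above):

```python
# One-DOF hand-arm sketch: adjustable mass m and stiffness k, fixed
# damping c, driven by a short force pulse; semi-implicit (symplectic)
# Euler integration of Newton's second law gives the handle displacement.
def handle_response(m=2.0, k=4000.0, c=40.0, force=150.0, pulse=0.05,
                    dt=1e-4, t_end=0.5):
    x, v, peak, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        f = force if t < pulse else 0.0       # impulsive reaction force
        a = (f - c * v - k * x) / m           # m * a = f - c*v - k*x
        v += a * dt
        x += v * dt
        peak = max(peak, x)
        t += dt
    return peak

print(handle_response())   # peak handle displacement in metres
```

Sweeping m and k over the database values for different genders and postures, as the rig does mechanically, would map out how peak handle displacement varies across operator conditions.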

