Modified Mean-Variance Risk Measures for Long-Term Portfolios

Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 111
Author(s):  
Hyungbin Park

This paper proposes modified mean-variance risk measures for long-term investment portfolios. Two types of portfolios are considered: constant proportion portfolios and increasing amount portfolios. Both are widely used in finance for investing in assets and developing derivative securities. We compare the long-term behavior of a conventional mean-variance risk measure and a modified one for the two types of portfolios, and we discuss the benefits of the modified measure. Subsequently, an optimal long-term investment strategy is derived. We show that the modified risk measure reflects the investor's risk aversion in the optimal long-term investment strategy, whereas the conventional one does not. Several factor models are discussed as concrete examples: the Black–Scholes model, the Kim–Omberg model, the Heston model, and the 3/2 stochastic volatility model.
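As a baseline for the comparison described above, the conventional mean-variance risk of a constant proportion portfolio can be estimated by Monte Carlo; under Black–Scholes, a constant stock proportion makes the wealth process itself a geometric Brownian motion. The sketch below uses the common form ρ(X) = −E[X] + (λ/2)Var[X]; all parameter values are illustrative assumptions, and the paper's modified measure is not reproduced here.

```python
import numpy as np

# Conventional mean-variance risk of a constant proportion portfolio under
# Black-Scholes. All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
mu, r, sigma = 0.08, 0.02, 0.2   # stock drift, risk-free rate, volatility
pi, lam, T = 0.5, 1.0, 10.0      # stock proportion, risk aversion, horizon

# Constant proportion => wealth is GBM with drift r + pi*(mu-r) and vol pi*sigma
drift, vol = r + pi * (mu - r), pi * sigma
Z = rng.standard_normal(100_000)
W_T = np.exp((drift - 0.5 * vol**2) * T + vol * np.sqrt(T) * Z)  # wealth, W_0 = 1

risk = -W_T.mean() + 0.5 * lam * W_T.var()  # rho(X) = -E[X] + (lam/2) Var[X]
print(f"E[W_T] = {W_T.mean():.4f}, Var[W_T] = {W_T.var():.4f}, risk = {risk:.4f}")
```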

2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
R. Company ◽  
L. Jódar ◽  
M. Fakharany ◽  
M.-C. Casabán

This paper deals with the numerical solution of an option pricing stochastic volatility model described by a time-dependent, two-dimensional convection-diffusion-reaction equation. First, the mixed spatial derivative of the partial differential equation (PDE) is removed by means of the classical technique for reducing second-order linear PDEs to canonical form. An explicit difference scheme with positive coefficients and only a five-point computational stencil is constructed. The boundary conditions are adapted to the boundaries of the rhomboid transformed numerical domain. Consistency of the scheme with the PDE is shown, and step-size discretization conditions guaranteeing stability are established. Illustrative numerical examples are included.
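Once the mixed derivative has been removed, an explicit five-point scheme of the kind described reduces to a standard forward-time, centered-space update. A minimal sketch on a toy diffusion problem (coefficients, grid, and initial data are assumed, not taken from the paper) shows where the positivity/stability restriction on the time step comes from:

```python
import numpy as np

# Explicit five-point FTCS sketch for u_t = a*u_xx + b*u_yy (mixed derivative
# assumed already removed by the canonical-form transformation). Illustrative
# coefficients, grid, and initial data.
a, b = 0.5, 0.5
nx = ny = 21
dx = dy = 1.0 / (nx - 1)
dt = 0.9 / (2 * a / dx**2 + 2 * b / dy**2)  # keeps all stencil weights >= 0

x = np.linspace(0, 1, nx)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # u(x,y,0), zero on boundary

for _ in range(50):
    lap = (a * (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
           + b * (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2)
    u[1:-1, 1:-1] += dt * lap  # positive-coefficient explicit update

print(u.max())  # decays toward 0, as the exact solution does
```

Violating the time-step restriction makes some stencil coefficients negative, which destroys both positivity and stability of the explicit update.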


2015 ◽  
Author(s):  
Γεώργιος Παπαγιάννης

The main aim of the present thesis is to investigate the effect of diverging priors concerning model uncertainty on decision making. One of the main issues is to assess the effect of different notions of distance in the space of probability measures, and their use as loss functionals, in the process of identifying the best suited model among a set of plausible priors. Another is the problem of ``inhomogeneous'' sets of priors, i.e. sets in which highly divergent opinions may occur, and the need to treat that case robustly. As high degrees of inhomogeneity may lead the decision maker to distrust the priors, it may be desirable to adopt a particular prior corresponding to the set which, in some sense, minimizes the ``variability'' among the models of the set. This leads to the notion of a Fréchet risk measure. Finally, an important problem is the actual calculation of robust risk measures. On account of their variational definition, this calculation leads to the numerical treatment of problems of the calculus of variations, for which reliable and effective algorithms are proposed. The contributions of the thesis are presented in the following three chapters.

In Chapter 2, a statistical learning scheme is introduced for constructing the best model compatible with a set of priors provided by different information sources of varying reliability. As various priors may model well different aspects of the phenomenon, the proposed scheme is a variational scheme based on the minimization of a weighted loss function in the space of probability measures, which in certain cases is shown to be equivalent to weighted quantile averaging schemes. Therefore, in contrast to approaches such as minimax decision theory, in which a particular element of the prior set is chosen, we construct for each prior set a probability measure which is not necessarily an element of it, a fact which, as shown, may lead to a better description of the phenomenon in question. In treating this problem we also address the effect of the choice of distance functional in the space of measures on model selection. One of the key findings in this respect is that the class of Wasserstein distances appears to perform best compared with other distances such as the KL-divergence.

In Chapter 3, motivated by the results of Chapter 2, we treat the problem of specifying the risk measure for a particular loss when a set of highly divergent priors concerning the distribution of the loss is available. Starting from the principle that ``variability'' of opinions is not welcome, for which a strong axiomatic framework exists (see e.g. Klibanoff (2005) and references therein), we introduce the concept of Fréchet risk measures, which correspond to minimal variance risk measures. Here we view a set of priors as a discrete measure on the space of probability measures, and by variance we mean the variance of this discrete probability measure; this requires the concept of the Fréchet mean. Through different metrizations of the space of probability measures we define a variety of Fréchet risk measures (the Wasserstein, the Hellinger, and the weighted entropic risk measure) and illustrate their use and performance via an example related to the static hedging of derivatives under model uncertainty.

In Chapter 4, we consider the numerical calculation of convex risk measures using techniques from the calculus of variations. Regularization schemes are proposed and the theoretical convergence of the algorithms is considered.
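The equivalence with weighted quantile averaging mentioned in the Chapter 2 summary has a concrete one-dimensional form: the weighted 2-Wasserstein barycenter of a set of priors is the distribution whose quantile function is the weighted average of the priors' quantile functions. A minimal sketch, with sample-based priors and reliability weights chosen purely for illustration:

```python
import numpy as np

# In 1D, the weighted 2-Wasserstein barycenter of several priors is obtained by
# weighted averaging of their quantile functions (the quantile-averaging view).
# The priors and weights below are illustrative assumptions.
rng = np.random.default_rng(1)
priors = [rng.normal(0.0, 1.0, 5000),   # three divergent sample-based priors
          rng.normal(3.0, 1.0, 5000),
          rng.normal(-1.0, 2.0, 5000)]
w = np.array([0.5, 0.3, 0.2])           # source reliability weights (assumed)

qs = np.linspace(0.01, 0.99, 99)
quantiles = np.array([np.quantile(p, qs) for p in priors])
barycenter_q = w @ quantiles            # quantile function of the barycenter

print(barycenter_q[49])  # barycenter median, approx. 0.5*0 + 0.3*3 + 0.2*(-1)
```

Note that the resulting barycenter is generally not one of the original priors, which is exactly the contrast with minimax-style selection drawn above.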


2019 ◽  
Vol 22 (04) ◽  
pp. 1950009
Author(s):  
XIN-JIANG HE ◽  
SONG-PING ZHU

In this paper, the pricing of variance and volatility swaps is discussed under a two-factor stochastic volatility model. The model can be treated as a two-factor Heston model, with one factor following the CIR process and the other characterized by a Markov chain; the motivation originates from the popularity of the Heston model and the strong evidence of regime switching in real markets. Based on the derived forward characteristic function of the underlying price, analytical pricing formulae for variance and volatility swaps are presented. Numerical experiments are also conducted to compare swap prices calculated through our formulae with those obtained under the Heston model, to show whether the introduction of the regime-switching factor leads to any significant difference.
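The one-factor Heston benchmark against which such prices are compared has a well-known closed form for the continuously sampled variance swap: the fair strike is the time average of E[v_t], where the CIR variance satisfies E[v_t] = θ + (v0 − θ)e^{−κt}. A minimal sketch with assumed parameters (the paper's two-factor, regime-switching formula generalizes this baseline):

```python
import numpy as np

# Fair strike of a continuously sampled variance swap under one-factor Heston:
# K_var = (1/T) * integral_0^T E[v_t] dt, with E[v_t] = theta + (v0-theta)e^{-kappa t}.
# Parameter values are illustrative assumptions.
kappa, theta, v0, T = 2.0, 0.04, 0.09, 1.0

K_var = theta + (v0 - theta) * (1 - np.exp(-kappa * T)) / (kappa * T)
print(f"fair variance strike: {K_var:.6f}")  # lies between theta and v0 here
```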


2016 ◽  
Vol 57 (3) ◽  
pp. 244-268
Author(s):  
SANAE RUJIVAN

The main purpose of this paper is to present a novel analytical approach for pricing discretely sampled gamma swaps, defined in terms of weighted variance swaps of the underlying asset, based on Heston’s two-factor stochastic volatility model. The closed-form formula obtained in this paper is in a much simpler form than those proposed in the literature, which substantially reduces the computational burden and can be implemented efficiently. The solution procedure presented in this paper can be adopted to derive closed-form solutions for pricing various types of weighted variance swaps, such as self-quantoed variance and entropy swaps. Most interestingly, we discuss the validity of the current solutions in the parameter space, and provide market practitioners with some remarks for trading these types of weighted variance swaps.
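The defining feature of a gamma swap among weighted variance swaps is that each squared log-return in the floating leg is weighted by the contemporaneous price level S_{t_i}/S_0. A minimal sketch of the discretely sampled floating leg on a simulated path (the GBM path and its parameters are illustrative assumptions, not the paper's Heston setting):

```python
import numpy as np

# Realized (floating) leg of a discretely sampled gamma swap: each squared
# log-return is weighted by S_{t_i}/S_0, so variance realized at higher price
# levels counts more. Path simulated under GBM purely for illustration.
rng = np.random.default_rng(2)
S0, sigma, n, T = 100.0, 0.2, 252, 1.0
dt = T / n
logret = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
S = S0 * np.exp(np.cumsum(logret))

weights = S / S0                           # gamma-swap weights S_{t_i}/S_0
realized_gamma = (weights * logret**2).sum() / T
realized_var = (logret**2).sum() / T       # plain variance swap, for contrast
print(realized_gamma, realized_var)
```

The plain realized variance should sit near σ² = 0.04 here; the gamma-swap leg deviates from it according to whether the path spent more time above or below S_0.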


2017 ◽  
Vol 20 (08) ◽  
pp. 1750055 ◽  
Author(s):  
ZHENYU CUI ◽  
J. LARS KIRKBY ◽  
GUANGHUA LIAN ◽  
DUY NGUYEN

This paper contributes a generic probabilistic method to derive explicit exact probability densities for stochastic volatility models. Our method is based on a novel application of the exponential measure change in [Z. Palmowski & T. Rolski (2002) A technique for exponential change of measure for Markov processes, Bernoulli 8(6), 767–785]. With this generic approach, we first derive explicit probability densities in terms of model parameters for several stochastic volatility models with nonzero correlations, namely the Heston (1993), the 3/2, and a special case of the α-Hypergeometric stochastic volatility models recently proposed by [J. Da Fonseca & C. Martini (2016) The α-Hypergeometric stochastic volatility model, Stochastic Processes and their Applications 126(5), 1472–1502]. Then, we combine our method with a stochastic time change technique to develop explicit formulae for prices of timer options in the Heston model, the 3/2 model and a special case of the α-Hypergeometric model.
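A timer option is exercised at the random time when realized variance first exhausts a prescribed budget, rather than at a fixed maturity. The paper derives explicit formulae; as a point of comparison, the contract can also be priced by brute-force Monte Carlo with a standard full-truncation Euler scheme for Heston. Everything below (parameters, scheme, budget) is an illustrative assumption, not the paper's method:

```python
import numpy as np

# Monte Carlo sketch of a timer call under Heston: pays (S_tau - K)^+ at the
# random time tau when accumulated realized variance first reaches budget B.
# Full-truncation Euler discretization; illustrative parameters; r = 0, so
# discounting is omitted.
rng = np.random.default_rng(3)
kappa, theta, nu, rho = 2.0, 0.04, 0.3, -0.7
S0, v0, K, B, r = 100.0, 0.04, 100.0, 0.04, 0.0   # B ~ one year of variance
dt, n_paths, n_max = 2e-3, 10_000, 2500

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
budget = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
payoff = np.zeros(n_paths)

for _ in range(n_max):
    if not alive.any():
        break
    z1 = rng.standard_normal(alive.sum())
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(alive.sum())
    vp = np.maximum(v[alive], 0.0)                  # full truncation
    S[alive] *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v[alive] += kappa * (theta - vp) * dt + nu * np.sqrt(vp * dt) * z2
    budget[alive] += vp * dt
    hit = alive & (budget >= B)
    payoff[hit] = np.maximum(S[hit] - K, 0.0)       # variance budget exhausted
    alive &= budget < B

print(payoff.mean())  # crude timer call estimate
```

With r = 0 the timer call value should be close to a Black–Scholes call with total variance B, which gives a rough sanity check on the estimate.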


2012 ◽  
Vol 15 (05) ◽  
pp. 1250033 ◽  
Author(s):  
M. COSTABILE ◽  
I. MASSABÒ ◽  
E. RUSSO

This article presents a lattice-based approach for pricing contingent claims when the underlying asset evolves according to the double Heston (dH) stochastic volatility model introduced by Christoffersen et al. (2009). We discretize the continuous evolution of both squared volatilities by a "binomial pyramid" and treat the asset value as an auxiliary state variable, attaching a subset of possible realizations to each node of the pyramid. The elements of the subset cover the range of asset prices at each time slice, and the claim price is computed by solving backward through the binomial pyramid. Numerical experiments confirm the accuracy and efficiency of the proposed model.
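The backward-induction mechanics underlying such a pyramid are easiest to see in the standard one-dimensional binomial lattice, which the dH pyramid generalizes to two squared-volatility factors. A minimal CRR sketch with assumed inputs (not the paper's dH construction):

```python
import numpy as np

# One-dimensional CRR binomial lattice for a European call, priced by backward
# induction; the "binomial pyramid" extends this idea to the two variance
# factors of the double Heston model. Illustrative inputs.
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 500
dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
disc = np.exp(-r * dt)

S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)  # terminal prices
V = np.maximum(S - K, 0.0)
for _ in range(n):
    V = disc * (p * V[:-1] + (1 - p) * V[1:])  # one backward-induction step

print(f"CRR call price: {V[0]:.4f}")
```

With these inputs the lattice value converges to the Black–Scholes price of about 10.45 as n grows.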


2008 ◽  
Vol 40 (01) ◽  
pp. 144-162 ◽  
Author(s):  
Elisa Alòs ◽  
Christian-Oliver Ewald

We prove that the Heston volatility is Malliavin differentiable under the classical Novikov condition and give an explicit expression for the derivative. This result guarantees the applicability of Malliavin calculus in the framework of the Heston stochastic volatility model. Furthermore, we derive conditions on the parameters which assure the existence of the second Malliavin derivative of the Heston volatility. This allows us to apply recent results of Alòs (2006) in order to derive approximate option pricing formulae in the context of the Heston model. Numerical results are given.
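For reference, the variance process underlying the Heston volatility and the classical Novikov condition referred to above can be written as follows (notation is assumed here; the paper may use different symbols):

```latex
% Heston variance (CIR) process; \sigma_t = \sqrt{v_t} is the Heston volatility
dv_t = \kappa(\theta - v_t)\,dt + \nu\sqrt{v_t}\,dW_t, \qquad v_0 > 0.

% Classical Novikov condition for an integrand u, ensuring the stochastic
% exponential of \int_0^\cdot u_t\,dW_t is a true martingale:
\mathbb{E}\!\left[\exp\!\left(\tfrac{1}{2}\int_0^T u_t^2\,dt\right)\right] < \infty.
```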


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Shuo Yang ◽  
Kai Yang ◽  
Ziyou Gao ◽  
Lixing Yang ◽  
Jungang Shi

Traditional models of timetable generation for last trains do not account for the fact that a decision-maker (DM) often incorporates transfer demand variability into his/her decision-making process. This study aims to develop such a model, with particular consideration of DMs' risk preferences in subway systems under uncertainty. First, we formulate an optimization model for last-train timetabling based on mean-variance (MV) theory that explicitly considers two significant factors: the number of successful transfer passengers and the running times of the last trains. Then, we incorporate the mean-variance risk measure into the model to generate timetables by adjusting the last trains' departure times and running times for each line. Furthermore, we normalize the two heterogeneous terms of the risk measure to obtain reasonable results. Due to the complexity of the MV model, we design a tabu search (TS) algorithm with specially designed operators to solve the proposed timetabling problem. Through computational experiments involving the Beijing subway system, we demonstrate the computational efficiency of the proposed MV model and the heuristic approach.
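The tabu search loop described above follows the standard pattern: move to the best non-tabu neighbor, record the move in a short tabu list, and keep the incumbent best. A minimal skeleton on a toy objective (the objective, neighborhood, and parameters are illustrative assumptions, not the paper's Beijing-subway formulation):

```python
import random

# Minimal tabu-search skeleton: choose integer departure-time shifts (minutes)
# per line to minimize a stand-in cost. Objective and neighborhood are
# illustrative assumptions only.
random.seed(0)
n_lines, shift_range, tabu_len, iters = 4, range(-5, 6), 8, 200

def cost(x):  # stand-in for the normalized mean-variance objective
    return sum((xi - 2) ** 2 for xi in x)  # minimized at all shifts = 2

x = [0] * n_lines
best, best_cost = list(x), cost(x)
tabu = []  # recently applied moves (line, new_value)

for _ in range(iters):
    cand = None
    for i in range(n_lines):
        for v in shift_range:
            if v == x[i]:
                continue
            y = list(x); y[i] = v
            c = cost(y)
            # tabu moves allowed only by aspiration (improving on the best)
            if ((i, v) not in tabu or c < best_cost) and (cand is None or c < cand[0]):
                cand = (c, i, v, y)
    c, i, v, x = cand
    tabu.append((i, v)); tabu = tabu[-tabu_len:]
    if c < best_cost:
        best, best_cost = list(x), c

print(best, best_cost)
```

The tabu list is what lets the search accept worsening moves without immediately cycling back, which is the feature that makes it suitable for the nonconvex MV objective.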


2016 ◽  
Vol 19 (05) ◽  
pp. 1650031 ◽  
Author(s):  
NICOLAS LANGRENÉ ◽  
GEOFFREY LEE ◽  
ZILI ZHU

We examine the inverse gamma (IGa) stochastic volatility model with time-dependent parameters. This nonaffine model compares favorably, in terms of both its volatility distribution and its volatility paths, with classical affine models such as the Heston model, while being equally parsimonious (only four stochastic parameters). In practice, this means more robust calibration and better hedging, which explains its popularity among practitioners. Closed-form volatility-of-volatility expansions are obtained for the prices of vanilla options, allowing very fast pricing and calibration to market data. Specifically, the price of a European put option with IGa volatility is approximated by a Black–Scholes price plus a weighted combination of Black–Scholes Greeks, with weights depending only on the four time-dependent parameters of the model. The accuracy of the expansion is illustrated by several calibration tests on foreign exchange market data. This paper shows that the IGa model is as simple as, but more realistic than, classical transform-based affine models, as well as easier to implement and faster to calibrate. We therefore hope that the present work will foster further research on nonaffine models favored by practitioners, such as the IGa model.
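The building blocks of the expansion described above are the Black–Scholes put price and its Greeks; the IGa-specific content is entirely in the weights, which are not derived here. A minimal sketch of the ingredients, with a hypothetical weight used only to show the shape of the approximation:

```python
from math import log, sqrt, exp, erf, pi

# Black-Scholes put price and vega: the ingredients of the "BS price plus
# weighted Greeks" expansion. The weight below is hypothetical; the actual
# weights depend on the IGa model's four time-dependent parameters.
def N(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

def bs_vega(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

P0 = bs_put(100, 100, 0.02, 0.2, 1.0)
w_vega = 0.01  # hypothetical weight, for shape only
approx = P0 + w_vega * bs_vega(100, 100, 0.02, 0.2, 1.0)
print(P0, approx)
```

Because every term is a closed-form Black–Scholes quantity, evaluating the full expansion is essentially free, which is what makes calibration to market data so fast.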

