On the application of Wishart process to the pricing of equity derivatives: the multi-asset case

Author(s):  
Gaetano La Bua ◽  
Daniele Marazzina

Abstract Given the inherent complexity of financial markets, a wide area of research in mathematical finance is devoted to developing accurate models for the pricing of contingent claims. Focusing on the stochastic volatility approach (i.e., asset volatility is described as an additional stochastic process), it is desirable to introduce dynamics that reliably account for the several assets involved in the definition of multi-asset payoffs. In this article we deal with the multi-asset Wishart Affine Stochastic Correlation model, which makes use of the Wishart process to describe the stochastic variance-covariance matrix of asset returns. The resulting parametrization turns out to be a genuine multi-asset extension of the Heston model: each asset is exactly described by a single instance of the Heston dynamics, while the joint behaviour is enriched by cross-asset and cross-variance stochastic correlations, all wrapped in an affine model. In this framework, we propose a fast and accurate calibration procedure, and two Monte Carlo simulation schemes.
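To fix ideas on the kind of dynamics involved, the following is a minimal sketch of a plain Euler step for a Wishart variance-covariance process, assuming the standard affine form dΣ_t = (ΩΩᵀ + MΣ_t + Σ_t Mᵀ)dt + √Σ_t dW_t Q + Qᵀ dW_tᵀ √Σ_t. It is not one of the two schemes proposed in the article, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def simulate_wishart_euler(sigma0, omega, m, q, dt, n_steps, rng):
    """Naive Euler sketch of the Wishart SDE
    dSigma = (Omega Omega' + M Sigma + Sigma M') dt
             + sqrt(Sigma) dW Q + Q' dW' sqrt(Sigma)."""
    d = sigma0.shape[0]
    sigma = sigma0.copy()
    path = [sigma0.copy()]
    const_drift = omega @ omega.T
    for _ in range(n_steps):
        dw = rng.standard_normal((d, d)) * np.sqrt(dt)
        s_half = np.real(sqrtm(sigma))
        drift = const_drift + m @ sigma + sigma @ m.T
        noise = s_half @ dw @ q + q.T @ dw.T @ s_half
        sigma = sigma + drift * dt + noise
        # project back onto the PSD cone: a raw Euler step can leave it
        w, v = np.linalg.eigh(0.5 * (sigma + sigma.T))
        sigma = (v * np.clip(w, 0.0, None)) @ v.T
        path.append(sigma.copy())
    return np.array(path)
```

The eigenvalue clipping is a pragmatic guard: the naive Euler iterate need not stay positive semidefinite, which is precisely why dedicated simulation schemes such as those developed in the article are preferable.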

Author(s):  
László Márkus ◽  
Ashish Kumar

Abstract Association or interdependence of two stock prices is analyzed, and selection criteria for a suitable model are developed in the present paper. The association is generated by stochastic correlation, given by a stochastic differential equation (SDE), creating interdependent Wiener processes. These, in turn, drive the SDEs in the Heston model for stock prices. To choose from possible stochastic correlation models, two goodness-of-fit procedures are proposed based on the copula of Wiener increments. One uses the confidence domain for the centered Kendall function, and the other relies on strong and weak tail dependence. The constant correlation model and two different stochastic correlation models, given by a Jacobi process and by a hyperbolic tangent transformation of an Ornstein-Uhlenbeck process (HtanOU), are compared by analyzing daily close prices of Apple and Microsoft stocks. The constant correlation, i.e., Gaussian copula, model is unanimously rejected by both methods, but the other two are acceptable at a 95% confidence level. The analysis also reveals that even for Wiener processes, stochastic correlation can create tail dependence, unlike constant correlation, which results in multivariate normal distributions and hence zero tail dependence. Models with stochastic correlation are therefore suitable for describing more dangerous situations in terms of correlation risk.
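A minimal sketch of how an HtanOU-type stochastic correlation can drive interdependent Wiener increments is given below. The exact drift and parametrization used by the authors are not reproduced here, so all parameter names are illustrative.

```python
import numpy as np

def simulate_htanou_paths(rho0, kappa, theta, xi, dt, n_steps, rng):
    """Sketch: an Ornstein-Uhlenbeck process X is transformed by tanh so the
    correlation rho_t = tanh(X_t) stays inside (-1, 1); rho_t then couples a
    second Wiener process to the first via the usual Cholesky construction."""
    x = np.arctanh(rho0)
    w1 = np.zeros(n_steps + 1)
    w2 = np.zeros(n_steps + 1)
    rho = np.empty(n_steps + 1)
    rho[0] = rho0
    for i in range(n_steps):
        z1, z2 = rng.standard_normal(2)
        dw1 = np.sqrt(dt) * z1
        dw_perp = np.sqrt(dt) * z2
        # interdependent increments: dW2 = rho dW1 + sqrt(1 - rho^2) dW_perp
        w1[i + 1] = w1[i] + dw1
        w2[i + 1] = w2[i] + rho[i] * dw1 + np.sqrt(1.0 - rho[i] ** 2) * dw_perp
        # OU step for the latent process driving the correlation
        x += kappa * (theta - x) * dt + xi * np.sqrt(dt) * rng.standard_normal()
        rho[i + 1] = np.tanh(x)
    return w1, w2, rho
```

The increments of `w1` and `w2` built this way are exactly the kind of copula data on which the two proposed goodness-of-fit procedures operate.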


2015 ◽  
Vol 18 (06) ◽  
pp. 1550036 ◽  
Author(s):  
ELISA ALÒS ◽  
RAFAEL DE SANTIAGO ◽  
JOSEP VIVES

Abstract In this paper, we present a new, simple and efficient calibration procedure that uses both the short- and long-term behavior of the Heston model in a coherent fashion. Using a suitable Hull and White-type formula, we develop a methodology to obtain an approximation to the implied volatility. Using this approximation, we calibrate the full set of parameters of the Heston model. One of the reasons our calibration for short times to maturity is so accurate is that we take into account the term structure for large times to maturity: we may thus say that calibration is not "memoryless," in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla option prices, which may then be used to price exotic derivatives. The methodology is simple, accurate, and fast, and it requires minimal computational effort.
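A generic least-squares calibration loop of the kind described has the shape sketched below. The authors' Hull-and-White-type implied-volatility approximation is not reproduced; `approx_implied_vol` is a deliberately crude placeholder (only its ATM level, the exact expected average variance of the Heston CIR factor, is standard), and every name and bound is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def approx_implied_vol(params, k, t):
    """Placeholder approximation (NOT the authors' formula): ATM level from
    theta + (v0 - theta) * (1 - exp(-kappa*t)) / (kappa*t), plus a crude
    illustrative skew term in log-moneyness k."""
    v0, kappa, theta, sigma, rho = params
    avg_var = theta + (v0 - theta) * (1.0 - np.exp(-kappa * t)) / (kappa * t)
    skew = 0.5 * rho * sigma * k / (1.0 + kappa * t)
    return np.sqrt(np.maximum(avg_var + skew, 1e-8))

def calibrate_heston(market_vols, log_moneyness, maturities, x0):
    """Sketch of the outer least-squares loop; bounds are illustrative."""
    def residuals(p):
        return approx_implied_vol(p, log_moneyness, maturities) - market_vols
    lb = [1e-4, 1e-2, 1e-4, 1e-2, -0.999]
    ub = [2.0, 10.0, 2.0, 3.0, 0.999]
    return least_squares(residuals, x0, bounds=(lb, ub))
```

The point of a closed-form approximation in the inner function is that each residual evaluation is cheap, which is what makes such a calibration loop fast in practice.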


2010 ◽  
Vol 27 (1) ◽  
pp. 108-121 ◽  
Author(s):  
Davide Dionisi ◽  
Fernando Congeduti ◽  
Gian Luigi Liberti ◽  
Francesco Cardillo

Abstract This paper presents a parametric automatic procedure to calibrate the multichannel Rayleigh–Mie–Raman lidar at the Institute for Atmospheric Science and Climate of the Italian National Research Council (ISAC-CNR) in Tor Vergata, Rome, Italy, using as a reference the operational 0000 UTC soundings at the WMO station 16245 (Pratica di Mare), located about 25 km southwest of the lidar site. The procedure, which is applied to both channels of the system, first identifies portions of the lidar and radiosonde profiles that are assumed to sample the same features of the water vapor profile, taking into account the different time and space sampling. Then, it computes the calibration coefficient with a best-fit procedure, weighted by the instrumental errors of both the radiosounding and the lidar. The parameters to be set in the procedure are described, and the values adopted are discussed. The procedure was applied to a set of 57 sessions of nighttime 1-min-sampling lidar profiles (roughly 300 h of measurements) covering the whole annual cycle (February 2007–September 2008). A calibration coefficient is computed for each measurement session. The variability of the calibration coefficients (∼10%) over periods with the same instrumental setting is reduced compared to the values obtained with the previously adopted, operator-assisted, and time-consuming calibration procedure. The reduction of variability, as well as the absence of evident trends, gives confidence in both the system stability and the developed procedure. Because of the definition of the calibration coefficient and the different sampling between lidar and radiosonde, a contribution to the variability is expected from aerosol extinction and from the spatial and temporal variability of the water vapor mixing ratio. A preliminary analysis aimed at identifying the contribution to the variability from these factors is presented. The parametric nature of the procedure makes it suitable for application to similar Raman lidar systems.
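The core of such a procedure is an error-weighted fit of a single calibration coefficient over matched lidar/radiosonde samples. A minimal sketch, assuming a proportional model and effective-variance weighting that combines both instrumental errors (variable names are illustrative, not those of the ISAC-CNR code), is:

```python
import numpy as np

def weighted_calibration_coefficient(s_lidar, w_sonde, err_lidar, err_sonde,
                                     n_iter=5):
    """Sketch: fit w_sonde ~= K * s_lidar by weighted least squares, with
    effective variance err_sonde^2 + K^2 * err_lidar^2 combining the errors
    of both instruments (hence the short fixed-point iteration on K)."""
    k = np.sum(w_sonde * s_lidar) / np.sum(s_lidar ** 2)  # unweighted start
    for _ in range(n_iter):
        weights = 1.0 / (err_sonde ** 2 + (k * err_lidar) ** 2)
        k = np.sum(weights * w_sonde * s_lidar) / np.sum(weights * s_lidar ** 2)
    k_err = np.sqrt(1.0 / np.sum(weights * s_lidar ** 2))
    return k, k_err
```

Running this per measurement session, as the paper does, yields one coefficient per session whose spread can then be tracked across instrumental settings.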


2007 ◽  
Vol 24 (10) ◽  
pp. 1785-1799 ◽  
Author(s):  
Sheldon Bacon ◽  
Fred Culkin ◽  
Nigel Higgs ◽  
Paul Ridout

Abstract Standard seawater (SSW) has been employed by oceanographers as a reference material in the determination of salinity for over a century. In all that time, this is the first study to determine the uncertainty of the SSW manufacturing process. SSW is calibrated against carefully prepared solutions of potassium chloride (KCl). All uncertainties in the preparation and measurement of KCl solutions and of new SSW are calculated. The expanded uncertainty of the SSW conductivity ratio is found to be 1 × 10−5, based on a coverage factor of 2, at the time of manufacture. There is no discernible "within batch" variability. No significant variability of quality within or between batches of KCl is found. Measurements of SSW "offsets" from the label conductivity ratio up to 5 yr after batch manufacture are reported, and no change in label conductivity ratio for SSW batches P130 through P144 outside the expanded uncertainty of 1 × 10−5 is found. This last result is in contrast to some other studies, and suggestions are offered as to why this may be the case.
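The quoted expanded uncertainty follows the standard root-sum-square combination of independent uncertainty components scaled by the coverage factor. A minimal sketch, with made-up component values standing in for the weighing, dilution, and conductivity-bridge terms actually budgeted in the study, is:

```python
import numpy as np

def expanded_uncertainty(component_uncertainties, coverage_factor=2.0):
    """Standard GUM-style combination: root-sum-square of independent
    standard uncertainties, scaled by the coverage factor (k = 2 here,
    as in the SSW study)."""
    u_c = np.sqrt(np.sum(np.square(component_uncertainties)))
    return coverage_factor * u_c

# illustrative component values only, not the paper's budget
U = expanded_uncertainty(np.array([3.0e-6, 2.0e-6, 3.5e-6]))
```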


2017 ◽  
Vol 20 (1) ◽  
pp. 1-17 ◽  
Author(s):  
M. Ferrante ◽  
C. Capponi

Abstract The numerical and analytical models used for transient simulations, and hence for pressurized pipe system diagnosis, require the definition of a rheological component related to the pipe material. The introduction and subsequent widespread use of polymeric pipes, characterized by a viscoelastic behavior, increased the complexity and the number of parameters involved in this component with respect to metallic materials. Furthermore, since tests on specimens are not reliable, a calibration procedure based on transient tests is required to estimate the viscoelastic parameters. In this paper, the trade-off between the accuracy and the simplicity of the viscoelastic component is explored, based on the Akaike criterion. Several aspects of the calibration procedure are also examined, such as the use of a frequency-domain numerical model and of different standard optimization algorithms. The procedure is tested on synthetic data and then applied to experimental data acquired during transients on a high-density polyethylene pipe. The results show that the best model among those used for the considered system consists of a spring in series with three Kelvin–Voigt elements.
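For reference, the creep compliance of the winning configuration (a spring in series with Kelvin–Voigt elements) and the Akaike comparison it feeds into can be sketched as follows. Parameter names and the Gaussian-residual form of the AIC are assumptions, not the paper's exact formulation.

```python
import numpy as np

def kelvin_voigt_creep(t, j0, j_k, tau_k):
    """Creep compliance of a spring in series with Kelvin-Voigt elements:
    J(t) = J0 + sum_k J_k * (1 - exp(-t / tau_k))."""
    t = np.asarray(t, dtype=float)[:, None]
    return j0 + np.sum(j_k * (1.0 - np.exp(-t / tau_k)), axis=1)

def aic(residuals, n_params):
    """Akaike information criterion under Gaussian residuals (up to an
    additive constant); lower is better, penalizing extra parameters."""
    n = residuals.size
    return n * np.log(np.mean(residuals ** 2)) + 2 * n_params
```

Comparing the AIC of one-, two-, and three-element fits is exactly the accuracy-versus-simplicity trade-off the paper explores: each added Kelvin–Voigt element improves the fit but pays a 2-per-parameter penalty.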


ACTA IMEKO ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 48 ◽
Author(s):  
Denise De Zanet ◽  
Monica Battiston ◽  
Elisabetta Lombardi ◽  
Ruben Specogna ◽  
Francesco Trevisan ◽  
...  

Abstract The accuracy of quantitative measurements is an essential prerequisite for the characterization and definition of the complex dynamic phenomena occurring in cell biology. In research projects that involve the induction of blood coagulation under flow in artificial microfluidic channels, thrombus volume is an important quantity to estimate, as it is a significant index of the individual thrombotic risk profile. Given its importance in the early diagnosis of cardiovascular diseases, the estimated thrombus volume should reflect reality. In 3D confocal microscopy, systematic errors can arise from distortions of the axial distance, whose accurate calibration remains a challenge. As a result, 3D reconstructions show a noticeable axial elongation, and volume measurements are thus overestimated. In this paper, a 400-600 % volume overestimation is demonstrated, and a new, easy-to-use, automatic calibration procedure is outlined for this specific microfluidic and optical context. The proposed adaptive algorithm leads to automatic compensation of the elongation error and to accurate thrombus volume measurement. The method was calibrated using fluorescent beads of known volume, validated with groups of several distinct platelets, and finally applied to platelet thrombi.
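The bead-based calibration admits a simple sketch: if the lateral (x, y) scaling is trusted and the bias is a pure axial (z) elongation, the ratio of measured to true bead volume estimates the axial stretch directly, and the z step can be rescaled accordingly. All names and the pure-axial assumption are illustrative, not the paper's algorithm.

```python
import numpy as np

def axial_correction_factor(measured_bead_volumes, true_bead_volume):
    """Estimate the axial stretch from beads of known volume, assuming the
    elongation acts only along z so volume scales linearly with the stretch."""
    return np.mean(measured_bead_volumes) / true_bead_volume

def corrected_volume(voxel_count, dx, dy, dz_nominal, axial_factor):
    """Thrombus volume with the axial step rescaled by the calibrated factor."""
    return voxel_count * dx * dy * (dz_nominal / axial_factor)
```

With the reported 400-600 % overestimation, `axial_factor` would fall in the range of roughly 4 to 6, which conveys the size of the distortion being compensated.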


1993 ◽  
Vol 25 (6) ◽  
pp. 817-826 ◽  
Author(s):  
M B Gonçalves ◽  
I Ulysséa-Neto

In this paper the development of a new gravity-opportunity model for trip distribution is presented. Wilson's formalism for obtaining the gravity model and Schneider's intervening-opportunities model are used as the basis for deducing the new model. The notational difficulties associated with the amalgamation of the gravity and opportunity models are circumvented via the definition of an intervening-opportunity matrix. The conventional gravity model is shown to be a particular case of the new gravity-opportunity model. The calibration of the new model is achieved by using Furness's matrix calibration procedure together with Hooke and Jeeves's nonlinear optimization method. Finally, a practical application of the model for estimating intermunicipal passenger flows by public transport in Southern Brazil is reported.
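Furness's procedure mentioned above is the classic biproportional (iterative proportional fitting) balancing of a trip matrix: rows and columns of a seed matrix are scaled alternately until trip productions and attractions match the given margins. A minimal sketch, with illustrative names and a simple convergence test, is:

```python
import numpy as np

def furness_balance(seed, origins, destinations, tol=1e-8, max_iter=500):
    """Sketch of Furness's biproportional balancing. `seed` carries the
    deterrence/opportunity information; `origins` and `destinations` are the
    target row and column totals (assumed to have equal sums)."""
    t = seed.astype(float).copy()
    for _ in range(max_iter):
        t *= (origins / np.maximum(t.sum(axis=1), 1e-300))[:, None]
        t *= (destinations / np.maximum(t.sum(axis=0), 1e-300))[None, :]
        if np.allclose(t.sum(axis=1), origins, rtol=tol):
            return t
    return t

# usage: trips = furness_balance(seed_matrix, row_totals, col_totals)
```

In the paper's setting, an outer search such as Hooke and Jeeves's pattern method adjusts the deterrence parameters that shape the seed, while this inner loop enforces the margins at each trial point.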


2017 ◽  
Vol 6 (1) ◽  
pp. 3 ◽  
Author(s):  
Long Teng ◽  
Matthias Ehrhardt ◽  
Michael Günther
