Green Simulation with Database Monte Carlo

2021 ◽  
Vol 31 (1) ◽  
pp. 1-26
Author(s):  
Mingbin Feng ◽  
Jeremy Staum

In a setting in which experiments are performed repeatedly with the same simulation model, green simulation means reusing outputs from previous experiments to answer the question currently being asked of the model. In this article, we address the setting in which experiments are run to answer questions quickly, with a time limit providing a fixed computational budget, and then idle time is available for further experimentation before the next question is asked. The general strategy is database Monte Carlo for green simulation: the output of experiments is stored in a database and used to improve the computational efficiency of future experiments. In this article, the database provides a quasi-control variate, which reduces the variance of the estimated mean response in a future experiment that has a fixed computational budget. We propose a particular green simulation procedure using quasi-control variates, addressing practical issues such as experiment design, and analyze its theoretical properties. We show that, under some conditions, the variance of the estimated mean response in an experiment with a fixed computational budget drops to zero over a sequence of repeated experiments, as more and more idle time is invested in creating databases. Our numerical experiments on the procedure show that using idle time to create databases of simulation output provides variance reduction immediately, and that the variance reduction grows over time in a way that is consistent with the convergence analysis.
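The quasi-control-variate idea can be sketched as follows: idle time builds a database that pins down the mean of a correlated output from a previous experiment, and that output then serves as a control in the budget-limited current experiment. The response model and all parameters below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, theta):
    # toy simulation response at parameter theta, driven by noise x
    return np.sin(theta) + theta * x + x**2

# --- idle time: build a large database at yesterday's parameter ---
theta0 = 1.0
db_mean = model(rng.standard_normal(1_000_000), theta0).mean()

# --- today's experiment: small fixed budget at a new parameter ---
theta1, n = 1.2, 1_000
x = rng.standard_normal(n)
y = model(x, theta1)       # quantity of interest
c = model(x, theta0)       # quasi-control variate; mean known from database

cov = np.cov(y, c)
beta = cov[0, 1] / cov[1, 1]
naive = y.mean()
qcv = naive - beta * (c.mean() - db_mean)   # reduced-variance estimate
```

Because `y` and `c` share the same noise `x`, they are highly correlated, and subtracting the centered control removes most of the variance of the naive estimator without increasing the current experiment's budget.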


2019 ◽  
Vol 11 (3) ◽  
pp. 815 ◽  
Author(s):  
Yijuan Liang ◽  
Xiuchuan Xu

Pricing multi-asset options has always been one of the key problems in financial engineering because of their high dimensionality and the low convergence rates of pricing algorithms. This paper studies a method to accelerate Monte Carlo (MC) simulations for pricing multi-asset options with stochastic volatilities. First, a conditional Monte Carlo (CMC) pricing formula is constructed to reduce the dimension and variance of the MC simulation. Then, an efficient martingale control variate (CV), based on the martingale representation theorem, is designed by selecting volatility parameters in the approximated option price for further variance reduction. Numerical tests illustrated the sensitivity of the CMC method to correlation coefficients and the effectiveness and robustness of our martingale CV method. The idea in this paper is also applicable to the valuation of other derivatives with stochastic volatility.
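Conditional Monte Carlo of the kind used above (integrating out part of the randomness analytically) can be sketched on a toy expectation. The Rao–Blackwell step below is exact for standard normals; it is an illustration of the principle, not the paper's CMC pricing formula.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
_erf = np.vectorize(math.erf)

def pdf(z):   # standard normal density
    return np.exp(-z**2 / 2.0) / math.sqrt(2.0 * math.pi)

def cdf(z):   # standard normal distribution function
    return 0.5 * (1.0 + _erf(z / math.sqrt(2.0)))

# target: E[(Z1 + Z2)^+] for independent standard normals Z1, Z2
n = 10_000
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

naive = np.maximum(z1 + z2, 0.0)     # plain Monte Carlo

# conditional MC: average the closed-form conditional expectation
# E[(Z1 + z)^+ | Z2 = z] = z * cdf(z) + pdf(z)
cmc = z2 * cdf(z2) + pdf(z2)

print(naive.mean(), cmc.mean())      # both estimate 1/sqrt(pi)
```

Averaging the conditional expectation instead of the raw payoff can never increase the variance, which is the same mechanism that lets the CMC formula above reduce both dimension and variance.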



Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2930
Author(s):  
Siow Woon Jeng ◽  
Adem Kiliçman

The rough Heston model is a form of stochastic Volterra equation, which was proposed to model stock price volatility. It captures some important qualities that can be observed in the financial market: high endogeneity, prevention of statistical arbitrage, liquidity asymmetry, and metaorders. Unlike stochastic differential equations, stochastic Volterra equations are extremely computationally expensive to simulate; in other words, it is difficult to compute option prices under the rough Heston model by conventional Monte Carlo simulation. In this paper, we prove that for Euler's discretization of the stochastic Volterra equation with non-Lipschitz diffusion coefficient, the strong error E[|V_t − V_{t_n}|^p] is finitely bounded by an exponential function of t. Furthermore, the weak error |E[V_t − V_{t_n}]| and the convergence of the scheme are proven at the rate O(n^{−H}). In addition, we propose a mixed Monte Carlo method combining the control variate and multilevel methods. The numerical experiments indicate that the proposed method achieves a substantial cost-adjusted variance reduction of up to 17 times, and that it outperforms its constituent methods in cost-adjusted performance. Because the numerical experiments are on a cost-adjusted basis, the results also indicate a high potential for use in practice.
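The multilevel ingredient of the mixed method can be sketched on a much simpler model. The toy below uses an Euler scheme for geometric Brownian motion (an assumption; the paper's setting is the rough Heston / stochastic Volterra equation), with a coarse discretization driven by the same Brownian increments acting as the cheap level.

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_terminal(n_steps, z, s0=100.0, r=0.05, sigma=0.2, T=1.0):
    # Euler scheme for geometric Brownian motion dS = r S dt + sigma S dW
    dt = T / n_steps
    s = np.full(z.shape[0], s0)
    for k in range(n_steps):
        s = s + r * s * dt + sigma * s * np.sqrt(dt) * z[:, k]
    return s

K, n_paths, n_fine = 100.0, 20_000, 64
z = rng.standard_normal((n_paths, n_fine))

# fine-level discounted call payoff
fine = np.exp(-0.05) * np.maximum(euler_terminal(n_fine, z) - K, 0.0)

# coarse level driven by the same Brownian increments, summed pairwise
z_coarse = (z[:, ::2] + z[:, 1::2]) / np.sqrt(2.0)
coarse = np.exp(-0.05) * np.maximum(
    euler_terminal(n_fine // 2, z_coarse) - K, 0.0)

# multilevel identity: E[fine] = E[coarse] + E[fine - coarse];
# the correction term is cheap to estimate because its variance is small
print(fine.std(), (fine - coarse).std())
```

The small variance of the level difference is what makes the multilevel decomposition pay off; combining it with a control variate, as the paper does, compounds the cost-adjusted gain.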



2018 ◽  
Vol 10 (2) ◽  
pp. 10
Author(s):  
George Chang

We apply the Monte Carlo simulation algorithm developed by Broadie and Glasserman (1997) and the control variate technique first introduced to asset pricing via simulation by Boyle (1977) to examine the efficiency of American put option pricing via this combined method. The importance and effectiveness of variance reduction is clearly demonstrated in our simulation results. We also found that the control variate technique does not work as well for deep-in-the-money American put options. This is because deep-in-the-money American options are more likely to be exercised early, so the values of the American options are less in line (or less correlated) with those of their European counterparts.

The same FPESS can also be observed when investigators partition large datasets into smaller datasets to address a variety of auditing questions. In this study, we fill the empirical gap in the literature by investigating the sensitivity of the FPESS to partitioned datasets. We randomly selected 16 balance-sheet datasets from the China Stock Market Financial Statements Database™ that tested to be Benford-conforming, denoted RBCD. We then explored how partitioning these datasets affects the FPESS by repeated random sampling: first 10% of the RBCD, and then 250 observations from the RBCD. This created two partitioned groups of 160 datasets each. The statistical profile observed was as follows: for the RBCD, there were no indications of non-conformity; for the 10% sample, there were no overall indications that extended procedures would be warranted; and for the 250-observation sample, there were a number of indications that the dataset was non-conforming. This demonstrates clearly that small datasets are indeed likely to create the FPESS. We offer a discussion of these results, with implications for audits in the big-data context, where the audit in-charge would find it necessary to partition the datasets of the client.
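The control-variate pairing described in the option-pricing abstract above, a European put with a known closed-form price serving as control for an American put estimate, can be sketched as follows. This is a toy illustration using Longstaff–Schwarz-style regression on simulated GBM paths, not the Broadie–Glasserman algorithm of the paper; all parameters are invented.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def bs_put(s0, K, r, sigma, T):
    # Black-Scholes European put: the control variate's known mean
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return K * math.exp(-r * T) * N(-d2) - s0 * N(-d1)

s0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 20_000, 50
dt = T / n_steps
disc = math.exp(-r * dt)

# simulate GBM paths (exact log-increments)
z = rng.standard_normal((n_paths, n_steps))
log_s = math.log(s0) + np.cumsum(
    (r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z, axis=1)
paths = np.exp(log_s)

# least-squares backward induction for the American put
cash = np.maximum(K - paths[:, -1], 0.0)
for t in range(n_steps - 2, -1, -1):
    cash *= disc
    s, itm = paths[:, t], K - paths[:, t] > 0.0
    if itm.sum() > 10:
        cont = np.polyval(np.polyfit(s[itm], cash[itm], 2), s[itm])
        ex = K - s[itm]
        cash[itm] = np.where(ex > cont, ex, cash[itm])
american = disc * cash                        # per-path estimate

# European put payoff on the same paths as control variate
euro = math.exp(-r * T) * np.maximum(K - paths[:, -1], 0.0)
cov = np.cov(american, euro)
beta = cov[0, 1] / cov[1, 1]
cv_est = american.mean() - beta * (euro.mean() - bs_put(s0, K, r, sigma, T))
```

Lowering `s0` to make the put deep in the money shortens the paths' lives before exercise and weakens the correlation between `american` and `euro`, which is the effect the abstract reports.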



2019 ◽  
Vol 22 (2) ◽  
pp. 258-263
Author(s):  
Tuan Duc Hoang ◽  
Tai Thanh Duong ◽  
Oanh Thi Luong ◽  
Loan Thi Hong Truong

Introduction: Monte Carlo (MC) is considered the most accurate method for calculating dose distributions in radiation therapy. However, its limitation is the long calculation time needed to reach the desired statistical uncertainty in dose calculations, as in clinical practice. To overcome this limitation, variance reduction techniques (VRTs) have been developed to shorten the calculation time while maintaining accuracy. The purpose of this study is therefore to apply the VRTs in the EGSnrc code to find the optimal method for accelerator simulation and dose calculation with the MC method. Methods: The HPD Siemens Primus linear accelerator at the General Hospital of Dong Nai was simulated using the BEAMnrc code and several variance reduction techniques, such as range rejection, photon forcing, and bremsstrahlung photon splitting (uniform, selective, and directional). These VRTs were used under the same set of input parameters: 2×10^8 histories, a photon energy of 6 MV, and the same structure, size, and material of the phantom. The computational efficiency ε, calculated as ε = 1/(T·σ²), where T is the CPU time of the calculation and σ² is an estimate of the variance, was used to evaluate and select the VRT with the best computational efficiency. Results: The results showed good agreement between calculated and measured doses when applying the different VRTs. These techniques significantly reduced simulation uncertainty compared with the analog cases. Specifically, the efficiency of DBS and UBS improved by more than 90 times and 15 times, respectively, compared with the analog instances. Range rejection and photon forcing also improved the efficiency of the simulation, but not significantly. Conclusions: Applying VRTs in EGSnrc increases the efficiency of the simulation. VRTs are a powerful tool that should be applied in EGSnrc simulations to improve calculation efficiency by reducing simulation time and variance. Our results show that directional bremsstrahlung splitting (DBS) gives the best computational efficiency.
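The efficiency measure ε = 1/(T·σ²) used above can be illustrated on a toy integrand, with antithetic variates standing in for the EGSnrc-specific techniques (an assumption for illustration; none of this is EGSnrc code).

```python
import time
import numpy as np

rng = np.random.default_rng(4)

def efficiency(var_of_mean, cpu_seconds):
    # epsilon = 1 / (T * sigma^2), the figure of merit from the abstract
    return 1.0 / (cpu_seconds * var_of_mean)

n = 200_000  # total random draws for either estimator

# analog (plain) estimator of E[exp(Z)], Z ~ N(0, 1)
t0 = time.perf_counter()
analog = np.exp(rng.standard_normal(n))
t_analog = time.perf_counter() - t0
var_analog = analog.var() / n            # variance of the sample mean

# antithetic variates as a simple stand-in variance reduction technique
t0 = time.perf_counter()
z = rng.standard_normal(n // 2)
anti = 0.5 * (np.exp(z) + np.exp(-z))
t_anti = time.perf_counter() - t0
var_anti = anti.var() / (n // 2)

print(efficiency(var_analog, t_analog), efficiency(var_anti, t_anti))
```

Because ε multiplies cost and variance, a VRT only wins if its variance reduction outweighs any extra per-history work, which is exactly the comparison the study runs across DBS, UBS, range rejection, and photon forcing.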



2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Zineb El Filali Ech-Chafiq ◽  
Jérôme Lelong ◽  
Adil Reghai

Abstract: Many pricing problems boil down to the computation of a high-dimensional integral, which is usually estimated using Monte Carlo. In fact, the accuracy of a Monte Carlo estimator with M simulations is given by σ/√M, meaning that its convergence is immune to the dimension of the problem. However, this convergence can be relatively slow depending on the standard deviation σ of the function to be integrated. To resolve this problem, one can apply variance reduction techniques such as importance sampling, stratification, or control variates. In this paper, we study two approaches for improving the convergence of Monte Carlo using neural networks. The first approach relies on the fact that many high-dimensional financial problems are of low effective dimension. We present a method to reduce the dimension of such problems in order to keep only the necessary variables; the integration can then be done using fast numerical integration techniques such as Gaussian quadrature. The second approach consists in building an automatic control variate using neural networks. We learn the function to be integrated (which incorporates the diffusion model plus the payoff function) in order to build a network that is highly correlated with it. As the network we use can be integrated exactly, we can use it as a control variate.
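A rough sketch of the second approach: learn a surrogate of the integrand, integrate the surrogate exactly, and use it as a control variate. Here a least-squares polynomial stands in for the neural network (an assumption; the paper trains a network), since its integral against a Gaussian is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.exp(x)        # integrand; target is E[f(Z)], Z ~ N(0, 1)

# "train" a surrogate on a pilot sample; a degree-4 least-squares
# polynomial stands in for the paper's neural network
pilot = rng.standard_normal(2_000)
coef = np.polyfit(pilot, f(pilot), 4)

# the surrogate integrates exactly against N(0, 1):
# E[Z^4, Z^3, Z^2, Z^1, Z^0] = 3, 0, 1, 0, 1
surrogate_mean = coef @ np.array([3.0, 0.0, 1.0, 0.0, 1.0])

# control-variate estimator on a fresh sample (unit coefficient,
# since the surrogate is built to track f closely)
n = 10_000
z = rng.standard_normal(n)
y, c = f(z), np.polyval(coef, z)
est = y.mean() - (c.mean() - surrogate_mean)
```

The estimator's variance is driven by the residual `y - c`, so the better the learned surrogate fits the integrand, the larger the variance reduction, which is the paper's rationale for using a highly correlated network.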



2018 ◽  
Vol 482 (6) ◽  
pp. 627-630
Author(s):  
D. Belomestny ◽  
L. Iosipoi ◽  
N. Zhivotovskiy ◽  
...  





