control variate
Recently Published Documents

TOTAL DOCUMENTS: 120 (FIVE YEARS: 36)
H-INDEX: 14 (FIVE YEARS: 4)

2021 ◽  
Author(s):  
Xiaoyu Wang ◽  
Lei Hou ◽  
Xueyu Geng ◽  
Peibin Gong ◽  
Honglei Liu

The characterization of proppant transport at a field-engineering scale remains challenging due to the lack of direct subsurface measurements. Features that control proppant transport may link experimental and numerical observations to practical operations at the field scale. To improve numerical and laboratory simulations, we propose a machine-learning-based workflow to evaluate the essential features of proppant transport and their corresponding calculations. The proppant flow in fractures is estimated by applying gated recurrent unit (GRU) and support-vector machine (SVM) algorithms to measurements obtained from shale gas fracturing operations. Over 430,000 groups of fracturing data are collected and pre-processed by the proppant transport models to calculate key features, including settlement, stratified flow and the inception of settled particles. The features are then fed into the machine learning algorithms for pressure prediction. The root mean squared error (RMSE) is used as the criterion for ranking the selected features via the control variate method. Our results show that the stratified-flow (fracture-level) feature provides a better interpretation of proppant transport, with the Bi-power model producing the best predictions. The settlement and inception (particle-level) features perform better in cases where the pressure fluctuates significantly, indicating that more complex fractures may have been generated. Moreover, our analyses of the remaining errors in the pressure-ascending cases suggest that (1) introducing the alternate-injection process and (2) improving the calculation of proppant transport in complex fracture networks and highly filled fractures would benefit both experimental observations and field applications.
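
As a minimal illustration of the control-variate feature ranking described above, the sketch below varies one candidate feature at a time while the remaining inputs stay fixed and compares the resulting prediction RMSE; the synthetic data, feature names, and SVR baseline are illustrative assumptions rather than the authors' workflow.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    def rmse_with_features(X, y, feature_idx):
        """Train on the selected feature columns and return the test RMSE."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[:, feature_idx], y, test_size=0.2, random_state=0)
        pred = SVR(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
        return float(np.sqrt(np.mean((pred - y_te) ** 2)))

    # Placeholder features (e.g. settlement, stratified flow, inception of
    # settled particles) and a placeholder pressure response.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = X @ np.array([0.5, 1.5, 0.2]) + 0.1 * rng.normal(size=1000)

    all_features = list(range(X.shape[1]))
    full_rmse = rmse_with_features(X, y, all_features)
    for k, name in enumerate(["settlement", "stratified_flow", "inception"]):
        reduced = [i for i in all_features if i != k]
        print(f"{name}: RMSE without feature = "
              f"{rmse_with_features(X, y, reduced):.4f}, full = {full_rmse:.4f}")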


2021 ◽  
Author(s):  
Georgios Zervakis ◽  
Ourania Spantidi ◽  
Iraklis Anagnostopoulos ◽  
Hussam Amrouch ◽  
Jörg Henkel
Keyword(s):  

Author(s):  
Leah F. South ◽  
Marina Riabiz ◽  
Onur Teymur ◽  
Chris J. Oates

Markov chain Monte Carlo is the engine of modern Bayesian statistics, being used to approximate the posterior and derived quantities of interest. Despite this, the issue of how the output from a Markov chain is postprocessed and reported is often overlooked. Convergence diagnostics can be used to control bias via burn-in removal, but these do not account for (common) situations where a limited computational budget engenders a bias-variance trade-off. The aim of this article is to review state-of-the-art techniques for postprocessing Markov chain output. Our review covers methods based on discrepancy minimization, which directly address the bias-variance trade-off, as well as general-purpose control variate methods for approximating expected quantities of interest. Expected final online publication date for the Annual Review of Statistics and Its Application, Volume 9 is March 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
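
As a minimal illustration of a general-purpose control variate for Markov chain output, the sketch below uses the score function of the target (which has zero mean under mild conditions) as a gradient-based control variate on a toy Gaussian example; the random-walk sampler, target, and test function are illustrative assumptions, not code from the review.

    import numpy as np

    rng = np.random.default_rng(1)

    def rw_metropolis(n, step=1.0):
        """Toy random-walk Metropolis chain targeting the standard normal."""
        x = np.empty(n)
        x[0] = 0.0
        for t in range(1, n):
            prop = x[t - 1] + step * rng.normal()
            # log acceptance ratio for the N(0, 1) target
            if np.log(rng.uniform()) < 0.5 * (x[t - 1] ** 2 - prop ** 2):
                x[t] = prop
            else:
                x[t] = x[t - 1]
        return x

    chain = rw_metropolis(20_000)[5_000:]        # burn-in removal
    f = chain ** 2                               # quantity of interest, E[f] = 1
    score = -chain                               # grad log pi(x) for N(0, 1)

    # Least-squares coefficient, then the control variate estimator.
    beta = np.cov(f, score)[0, 1] / np.var(score)
    print("plain MCMC estimate:      ", f.mean())
    print("control variate estimate: ", np.mean(f - beta * score))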


Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2930
Author(s):  
Siow Woon Jeng ◽  
Adem Kiliçman

The rough Heston model is a form of stochastic Volterra equation that was proposed to model stock price volatility. It captures some important qualities observed in financial markets: high endogeneity, prevention of statistical arbitrage, liquidity asymmetry, and metaorders. Unlike a stochastic differential equation, a stochastic Volterra equation is extremely computationally expensive to simulate; in other words, it is difficult to compute option prices under the rough Heston model by conventional Monte Carlo simulation. In this paper, we prove that, for Euler's discretization of the stochastic Volterra equation with non-Lipschitz diffusion coefficient, the moment E[|V_t − V_t^n|^p] is finitely bounded by an exponential function of t. Furthermore, the weak error |E[V_t − V_t^n]| and the convergence of the scheme are proven at the rate O(n^{-H}). In addition, we propose a mixed Monte Carlo method that combines the control variate and multilevel methods. The numerical experiments indicate that the proposed method achieves a substantial cost-adjusted variance reduction of up to 17 times and outperforms the individual methods it combines in terms of cost-adjusted performance. Because the numerical experiments are evaluated on a cost-adjusted basis, the results also indicate a high potential for use in practice.
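
The rough Heston simulation itself is beyond a short sketch, but the control variate ingredient can be illustrated on a plain geometric Brownian motion: the terminal price, whose mean is known in closed form, serves as the control for a European call. All parameter values below are illustrative assumptions, and the snippet is not the authors' mixed multilevel scheme.

    import numpy as np

    rng = np.random.default_rng(2)
    S0, K, r, sigma, T, M = 100.0, 100.0, 0.02, 0.3, 1.0, 100_000

    # One-step exact simulation of geometric Brownian motion.
    Z = rng.standard_normal(M)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

    # Control variate: the terminal price, with known mean S0 * exp(r * T).
    control, control_mean = ST, S0 * np.exp(r * T)
    beta = np.cov(payoff, control)[0, 1] / np.var(control)
    adjusted = payoff - beta * (control - control_mean)

    print("plain Monte Carlo:        ", payoff.mean())
    print("with control variate:     ", adjusted.mean())
    print("variance reduction factor:", np.var(payoff) / np.var(adjusted))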


2021 ◽  
Vol 40 (3) ◽  
pp. 1-15
Author(s):  
Miguel Crespo ◽  
Adrian Jarabo ◽  
Adolfo Muñoz

We present an unbiased numerical integration algorithm that handles both the low-frequency regions and the high-frequency details of multidimensional integrals. It combines quadrature and Monte Carlo integration by using a quadrature-based approximation as a control variate of the signal. We adaptively build the control variate as a piecewise polynomial, which can be integrated analytically and accurately reconstructs the low-frequency regions of the integrand. We then recover the high-frequency details missed by the control variate using Monte Carlo integration of the residual. Our work leverages importance sampling techniques by working in primary space, allowing the combination of multiple mappings; this enables multiple importance sampling in quadrature-based integration. Our algorithm is generic and can be applied to any complex multidimensional integral. We demonstrate its effectiveness on four low-dimensional applications: transmittance estimation in heterogeneous participating media, low-order scattering in homogeneous media, direct illumination computation, and rendering of distribution effects. Finally, we show how our technique extends to integrands of higher dimensionality by computing the control variate on Monte Carlo estimates of the high-dimensional signal and accounting for the additional dimensions in the residual as well. In all cases, we show accurate results and faster convergence compared to previous approaches.
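
A one-dimensional sketch of the same idea (not the authors' renderer): build a piecewise-linear control variate of the integrand on a coarse grid, integrate it exactly, and recover the missed high-frequency detail by Monte Carlo on the residual. The integrand and grid below are illustrative assumptions.

    import numpy as np

    def f(x):
        """Illustrative integrand: smooth background plus a sharp spike."""
        return np.sin(np.pi * x) + 5.0 * np.exp(-200.0 * (x - 0.7) ** 2)

    # Control variate: piecewise-linear interpolation of f on a coarse grid,
    # whose integral is available exactly as a sum of trapezoid areas.
    knots = np.linspace(0.0, 1.0, 9)
    g_vals = f(knots)

    def g(x):
        return np.interp(x, knots, g_vals)

    integral_g = np.sum(0.5 * (g_vals[1:] + g_vals[:-1]) * np.diff(knots))

    # Monte Carlo on the residual recovers what the control variate misses.
    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, 1.0, 50_000)
    estimate = integral_g + np.mean(f(x) - g(x))

    print("quadrature + residual MC:", estimate)
    print("plain MC for comparison: ", np.mean(f(x)))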


2021 ◽  
Vol 31 (4) ◽  
Author(s):  
Rémi Leluc ◽  
François Portier ◽  
Johan Segers

2021 ◽  
Vol 31 (1) ◽  
pp. 1-26
Author(s):  
Mingbin Feng ◽  
Jeremy Staum

In a setting in which experiments are performed repeatedly with the same simulation model, green simulation means reusing outputs from previous experiments to answer the question currently being asked of the model. In this article, we address the setting in which experiments are run to answer questions quickly, with a time limit providing a fixed computational budget, and then idle time is available for further experimentation before the next question is asked. The general strategy is database Monte Carlo for green simulation: the output of experiments is stored in a database and used to improve the computational efficiency of future experiments. In this article, the database provides a quasi-control variate, which reduces the variance of the estimated mean response in a future experiment that has a fixed computational budget. We propose a particular green simulation procedure using quasi-control variates, addressing practical issues such as experiment design, and analyze its theoretical properties. We show that, under some conditions, the variance of the estimated mean response in an experiment with a fixed computational budget drops to zero over a sequence of repeated experiments, as more and more idle time is invested in creating databases. Our numerical experiments on the procedure show that using idle time to create databases of simulation output provides variance reduction immediately, and that the variance reduction grows over time in a way that is consistent with the convergence analysis.
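
A minimal sketch of the quasi-control variate idea, not the paper's procedure: idle time is spent building a large database of outputs of a reference simulation, giving an accurate estimate of its mean; a later small-budget experiment then simulates the current quantity together with the reference on common random numbers and uses the reference as a control variate centred at that database mean. The toy simulation model below is an assumption.

    import numpy as np

    rng = np.random.default_rng(4)

    def simulate(theta, u):
        """Toy simulation model driven by standard normal inputs u."""
        return np.exp(theta * u) + 0.1 * np.sin(5.0 * u)

    theta_ref, theta_new = 1.0, 1.2

    # Idle time: build a large database for the reference parameter, giving a
    # near-exact mean for the quasi-control variate.
    u_db = rng.standard_normal(1_000_000)
    mu_hat = simulate(theta_ref, u_db).mean()

    # Current experiment: small budget, common random numbers for both runs.
    u = rng.standard_normal(2_000)
    f = simulate(theta_new, u)
    g = simulate(theta_ref, u)
    beta = np.cov(f, g)[0, 1] / np.var(g)

    print("plain estimate:                ", f.mean())
    print("quasi-control variate estimate:", np.mean(f - beta * (g - mu_hat)))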


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Zineb El Filali Ech-Chafiq ◽  
Jérôme Lelong ◽  
Adil Reghai

Many pricing problems boil down to the computation of a high-dimensional integral, which is usually estimated using Monte Carlo. The accuracy of a Monte Carlo estimator built from M simulations is of order σ/√M, meaning that its convergence rate is immune to the dimension of the problem. However, this convergence can be relatively slow depending on the standard deviation σ of the function to be integrated. To address this, one can apply variance reduction techniques such as importance sampling, stratification, or control variates. In this paper, we study two approaches for improving the convergence of Monte Carlo using neural networks. The first approach relies on the fact that many high-dimensional financial problems have low effective dimension. We present a method to reduce the dimension of such problems so as to keep only the necessary variables; the integration can then be carried out using fast numerical integration techniques such as Gaussian quadrature. The second approach consists in building an automatic control variate using neural networks. We learn the function to be integrated (which incorporates the diffusion model together with the payoff function) in order to build a network that is highly correlated with it. Since the network we use can be integrated exactly, we can use it as a control variate.
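
A minimal sketch of the learned control variate, with a small scikit-learn regressor standing in for the paper's neural network: the surrogate is trained to mimic the integrand, its mean is estimated on a large independent sample (a stand-in for the exact integration used in the paper), and the centred surrogate is then used as a control variate. The toy payoff and dimensions are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    d = 5

    def payoff(x):
        """Toy d-dimensional integrand (a basket-call-like payoff)."""
        return np.maximum(x.mean(axis=1), 0.0)

    # Train a small surrogate network on simulated inputs.
    x_train = rng.standard_normal((20_000, d))
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(x_train, payoff(x_train))

    # Mean of the surrogate, estimated on a large independent sample
    # (a stand-in for exact integration of the network).
    g_mean = net.predict(rng.standard_normal((500_000, d))).mean()

    # Small, fresh pricing sample with the centred surrogate as control variate.
    x = rng.standard_normal((5_000, d))
    f, g = payoff(x), net.predict(x)
    beta = np.cov(f, g)[0, 1] / np.var(g)

    print("plain Monte Carlo: ", f.mean())
    print("NN control variate:", np.mean(f - beta * (g - g_mean)))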


2021 ◽  
Vol 43 (3) ◽  
pp. A2268-A2294
Author(s):  
Jamie Fox ◽  
Giray Ökten
