Quasi-Monte Carlo tractability of high dimensional integration over products of simplices

2015 ◽ Vol 31 (6) ◽ pp. 817-834 ◽ Author(s): Kinjal Basu

2014 ◽ Vol 20 (3) ◽ Author(s): Ilya M. Sobol, Boris V. Shukhman

2019 ◽ Vol 25 (3) ◽ pp. 187-207 ◽ Author(s): Manal Bayousef, Michael Mascagni

Abstract We propose the use of randomized (scrambled) quasirandom sequences to provide practical error estimates for quasi-Monte Carlo (QMC) applications. One quasirandom sequence popular among practitioners is the Halton sequence. However, Halton subsequences exhibit correlations in their higher dimensions, so using this sequence for high-dimensional integrals markedly degrades the accuracy of QMC. Consequently, several scrambling methods have been proposed in the QMC literature; however, to varying degrees, scrambled versions of the Halton sequence still suffer from the correlation problem, as manifested in their two-dimensional projections. This paper proposes a modified Halton sequence (MHalton), created with a linear digital scrambling method that finds the optimal multiplier for the Halton sequence within the linear scrambling space. To generate more uniformly distributed sequences, we have chosen strong MHalton multipliers for up to 360 dimensions. The proposed multipliers have been tested and shown to be stronger than several sets of multipliers used in other known scrambling methods. To compare the quality of our proposed scrambled MHalton sequences with others, we have performed extensive computational tests based on the L_2-discrepancy and on high-dimensional integration problems. Moreover, we have tested MHalton sequences on mortgage-backed securities (MBS), one of the most widely used applications in finance. Our proposed MHalton sequences have been tested numerically and empirically and show optimal results in QMC applications, confirming the efficiency and safety of MHalton over the scrambled sequences previously used in QMC applications.
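The linear digital scrambling underlying this family of methods is straightforward to sketch: each base-b digit of the van der Corput radical inverse is multiplied by a dimension-specific multiplier modulo b before being reflected about the radix point. The following is a minimal Python sketch of that construction; the multipliers used here are illustrative placeholders, not the optimized MHalton multipliers proposed in the paper.

```python
# Minimal sketch of a linearly scrambled (multiplicative) Halton sequence.
# The multipliers below are placeholders, NOT the paper's optimized MHalton multipliers.

def primes(n):
    """Return the first n primes (one base per Halton dimension)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def scrambled_radical_inverse(i, base, multiplier):
    """Van der Corput radical inverse of i in the given base,
    with each digit linearly scrambled: d -> (multiplier * d) mod base."""
    x, inv_base = 0.0, 1.0 / base
    while i > 0:
        digit = i % base
        x += ((multiplier * digit) % base) * inv_base
        i //= base
        inv_base /= base
    return x

def scrambled_halton(n, dim, multipliers=None):
    """Generate n points of a dim-dimensional linearly scrambled Halton sequence."""
    bases = primes(dim)
    if multipliers is None:
        multipliers = [max(1, b // 2) for b in bases]  # placeholder choice
    return [[scrambled_radical_inverse(i, b, m)
             for b, m in zip(bases, multipliers)]
            for i in range(1, n + 1)]

# Example: 5 points in 4 dimensions.
for point in scrambled_halton(5, 4):
    print(point)
```

With the identity multiplier in every dimension this reduces to the ordinary Halton sequence; the quality of the scrambled sequence therefore hinges entirely on how the per-dimension multipliers are chosen, which is what the paper optimizes.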


2017 ◽ Vol 27 (05) ◽ pp. 953-995 ◽ Author(s): Josef Dick, Robert N. Gantner, Quoc T. Le Gia, Christoph Schwab

We propose and analyze deterministic multilevel (ML) approximations for Bayesian inversion of operator equations with uncertain distributed parameters, subject to additive Gaussian measurement data. The algorithms use a ML approach based on deterministic, higher-order quasi-Monte Carlo (HoQMC) quadrature for approximating the high-dimensional expectations, which arise in the Bayesian estimators, and a Petrov–Galerkin (PG) method for approximating the solution to the underlying partial differential equation (PDE). This extends the previous single-level (SL) approach from [J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab, Higher order quasi-Monte Carlo integration for Bayesian estimation, Report 2016-13, Seminar for Applied Mathematics, ETH Zürich (in review)]. Compared to the SL approach, the present convergence analysis of the ML method requires stronger assumptions on holomorphy and regularity of the countably-parametric uncertainty-to-observation maps of the forward problem. As in the SL case and in the affine-parametric case analyzed in [J. Dick, F. Y. Kuo, Q. T. Le Gia and Ch. Schwab, Multi-level higher order QMC Galerkin discretization for affine parametric operator equations, SIAM J. Numer. Anal. 54 (2016) 2541–2568], we obtain sufficient conditions which allow us to achieve arbitrarily high, algebraic convergence rates in terms of work, which are independent of the dimension of the parameter space. The convergence rates are limited only by the spatial regularity of the forward problem, the discretization order achieved by the PG discretization, and by the sparsity of the uncertainty parametrization. We provide detailed numerical experiments for linear elliptic problems in two space dimensions, with [Formula: see text] parameters characterizing the uncertain input, confirming the theory and showing that the ML HoQMC algorithms can outperform, in terms of error versus computational work, both multilevel Monte Carlo (MLMC) methods and SL HoQMC methods, provided the parametric solution maps of the forward problems afford sufficient smoothness and sparsity of the high-dimensional parameter spaces.
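The multilevel structure of such an estimator can be sketched independently of the Bayesian and PDE specifics: the expectation is written as a telescoping sum over discretization levels, and each level difference is approximated by its own QMC rule with a level-dependent number of points. Below is a schematic Python sketch under those assumptions; `solve_level`, `qmc_points`, and `samples_per_level` are hypothetical placeholders for the user's Petrov-Galerkin solver, higher-order QMC point generator, and sample allocation, and none of these names come from the paper.

```python
# Schematic multilevel QMC estimator for E[G(u)] via the telescoping sum
#   E[G(u_L)] ~ Q_0[G(u_0)] + sum_{l=1}^{L} Q_l[G(u_l) - G(u_{l-1})],
# where u_l is the PDE solution on discretization level l and Q_l is a QMC
# rule with N_l points. All callables are user-supplied placeholders.

def ml_qmc_estimate(solve_level, qmc_points, num_levels, samples_per_level):
    """Multilevel QMC estimate of a quantity of interest.

    solve_level(level, y)   -> G(u_level) evaluated at parameter vector y
    qmc_points(level, n)    -> n QMC parameter vectors for that level
    samples_per_level[l]    -> N_l, typically decreasing as l increases
    """
    estimate = 0.0
    for level in range(num_levels + 1):
        n_l = samples_per_level[level]
        level_sum = 0.0
        for y in qmc_points(level, n_l):
            fine = solve_level(level, y)
            # Same parameter vector on the coarser level couples the difference.
            coarse = solve_level(level - 1, y) if level > 0 else 0.0
            level_sum += fine - coarse  # level-l correction term
        estimate += level_sum / n_l
    return estimate
```

The cost savings come from taking many cheap samples on coarse discretizations and few expensive samples on fine ones; the paper's analysis balances the sample numbers against the PG discretization error to obtain the dimension-independent convergence rates described above.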

