Convergence in Hölder norms with applications to Monte Carlo methods in infinite dimensions

Author(s):  
Sonja Cox ◽  
Martin Hutzenthaler ◽  
Arnulf Jentzen ◽  
Jan van Neerven ◽  
Timo Welti

Abstract We show that if a sequence of piecewise affine linear processes converges in the strong sense with a positive rate to a stochastic process that is strongly Hölder continuous in time, then this sequence converges in the strong sense even with respect to much stronger Hölder norms, with the convergence rate reduced essentially by the Hölder exponent. Our first application of this result establishes pathwise convergence rates for spectral Galerkin approximations of stochastic partial differential equations. Our second application derives strong convergence rates of multilevel Monte Carlo approximations of expectations of Banach-space-valued stochastic processes.
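The multilevel Monte Carlo estimator in the second application rests on the standard telescoping-sum idea, which can be sketched in a few lines. The following is a minimal scalar illustration, not the paper's Banach-space-valued framework: the payoff (the squared time-average of a Brownian path), the dyadic grids, and the sample sizes per level are all assumptions chosen purely for illustration.

```python
import numpy as np

def mlmc_estimate(sampler, n_per_level, rng=None):
    """Telescoping MLMC estimator: E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(rng)
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        fine, coarse = sampler(level, n, rng)
        estimate += fine.mean() if level == 0 else (fine - coarse).mean()
    return float(estimate)

def avg_squared(level, n, rng):
    """Coupled payoffs ((1/m) * sum_i W(t_i))^2 on grids with m = 2**level steps.

    The coarse payoff reuses the fine Brownian increments; this coupling is
    what makes the level corrections small and cheap to estimate.
    """
    m = 2 ** level
    dw = rng.normal(0.0, np.sqrt(1.0 / m), size=(n, m))
    fine = np.cumsum(dw, axis=1).mean(axis=1) ** 2
    if level == 0:
        return fine, fine  # no coarser grid at level 0
    dw_c = dw.reshape(n, m // 2, 2).sum(axis=2)  # merge increment pairs
    coarse = np.cumsum(dw_c, axis=1).mean(axis=1) ** 2
    return fine, coarse

# E[(integral of W over [0,1])^2] = 1/3; the level-6 grid average is close.
est = mlmc_estimate(avg_squared,
                    n_per_level=[40000, 20000, 10000, 5000, 2500, 1200, 600],
                    rng=0)
```

Note how most samples are spent on the cheap coarse level and only a few on the expensive fine level; this cost allocation is the source of the MLMC speed-up.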

2019 ◽  
Vol 374 (2) ◽  
pp. 823-871 ◽  
Author(s):  
Simon Becker ◽  
Nilanjana Datta

Abstract By extending the concept of energy-constrained diamond norms, we obtain continuity bounds on the dynamics of both closed and open quantum systems in infinite dimensions, which are stronger than previously known bounds. We extensively discuss applications of our theory to quantum speed limits, attenuator and amplifier channels, the quantum Boltzmann equation, and quantum Brownian motion. Next, we obtain explicit log-Lipschitz continuity bounds for entropies of infinite-dimensional quantum systems, and classical capacities of infinite-dimensional quantum channels under energy-constraints. These bounds are determined by the high energy spectrum of the underlying Hamiltonian and can be evaluated using Weyl’s law.


Author(s):  
Dong T.P. Nguyen ◽  
Dirk Nuyens

We introduce the multivariate decomposition finite element method (MDFEM) for elliptic PDEs with lognormal diffusion coefficients, that is, when the diffusion coefficient has the form $a=\exp(Z)$, where $Z$ is a Gaussian random field defined by an infinite series expansion $Z(\boldsymbol{y}) = \sum_{j \ge 1} y_j \, \phi_j$ with $y_j \sim \mathcal{N}(0,1)$ and a given sequence of functions $\{\phi_j\}_{j \ge 1}$. We use the MDFEM to approximate the expected value of a linear functional of the solution of the PDE, which is an infinite-dimensional integral over the parameter space. The proposed algorithm uses the multivariate decomposition method (MDM) to compute the infinite-dimensional integral by a decomposition into finite-dimensional integrals, which we resolve using quasi-Monte Carlo (QMC) methods, and for which we use the finite element method (FEM) to solve different instances of the PDE. We develop higher-order quasi-Monte Carlo rules for integration over the finite-dimensional Euclidean space with respect to the Gaussian distribution by use of a truncation strategy. By linear transformations of interlaced polynomial lattice rules from the unit cube to a multivariate box of the Euclidean space, we achieve higher-order convergence rates for functions belonging to a class of anchored Gaussian Sobolev spaces while taking into account the truncation error. These cubature rules are then used in the MDFEM algorithm.
Under appropriate conditions, the MDFEM achieves higher-order convergence rates in terms of error versus cost, i.e., to achieve an accuracy of $O(\epsilon)$ the computational cost is $O(\epsilon^{-1/\lambda-d'/\lambda}) = O(\epsilon^{-(p^* + d'/\tau)/(1-p^*)})$, where $\epsilon^{-1/\lambda}$ and $\epsilon^{-d'/\lambda}$ are, respectively, the cost of the quasi-Monte Carlo cubature and of the finite element approximations, with $d' = d \, (1+\delta)$ for some $\delta \ge 0$ and $d$ the physical dimension, and where $0 < p^* \le (2 + d'/\tau)^{-1}$ is a parameter representing the sparsity of $\{\phi_j\}_{j \ge 1}$.
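The mapping of QMC points from the unit cube to Gaussian coordinates can be illustrated with a much simpler rule than the interlaced polynomial lattice rules of the paper. The sketch below uses a first-order randomly shifted rank-1 lattice rule pushed through the inverse normal CDF; the generating vector, sample sizes, and the product-form toy integrand (mimicking a truncated lognormal series) are all illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

def shifted_lattice_gauss(f, z, n, n_shifts=8, rng=None):
    """Randomly shifted rank-1 lattice rule for E[f(Y)], Y ~ N(0, I_s).

    Lattice points {i*z/n mod 1} on the unit cube are mapped to R^s via the
    inverse normal CDF. This is a first-order rule; the higher-order rules
    in the text are interlaced polynomial lattice rules instead.
    """
    rng = np.random.default_rng(rng)
    s = len(z)
    i = np.arange(n)[:, None]
    base = (i * np.asarray(z)[None, :] / n) % 1.0
    inv = np.vectorize(NormalDist().inv_cdf)
    means = []
    for _ in range(n_shifts):
        u = (base + rng.random(s)) % 1.0          # random shift, mod 1
        u = np.clip(u, 1e-12, 1 - 1e-12)          # keep inv_cdf finite
        means.append(np.mean(f(inv(u))))
    return float(np.mean(means))

# Toy integrand in s = 4 dimensions: E[exp(sum_j 2^{-j} Y_j)]
# = exp(0.5 * sum_j 4^{-j}), a truncated lognormal-type expansion.
f = lambda y: np.exp(y @ (2.0 ** -np.arange(1, 5)))
exact = float(np.exp(0.5 * np.sum(4.0 ** -np.arange(1, 5))))
est = shifted_lattice_gauss(f, z=[1, 182, 114, 218], n=509, rng=1)
```

Averaging over independent shifts keeps the estimator unbiased and gives a practical error estimate from the spread of the shift means.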


2019 ◽  
Vol 11 (3) ◽  
pp. 815 ◽  
Author(s):  
Yijuan Liang ◽  
Xiuchuan Xu

Pricing multi-asset options has always been one of the key problems in financial engineering because of their high dimensionality and the low convergence rates of pricing algorithms. This paper studies a method to accelerate Monte Carlo (MC) simulations for pricing multi-asset options with stochastic volatilities. First, a conditional Monte Carlo (CMC) pricing formula is constructed to reduce the dimension and variance of the MC simulation. Then, an efficient martingale control variate (CV), based on the martingale representation theorem, is designed by selecting volatility parameters in the approximated option price for further variance reduction. Numerical tests illustrate the sensitivity of the CMC method to correlation coefficients and the effectiveness and robustness of our martingale CV method. The idea in this paper is also applicable to the valuation of other derivatives with stochastic volatility.
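The control-variate principle behind the martingale CV can be sketched with the classical static version: subtract a correlated quantity with known mean, scaled by a variance-optimal coefficient estimated from the samples. The toy below prices a Black-Scholes call with the discounted terminal stock price as control (a martingale with known expectation $S_0$); the model parameters are illustrative, and this is far simpler than the paper's martingale CV for stochastic volatility.

```python
import numpy as np

def cv_price(payoff, control, control_mean):
    """Control-variate Monte Carlo: mean of payoff - b*(control - E[control]),
    with b chosen to minimise the variance of the combination."""
    b = np.cov(payoff, control)[0, 1] / np.var(control)
    return float(np.mean(payoff - b * (control - control_mean)))

# Toy example: European call under Black-Scholes dynamics.
rng = np.random.default_rng(3)
s0, r, sigma, T, K, n = 100.0, 0.05, 0.2, 1.0, 100.0, 50000
z = rng.normal(size=n)
sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
disc = np.exp(-r * T)
# Control = discounted S_T, whose risk-neutral expectation is exactly s0.
price = cv_price(disc * np.maximum(sT - K, 0.0), disc * sT, s0)
```

Because the call payoff is highly correlated with $S_T$, the residual variance after subtraction is a small fraction of the plain MC variance.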


2014 ◽  
Vol 46 (04) ◽  
pp. 1059-1083 ◽  
Author(s):  
Qifan Song ◽  
Mingqi Wu ◽  
Faming Liang

In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
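The role of the gain factor sequence can be illustrated with a generic Robbins-Monro stochastic approximation recursion, which underlies SAMCMC-type algorithms. The sketch below uses gain $\gamma_t = t^{-\alpha}$ with $\alpha < 1$ (i.e., decreasing more slowly than $O(1/t)$) to find the root of $h(\theta) = \mu - \theta$; averaging several noisy draws per iteration crudely mimics a population of chains. The target, noise model, and $\alpha$ are assumptions for illustration, not the paper's setting.

```python
import numpy as np

def sa_mean(samples_per_iter, n_iter, alpha, rng):
    """Robbins-Monro recursion theta_{t+1} = theta_t + gamma_t * H_t with
    gamma_t = t^{-alpha}, where H_t is a noisy evaluation of mu - theta_t
    (here mu = 1, noise N(0,1)). More samples per iteration reduce the
    noise variance, as a population of parallel chains would."""
    theta = 0.0
    for t in range(1, n_iter + 1):
        h = np.mean(rng.normal(1.0, 1.0, size=samples_per_iter)) - theta
        theta += t ** -alpha * h
    return float(theta)

rng = np.random.default_rng(7)
single = sa_mean(1, 5000, 0.7, rng)        # one chain per iteration
population = sa_mean(10, 5000, 0.7, rng)   # ten "chains" per iteration
```

With $\alpha < 1$ the asymptotic fluctuation of $\theta_t$ scales with the per-iteration noise variance, so the population version ends closer to the root on average, which is the efficiency gain the abstract quantifies.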


2020 ◽  
Vol 30 (6) ◽  
pp. 1645-1663
Author(s):  
Ömer Deniz Akyildiz ◽  
Dan Crisan ◽  
Joaquín Míguez

Abstract We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components. The proposed scheme is a stochastic zeroth-order optimization algorithm which demands only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that they have associated probability density functions whose global maxima coincide with the global minima of the original cost function. The algorithm selects the best performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely and provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad “flat” regions which are hard to minimize using gradient-based techniques.
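The construction of measures whose density maxima sit at the cost minima can be sketched with a single annealed sequential Monte Carlo sampler: target $\pi_k(x) \propto \exp(-\beta_k C(x))$ with increasing $\beta_k$, reweight, resample, and jitter. This is a simplified stand-in for the paper's bank of samplers with subset evaluations; the annealing schedule, jitter scale, and multimodal test cost are all assumptions.

```python
import numpy as np

def smc_minimize(cost, dim, n_particles=500, n_steps=40, rng=None):
    """Anneal pi_k(x) ~ exp(-beta_k * C(x)) via importance resampling and
    a shrinking Gaussian random-walk move; the final particle cloud
    concentrates near a global minimum of C."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    beta_old = 0.0
    for k in range(1, n_steps + 1):
        beta = 10.0 * k / n_steps                     # linear annealing schedule
        logw = -(beta - beta_old) * cost(x)           # incremental weights
        w = np.exp(logw - logw.max())
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx] + rng.normal(0.0, 0.3 / np.sqrt(k), size=(n_particles, dim))
        beta_old = beta
    return x[np.argmin(cost(x))]

# Multimodal test cost with its unique global minimum at the origin.
cost = lambda x: np.sum(x**2, axis=1) + np.sin(3.0 * np.linalg.norm(x, axis=1))
xmin = smc_minimize(cost, dim=2, rng=5)
```

Because only weighted function values are used, the scheme is zeroth-order: no gradients of the cost are ever required, which is what lets it handle flat regions and multiple minima.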


2019 ◽  
Vol 622 ◽  
pp. A79 ◽  
Author(s):  
Mika Juvela

Context. Thermal dust emission carries information on physical conditions and dust properties in many astronomical sources. Because observations represent a sum of emission along the line of sight, their interpretation often requires radiative transfer (RT) modelling. Aims. We describe a new RT program, SOC, for computations of dust emission, and examine its performance in simulations of interstellar clouds with external and internal heating. Methods. SOC implements the Monte Carlo RT method as a parallel program for shared-memory computers. It can be used to study dust extinction, scattering, and emission. We tested SOC with realistic cloud models and examined the convergence and noise of the dust-temperature estimates and of the resulting surface-brightness maps. Results. SOC has been demonstrated to produce accurate estimates for dust scattering and for thermal dust emission. It performs well with both CPUs and GPUs, the latter providing a speed-up of processing time by up to an order of magnitude. In the test cases, accelerated lambda iteration (ALI) improved the convergence rates but was also sensitive to Monte Carlo noise. Run-time refinement of the hierarchical-grid models did not help in reducing the run times required for a given accuracy of solution. The use of a reference field, without ALI, works more robustly, and also allows the run time to be optimised if the number of photon packages is increased only as the iterations progress. Conclusions. The use of GPUs in RT computations should be investigated further.

