Error Bounds and Normalising Constants for Sequential Monte Carlo Samplers in High Dimensions

2014
Vol 46 (1)
pp. 279-306
Author(s):  
Alexandros Beskos ◽  
Dan O. Crisan ◽  
Ajay Jasra ◽  
Nick Whiteley

In this paper we develop a collection of results associated with the analysis of the sequential Monte Carlo (SMC) samplers algorithm, in the context of high-dimensional independent and identically distributed target probabilities. The SMC samplers algorithm can be designed to sample from a single probability distribution, using Monte Carlo to approximate expectations with respect to this law. Given a target density in d dimensions, our results are concerned with d → ∞ while the number of Monte Carlo samples, N, remains fixed. We deduce an explicit bound on the Monte Carlo error for estimates derived using the SMC sampler, and the exact asymptotic relative error of the estimate of the normalising constant associated with the target. We also establish marginal propagation-of-chaos properties of the algorithm. These results are deduced when the cost of the algorithm is O(Nd²).
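As a concrete illustration of the kind of algorithm analysed, here is a minimal annealed SMC sampler in Python. The Gaussian prior and target, schedule, and all variable names are our own choices for the sketch, not the paper's setup; the ratio of normalising constants is estimated as the product of average incremental weights.

```python
# Minimal annealed SMC sampler sketch (illustration only, not the paper's
# exact algorithm): move from an over-dispersed Gaussian "prior" to a
# standard-normal target through a tempering sequence, estimating the
# ratio of normalising constants along the way.
import numpy as np

rng = np.random.default_rng(0)
d, N, T = 10, 500, 50                 # dimension, particles, temperature steps
betas = np.linspace(0.0, 1.0, T + 1)  # annealing schedule

log_prior = lambda x: -0.5 * np.sum(x**2 / 4.0, axis=-1)   # unnormalised N(0, 4I)
log_target = lambda x: -0.5 * np.sum(x**2, axis=-1)        # unnormalised N(0, I)

x = rng.normal(0.0, 2.0, size=(N, d))  # draw from the prior
log_Z = 0.0                            # running log normalising-constant ratio
for b0, b1 in zip(betas[:-1], betas[1:]):
    # incremental importance weights between consecutive tempered targets
    logw = (b1 - b0) * (log_target(x) - log_prior(x))
    log_Z += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
    # multinomial resampling
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]
    # one random-walk Metropolis move targeting the current tempered density
    prop = x + 0.5 * rng.normal(size=(N, d))
    log_pi = lambda y: (1 - b1) * log_prior(y) + b1 * log_target(y)
    accept = np.log(rng.uniform(size=N)) < log_pi(prop) - log_pi(x)
    x[accept] = prop[accept]

# exact ratio for this toy pair of Gaussians is 2^(-d)
print("estimated log Z ratio:", log_Z, " exact:", -d * np.log(2.0))
```

On this toy problem the printed estimate should land close to the exact value −d log 2; the paper's contribution is to quantify how the error of such estimates behaves as d grows with N held fixed.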


2017
Vol 49 (1)
pp. 24-48
Author(s):  
Alexandros Beskos ◽  
Dan Crisan ◽  
Ajay Jasra ◽  
Kengo Kamatani ◽  
Yan Zhou

Abstract We consider the numerical approximation of the filtering problem in high dimensions, that is, when the hidden state lies in ℝ^d with large d. For low-dimensional problems, one of the most popular numerical procedures for consistent inference is the class of approximations termed particle filters or sequential Monte Carlo methods. However, in high dimensions, standard particle filters (e.g. the bootstrap particle filter) can have a cost that is exponential in d for the algorithm to be stable in an appropriate sense. We develop a new particle filter, called the space-time particle filter, for a specific family of state-space models in discrete time. This new class of particle filters provides consistent Monte Carlo estimates for any fixed d, as do standard particle filters. Moreover, when there is a spatial mixing element in the dimension of the state vector, the space-time particle filter will scale much better with d than the standard filter for a class of filtering problems. We illustrate this analytically for a model with a simple independent and identically distributed structure and a model with an L-Markovian structure (L ≥ 1, L independent of d) in the d-dimensional space direction, where we show that the algorithm exhibits certain stability properties as d increases, at a cost O(nNd²), where n is the time parameter and N is the number of Monte Carlo samples, both fixed and independent of d. Our theoretical results are also supported by numerical simulations on practical models of complex structures. The results suggest that it is indeed possible to tackle some high-dimensional filtering problems using the space-time particle filter that standard particle filters cannot handle.
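For contrast with the space-time construction, the following sketch implements the standard bootstrap particle filter on a toy linear-Gaussian model (the model and all names are our own, and the space-time filter itself is not implemented here). Monitoring the effective sample size makes the weight degeneracy that standard filters suffer in high dimensions directly visible.

```python
# Bootstrap particle filter baseline on a toy linear-Gaussian state-space
# model: x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise, state in R^d.
import numpy as np

rng = np.random.default_rng(1)
d, N, n = 20, 1000, 30            # state dimension, particles, time steps

# simulate hidden states and observations
x_true = np.zeros((n, d)); y = np.zeros((n, d))
for t in range(n):
    prev = x_true[t - 1] if t > 0 else np.zeros(d)
    x_true[t] = 0.9 * prev + rng.normal(size=d)
    y[t] = x_true[t] + rng.normal(size=d)

particles = rng.normal(size=(N, d))
ess_trace = []
for t in range(n):
    particles = 0.9 * particles + rng.normal(size=(N, d))      # propagate
    logw = -0.5 * np.sum((y[t] - particles) ** 2, axis=1)      # Gaussian likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    ess_trace.append(1.0 / np.sum(w ** 2))                     # effective sample size
    particles = particles[rng.choice(N, size=N, p=w)]          # resample

# in high d the one-shot weights collapse: the ESS stays a tiny fraction of N
# unless N grows rapidly with d, which is the degeneracy the paper addresses
print("mean ESS over time: %.1f of %d particles" % (np.mean(ess_trace), N))
```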


Author(s):  
James P. Sethna

Statistical mechanics explains the comprehensible behavior of microscopically complex systems by using the weird geometry of high-dimensional spaces, and by relying only on the known conserved quantity: the energy. Particle velocities and density fluctuations are determined by the geometry of spheres and cubes in dimensions with twenty-three digits. Temperature, pressure, and chemical potential are defined and derived in terms of the volume of the high-dimensional energy shell, as quantified by the entropy. In particular, temperature is the inverse of the cost of buying energy from the rest of the world, and entropy is the currency being paid. Exercises discuss the weird geometry of high dimensions, how taste and smell measure chemical potentials, equilibrium fluctuations, and classic thermodynamic relations.
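A toy numerical check of the "weird geometry" referred to above (our illustration, not one of the chapter's exercises): because the volume of a d-dimensional ball scales as R^d, essentially all of its volume sits in a thin shell at the surface once d is large.

```python
# Volume of a d-ball scales as R^d, so the fraction of volume inside
# radius 0.99 R is 0.99**d; the remainder lies in the outer 1% shell.
for d in (3, 30, 300, 3000):
    outer = 1.0 - 0.99 ** d
    print(f"d = {d:4d}: {outer:.4f} of the ball's volume is in the outer 1% shell")
```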


2019
Vol 67 (16)
pp. 4177-4188
Author(s):  
Christian A. Naesseth ◽  
Fredrik Lindsten ◽  
Thomas B. Schön

2020
Vol 30 (6)
pp. 1645-1663
Author(s):  
Ömer Deniz Akyildiz ◽  
Dan Crisan ◽  
Joaquín Míguez

Abstract We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components. The proposed scheme is a stochastic zeroth-order optimization algorithm which demands only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that they have associated probability density functions whose global maxima coincide with the global minima of the original cost function. The algorithm selects the best performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely and provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad “flat” regions which are hard to minimize using gradient-based techniques.
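A schematic single-sampler version of this idea can be sketched as follows (our simplification: one sampler evaluating the full cost, rather than the paper's bank of samplers working on subsets of the cost's components; the test cost and all names are hypothetical). The cost function is turned into a sequence of increasingly concentrated densities proportional to exp(−β·cost), and resampling pushes particles towards the global minima.

```python
# Sampling-based zeroth-order minimisation sketch: anneal exp(-beta * cost)
# so its mass concentrates at the minima, and follow it with particles.
import numpy as np

rng = np.random.default_rng(2)

def cost(x):                                   # multimodal toy cost, our choice
    return np.sum(x**2, axis=-1) + 3.0 * np.sum(np.sin(3.0 * x) ** 2, axis=-1)

N, dim, T = 500, 5, 100
x = rng.uniform(-4, 4, size=(N, dim))          # initial particle cloud
beta = 0.0
for t in range(T):
    beta += 0.1                                # sharpen the density each step
    logw = -0.1 * cost(x)                      # incremental weight for the beta increase
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]          # resample towards low-cost regions
    x += rng.normal(scale=1.0 / np.sqrt(beta), size=x.shape)  # jitter, shrinking with beta

best = x[np.argmin(cost(x))]
print("approximate minimiser:", np.round(best, 3), " cost:", cost(best))
```

Note that only evaluations of the cost are used, never its gradient, which is what lets such schemes handle the flat regions and multiple minima mentioned above.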


2020
Vol 35 (24)
pp. 1950142
Author(s):  
Allen Caldwell ◽  
Philipp Eller ◽  
Vasyl Hafych ◽  
Rafael Schick ◽  
Oliver Schulz ◽  
...  

Numerically estimating the integral of functions in high-dimensional spaces is a nontrivial task. An oft-encountered example is the calculation of the marginal likelihood in Bayesian inference, in a context where a sampling algorithm such as Markov chain Monte Carlo provides samples of the function. We present an Adaptive Harmonic Mean Integration (AHMI) algorithm. Given samples drawn according to a probability distribution proportional to the function, the algorithm estimates the integral of the function and the uncertainty of the estimate by applying a harmonic mean estimator to adaptively chosen regions of the parameter space. We describe the algorithm and its mathematical properties, and report results obtained using it on multiple test cases.
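The core harmonic-mean identity that AHMI applies region by region can be sketched in a few lines. This is our simplified single-region version with directly drawn Gaussian samples standing in for an MCMC run; AHMI itself selects many regions adaptively and combines the per-region estimates.

```python
# Harmonic-mean integration sketch: for samples x ~ f/I and a region A of
# known volume, E[1_A(x) / f(x)] = vol(A) / I, so I = vol(A) / mean(1_A / f).
import numpy as np

rng = np.random.default_rng(3)

f = lambda x: np.exp(-0.5 * np.sum(x**2, axis=-1))   # unnormalised integrand
dim = 2
true_I = (2 * np.pi) ** (dim / 2)                    # exact integral of f

# samples from the distribution proportional to f (drawn directly here;
# in practice they would come from an MCMC sampler)
x = rng.normal(size=(100_000, dim))

# hyper-rectangle region A = [-1, 1]^dim with known volume
in_A = np.all(np.abs(x) <= 1.0, axis=1)
vol_A = 2.0 ** dim

I_hat = vol_A / np.mean(in_A / f(x))
print(f"harmonic-mean estimate: {I_hat:.4f}   exact: {true_I:.4f}")
```

Restricting the estimator to a well-chosen region is what tames the notorious variance of the plain harmonic mean estimator, since 1/f is only evaluated where f is not small.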


2020
Author(s):  
Sangeetika Ruchi ◽  
Svetlana Dubinkina ◽  
Jana de Wiljes

Abstract. Identification of unknown parameters on the basis of partial and noisy data is a challenging task, in particular in high-dimensional and nonlinear settings. Gaussian approximations to the problem, such as ensemble Kalman inversion, tend to be robust and computationally cheap, and often produce astonishingly accurate estimates despite the inherently wrong underlying assumptions. Yet there is a lot of room for improvement, specifically regarding the description of the associated statistics. The tempered ensemble transform particle filter is an adaptive sequential Monte Carlo method in which resampling is based on an optimal transport mapping. Unlike ensemble Kalman inversion, it does not require any assumptions regarding the posterior distribution and hence has been shown to provide promising results for nonlinear non-Gaussian inverse problems. However, the improved accuracy comes at the price of much higher computational complexity, and the method is not as robust as ensemble Kalman inversion in high-dimensional problems. In this work, we add an entropy-inspired regularisation factor to the underlying optimal transport problem that allows the high computational cost to be reduced considerably via Sinkhorn iterations. Further, the robustness of the method is increased by adding an ensemble Kalman inversion proposal step before each update of the samples, which is also referred to as a hybrid approach. The promising performance of the introduced method is numerically verified by testing it on a steady-state single-phase Darcy flow model with two different permeability configurations. The results are compared with the output of ensemble Kalman inversion, and results from Markov chain Monte Carlo methods are computed as a benchmark.
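The entropy regularisation referred to here makes the transport step computable by simple alternating matrix scalings. Below is a minimal Sinkhorn-based transport-resampling sketch in the spirit of the ensemble transform particle filter, with a toy ensemble and our own variable names rather than the authors' code.

```python
# Entropy-regularised optimal-transport resampling sketch: couple a weighted
# ensemble to a uniformly weighted one via Sinkhorn iterations, then apply
# the deterministic ensemble-transform update.
import numpy as np

rng = np.random.default_rng(4)
N = 50
X = rng.normal(size=(N, 2))                        # forecast ensemble
logw = -0.5 * np.sum((X - 1.0) ** 2, axis=1)       # toy importance log-weights
w = np.exp(logw - logw.max()); w /= w.sum()

C = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise sq. distances
eps = 0.1                                          # regularisation strength
K = np.exp(-C / eps)                               # Gibbs kernel of regularised OT

# Sinkhorn iterations: scale rows to the weights w, columns to uniform 1/N
a, b = np.ones(N), np.ones(N)
for _ in range(200):
    a = w / (K @ b)
    b = (1.0 / N) / (K.T @ a)
P = a[:, None] * K * b[None, :]                    # coupling: rows sum ~w, cols to 1/N

# deterministic ETPF-style update: each new member is a convex combination
# of old members with coefficients N * P[:, j], preserving the weighted mean
X_new = N * P.T @ X
print("weighted mean:", w @ X, " transformed ensemble mean:", X_new.mean(axis=0))
```

Each Sinkhorn sweep costs O(N²) matrix-vector products, which is the saving over solving the exact linear-programming transport problem; smaller eps approaches the unregularised plan at the cost of more iterations.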

