Sub-sample swapping for sequential Monte Carlo approximation of high-dimensional densities in the context of complex object tracking

2013 ◽  
Vol 54 (7) ◽  
pp. 934-953 ◽  
Author(s):  
Séverine Dubuisson ◽  
Christophe Gonzales ◽  
Xuan Son Nguyen

2019 ◽  
Vol 67 (16) ◽  
pp. 4177-4188 ◽  
Author(s):  
Christian A. Naesseth ◽  
Fredrik Lindsten ◽  
Thomas B. Schon

2014 ◽  
Vol 25 ◽  
pp. 1-16 ◽  
Author(s):  
Lyudmila Mihaylova ◽  
Avishy Y. Carmi ◽  
François Septier ◽  
Amadou Gning ◽  
Sze Kim Pang ◽  
...  

2020 ◽  
Author(s):  
Sangeetika Ruchi ◽  
Svetlana Dubinkina ◽  
Jana de Wiljes

Abstract. Identification of unknown parameters on the basis of partial and noisy data is a challenging task, in particular in high-dimensional and nonlinear settings. Gaussian approximations to the problem, such as ensemble Kalman inversion, tend to be robust and computationally cheap and often produce astonishingly accurate estimates despite the inherently wrong underlying assumptions. Yet there is a lot of room for improvement, specifically regarding the description of the associated statistics. The tempered ensemble transform particle filter is an adaptive sequential Monte Carlo method, where resampling is based on optimal transport mapping. Unlike ensemble Kalman inversion, it does not require any assumptions regarding the posterior distribution and hence has been shown to provide promising results for non-linear non-Gaussian inverse problems. However, the improved accuracy comes at the price of much higher computational complexity, and the method is not as robust as ensemble Kalman inversion in high-dimensional problems. In this work, we add an entropy-inspired regularisation factor to the underlying optimal transport problem that allows the high computational cost to be considerably reduced via Sinkhorn iterations. Further, the robustness of the method is increased via an ensemble Kalman inversion proposal step before each update of the samples, which is also referred to as a hybrid approach. The promising performance of the introduced method is numerically verified by testing it on a steady-state single-phase Darcy flow model with two different permeability configurations. The results are compared to the output of ensemble Kalman inversion, and Markov chain Monte Carlo results are computed as a benchmark.
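The Sinkhorn iterations mentioned in the abstract can be illustrated with a minimal sketch of entropy-regularised optimal transport. Everything here (the function name `sinkhorn_plan`, the toy cost matrix, and all parameter values) is illustrative and not taken from the paper:

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.5, n_iter=200):
    """Entropy-regularised optimal transport via Sinkhorn iterations.

    a, b : source/target probability weights (each sums to 1)
    C    : pairwise cost matrix, shape (len(a), len(b))
    eps  : entropic regularisation strength (smaller -> closer to exact OT,
           but slower convergence and worse numerical conditioning)
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns to match target weights
        u = a / (K @ v)                  # scale rows to match source weights
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

# Toy check in the resampling spirit: map uniform prior weights to
# (unequal) posterior-style weights over the same particle locations.
rng = np.random.default_rng(0)
x = rng.normal(size=5)                   # particle positions
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
a = np.full(5, 0.2)                      # uniform weights
b = rng.random(5); b /= b.sum()          # importance weights
P = sinkhorn_plan(a, b, C)
```

The marginals of the returned plan `P` recover `a` and `b`; in an ensemble transform filter, a plan of this kind is what replaces multinomial resampling.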


2021 ◽  
Vol 28 (1) ◽  
pp. 23-41
Author(s):  
Sangeetika Ruchi ◽  
Svetlana Dubinkina ◽  
Jana de Wiljes

Abstract. Identification of unknown parameters on the basis of partial and noisy data is a challenging task, in particular in high-dimensional and non-linear settings. Gaussian approximations to the problem, such as ensemble Kalman inversion, tend to be robust and computationally cheap and often produce astonishingly accurate estimations despite the simplifying underlying assumptions. Yet there is a lot of room for improvement, specifically regarding a correct approximation of a non-Gaussian posterior distribution. The tempered ensemble transform particle filter is an adaptive sequential Monte Carlo (SMC) method, whereby resampling is based on optimal transport mapping. Unlike ensemble Kalman inversion, it does not require any assumptions regarding the posterior distribution and hence has been shown to provide promising results for non-linear non-Gaussian inverse problems. However, the improved accuracy comes at the price of much higher computational complexity, and the method is not as robust as ensemble Kalman inversion in high-dimensional problems. In this work, we add an entropy-inspired regularisation factor to the underlying optimal transport problem that allows the high computational cost to be considerably reduced via Sinkhorn iterations. Further, the robustness of the method is increased via an ensemble Kalman inversion proposal step before each update of the samples, which is also referred to as a hybrid approach. The promising performance of the introduced method is numerically verified by testing it on a steady-state single-phase Darcy flow model with two different permeability configurations. The results are compared to the output of ensemble Kalman inversion, and Markov chain Monte Carlo results are computed as a benchmark.
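The ensemble Kalman inversion proposal step referred to above can be sketched generically. This is a standard perturbed-observation EKI update applied to a toy linear inverse problem, not the authors' implementation; all names, the forward map, and the parameter values are illustrative:

```python
import numpy as np

def eki_update(U, G, y, Gamma, rng):
    """One ensemble Kalman inversion step (perturbed-observation form).

    U     : parameter ensemble, shape (n_particles, n_params)
    G     : forward operator mapping parameters to observations
    y     : observed data, shape (n_obs,)
    Gamma : observation-noise covariance, shape (n_obs, n_obs)
    """
    GU = np.array([G(u) for u in U])          # forward evaluations
    dU = U - U.mean(axis=0)
    dG = GU - GU.mean(axis=0)
    n = len(U)
    Cug = dU.T @ dG / (n - 1)                 # parameter/data cross-covariance
    Cgg = dG.T @ dG / (n - 1)                 # data covariance
    K = Cug @ np.linalg.inv(Cgg + Gamma)      # Kalman-type gain
    Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=n)
    return U + (Y - GU) @ K.T                 # shift each particle toward the data

# Toy linear problem: recover u_true from y = A u + noise.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.5], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
Gamma = 0.01 * np.eye(2)
y = A @ u_true + rng.multivariate_normal(np.zeros(2), Gamma)
U = rng.normal(size=(100, 2))                 # prior ensemble
for _ in range(10):
    U = eki_update(U, lambda u: A @ u, y, Gamma, rng)
```

In the hybrid scheme described by the abstract, an update of this kind serves as a cheap proposal that moves the ensemble toward the data before the optimal-transport-based particle update corrects the non-Gaussian statistics.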


2014 ◽  
Vol 46 (1) ◽  
pp. 279-306 ◽  
Author(s):  
Alexandros Beskos ◽  
Dan O. Crisan ◽  
Ajay Jasra ◽  
Nick Whiteley

In this paper we develop a collection of results associated with the analysis of the sequential Monte Carlo (SMC) samplers algorithm, in the context of high-dimensional independent and identically distributed target probabilities. The SMC samplers algorithm can be designed to sample from a single probability distribution, using Monte Carlo to approximate expectations with respect to this law. Given a target density in d dimensions, our results are concerned with d → ∞, while the number of Monte Carlo samples, N, remains fixed. We deduce an explicit bound on the Monte Carlo error for estimates derived using the SMC sampler and the exact asymptotic relative L²-error of the estimate of the normalising constant associated to the target. We also establish marginal propagation of chaos properties of the algorithm. These results are deduced when the cost of the algorithm is O(Nd²).
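For readers unfamiliar with the SMC samplers algorithm analysed in this paper, the basic scheme (temper from a tractable distribution to the target, reweighting, resampling, and rejuvenating at each temperature) can be sketched as follows. This toy implementation, its function names, and all tuning values are illustrative; the paper's results concern the general algorithm, not this sketch:

```python
import numpy as np

def smc_sampler(log_target, d, N=500, n_temps=21, seed=0):
    """Minimal SMC sampler: temper from a N(0, I_d) prior to the target.

    Bridges pi_beta ∝ prior^(1-beta) * target^beta for beta in [0, 1],
    reweighting, multinomially resampling, and applying one random-walk
    Metropolis move per temperature to rejuvenate the particles.
    Returns the final particle set and a log-normalising-constant estimate.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_temps)
    log_prior = lambda x: -0.5 * np.sum(x**2, axis=-1)
    X = rng.normal(size=(N, d))                    # sample the prior
    logZ = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw = (b1 - b0) * (log_target(X) - log_prior(X))  # incremental weights
        logZ += np.log(np.mean(np.exp(logw)))
        w = np.exp(logw - logw.max()); w /= w.sum()
        X = X[rng.choice(N, size=N, p=w)]          # multinomial resampling
        # One random-walk Metropolis step targeting pi_{b1}
        log_pi = lambda x: (1 - b1) * log_prior(x) + b1 * log_target(x)
        prop = X + 0.5 * rng.normal(size=X.shape)
        accept = np.log(rng.random(N)) < log_pi(prop) - log_pi(X)
        X[accept] = prop[accept]
    return X, logZ

# Target: an (unnormalised) N(2, I_d) density in d = 5 dimensions.
d = 5
X, logZ = smc_sampler(lambda x: -0.5 * np.sum((x - 2.0)**2, axis=-1), d)
```

The product of the per-temperature mean weights (accumulated here in `logZ`) is the normalising-constant estimate whose asymptotic relative error the paper characterises as d → ∞ with N fixed.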



