Automatic Tolerance Selection for Approximate Bayesian Computation

Author(s):  
George Karabatsos

Abstract Approximate Bayesian Computation (ABC) can provide inferences from the (approximate) posterior distribution based on intractable likelihoods. The quality of ABC inferences relies on the choice of tolerance for the distance between the observed data summary statistics and the pseudo-data summary statistics simulated from the likelihood, used within an algorithm that samples from the approximate posterior. However, the ABC literature does not provide an automatic method to select the best tolerance level for the dataset at hand, and in ABC practice finding the best tolerance level can be time consuming. This note introduces a fast automatic estimator of the tolerance, based on the parametric bootstrap. Once the tolerance estimate is calculated, it can be input into any suitable importance sampling or MCMC algorithm to sample from the target approximate posterior distribution. The tolerance estimator is illustrated through ABC analyses of simulated and real datasets involving several intractable likelihood models, including the analysis of a real 23,000-node network dataset involving stochastic search model selection.
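
The abstract only states that the estimator is based on the parametric bootstrap, so the following is a minimal sketch of one plausible reading, not the paper's actual estimator: simulate bootstrap pseudo-datasets from the model at a pilot parameter estimate, and take a quantile of the resulting summary-statistic distances as the tolerance. The function names, the Euclidean distance, the pilot estimate `theta_pilot`, and the median quantile are all illustrative assumptions.

```python
# Hedged sketch: parametric-bootstrap tolerance estimate for ABC.
import numpy as np

def bootstrap_tolerance(observed, simulate, summarize, theta_pilot,
                        n_boot=1000, quantile=0.5, rng=None):
    """observed    : observed dataset
       simulate    : simulate(theta, rng) -> pseudo-dataset from the likelihood
       summarize   : summarize(data) -> vector of summary statistics
       theta_pilot : pilot parameter estimate driving the bootstrap (assumed)
    """
    rng = np.random.default_rng(rng)
    s_obs = np.atleast_1d(summarize(observed))
    dists = np.empty(n_boot)
    for b in range(n_boot):
        pseudo = simulate(theta_pilot, rng)        # bootstrap replicate
        s_b = np.atleast_1d(summarize(pseudo))
        dists[b] = np.linalg.norm(s_b - s_obs)     # distance in summary space
    return np.quantile(dists, quantile)            # tolerance estimate
```

As the abstract notes, the resulting estimate can then be passed as the tolerance to any suitable importance sampling or ABC-MCMC algorithm.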


Biometrika ◽  
2020 ◽  
Author(s):  
Grégoire Clarté ◽  
Christian P Robert ◽  
Robin J Ryder ◽  
Julien Stoehr

Abstract Approximate Bayesian computation methods are useful for generative models with intractable likelihoods. These methods are, however, sensitive to the dimension of the parameter space, requiring exponentially increasing resources as this dimension grows. To tackle this difficulty, we explore a Gibbs version of the approximate Bayesian computation approach that runs component-wise approximate Bayesian computation steps aimed at the corresponding conditional posterior distributions, based on summary statistics of reduced dimension. While lacking the standard justifications for the Gibbs sampler, the resulting Markov chain is shown to converge in distribution under some partial independence conditions. The associated stationary distribution can further be shown to be close to the true posterior distribution, and some hierarchical versions of the proposed mechanism enjoy a closed-form limiting distribution. Experiments also demonstrate the gain in efficiency brought by the Gibbs version over the standard solution.
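
To make the component-wise idea concrete, here is a hedged sketch of one ABC-Gibbs sweep: each coordinate is updated by its own low-dimensional ABC step that holds the other coordinates fixed. The per-coordinate summaries, the proposal mechanism, the tolerances, and the retry cap are all placeholder assumptions, not the algorithm as specified in the paper.

```python
# Hedged sketch: component-wise ABC steps inside a Gibbs-style sweep.
import numpy as np

def abc_gibbs(s_obs, propose, simulate, summaries, eps, theta0,
              n_iter=1000, max_tries=1000, rng=None):
    """s_obs[j]     : observed low-dimensional summary for coordinate j
       propose      : propose(j, theta, rng) -> candidate value for theta[j]
       simulate     : simulate(theta, rng) -> pseudo-dataset
       summaries[j] : maps a dataset to the j-th summary statistic
       eps[j]       : ABC tolerance for coordinate j
    """
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        for j in range(theta.size):                    # one Gibbs sweep
            for _ in range(max_tries):                 # ABC step for theta[j]
                cand = theta.copy()
                cand[j] = propose(j, theta, rng)
                s = np.atleast_1d(summaries[j](simulate(cand, rng)))
                if np.linalg.norm(s - s_obs[j]) <= eps[j]:
                    theta = cand                       # accept; else keep old value
                    break
        chain[t] = theta
    return chain
```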



2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Tom Burr ◽  
Alexei Skurikhin

Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the "go-to" option when the data and/or parameter dimension is large, because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the requirement that the user choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics the posterior distribution of model parameters is closely approximated, while for other choices it is not. A strategy to choose effective summary statistics is suggested for cases where the stochastic computer model can be run at many trial parameter settings, as in the example.
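
A basic rejection-ABC sampler makes the dependence on the summary choice explicit: the summary function is simply an argument, so two choices can be compared side by side. The Gaussian location toy model below is an illustrative assumption (the paper's example is a mitochondrial DNA model); the sample mean is sufficient for the location parameter, while the sample maximum is not, so the two approximate posteriors can differ noticeably.

```python
# Hedged sketch: rejection ABC, parameterized by the summary function.
import numpy as np

def abc_rejection(observed, simulate, summarize, prior_sample,
                  eps, n_accept=200, rng=None):
    rng = np.random.default_rng(rng)
    s_obs = np.atleast_1d(summarize(observed))
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)                       # draw from the prior
        s = np.atleast_1d(summarize(simulate(theta, rng)))
        if np.linalg.norm(s - s_obs) <= eps:            # keep close matches
            accepted.append(theta)
    return np.array(accepted)

# Toy comparison (assumed example, not the paper's): N(mu, 1) data,
# uniform prior on mu, same tolerance, two different summaries.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
sim = lambda mu, g: g.normal(mu, 1.0, size=100)
prior = lambda g: g.uniform(-10, 10)
post_mean = abc_rejection(data, sim, np.mean, prior, eps=0.1)  # sufficient summary
post_max = abc_rejection(data, sim, np.max, prior, eps=0.1)    # non-sufficient summary
```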



Author(s):  
Cecilia Viscardi ◽  
Michele Boreale ◽  
Fabio Corradi

Abstract We consider the problem of sample degeneracy in Approximate Bayesian Computation. It arises when proposed values of the parameters, once given as input to the generative model, rarely lead to simulations resembling the observed data and are hence discarded. Such "poor" parameter proposals do not contribute at all to the representation of the parameter's posterior distribution. This leads to a very large number of required simulations and/or a waste of computational resources, as well as to distortions in the computed posterior distribution. To mitigate this problem, we propose an algorithm, referred to as the Large Deviations Weighted Approximate Bayesian Computation algorithm, where, via Sanov's Theorem, strictly positive weights are computed for all proposed parameters, thus avoiding the rejection step altogether. In order to derive a computable asymptotic approximation from Sanov's result, we adopt the information theoretic "method of types" formulation of the method of Large Deviations, thus restricting our attention to models for i.i.d. discrete random variables. Finally, we experimentally evaluate our method through a proof-of-concept implementation.
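
For intuition: by Sanov's theorem, the probability that n i.i.d. draws from a discrete model q produce an empirical distribution (type) p decays roughly as exp(-n KL(p || q)), so every proposal can receive a strictly positive weight instead of being accepted or rejected. The sketch below uses that exponent directly, estimating the model type by simulation; this shortcut and all function names are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: Sanov-style weights for proposals on a discrete alphabet.
import numpy as np

def empirical_type(data, k):
    """Empirical distribution (type) over the alphabet {0, ..., k-1}."""
    return np.bincount(np.asarray(data), minlength=k) / len(data)

def kl(p, q, tiny=1e-12):
    """KL divergence D(p || q), guarding against zeros in q."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], tiny))))

def ld_weights(obs, simulate, thetas, k, n_sim=2000, rng=None):
    """simulate(theta, n, rng) -> n draws from the model at theta (assumed API)."""
    rng = np.random.default_rng(rng)
    n = len(obs)
    p_obs = empirical_type(obs, k)
    # Log-weight of each proposal is the (negative) Sanov exponent.
    logw = np.array([-n * kl(p_obs, empirical_type(simulate(th, n_sim, rng), k))
                     for th in thetas])
    logw -= logw.max()                  # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()                  # normalized importance weights
```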



Author(s):  
Yang Zeng

Abstract Due to its flexibility and feasibility in addressing ill-posed problems, the Bayesian method has been widely used in inverse heat conduction problems (IHCPs). However, in real science and engineering IHCPs, the likelihood function of the Bayesian method is commonly computationally expensive or analytically unavailable. In this study, in order to circumvent this intractable likelihood function, approximate Bayesian computation (ABC) is extended to IHCPs. In ABC, the high-dimensional observations in the intractable likelihood function are replaced by their low-dimensional summary statistics. Thus, the performance of ABC depends on the selection of summary statistics. In this study, a machine learning-based ABC (ML-ABC) is proposed to address the complicated selection of summary statistics. The Auto-Encoder (AE) is a powerful machine learning (ML) framework that can compress the observations into very low-dimensional summary statistics with little information loss. In addition, in order to accelerate the calculation of the proposed framework, another neural network (NN) is utilized to construct the mapping between the unknowns and the summary statistics. With this mapping, given arbitrary unknowns, the summary statistics can be obtained efficiently without solving the time-consuming forward problem numerically. Furthermore, an adaptive nested sampling method (ANSM) is developed to further improve the efficiency of sampling. The performance of the proposed method is demonstrated with two IHCP cases.
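
The surrogate idea can be sketched in isolation: once summary statistics have been computed (in the paper, by an Auto-Encoder) for a training set of forward solves, a small network learns the map from unknowns to summaries so that ABC distances no longer require the expensive forward solver. The tiny one-hidden-layer MLP, squared loss, and plain gradient descent below are simplifying assumptions standing in for the paper's NN.

```python
# Hedged sketch: a small MLP surrogate mapping unknowns -> summary statistics.
import numpy as np

def train_surrogate(thetas, summaries, hidden=32, lr=1e-2, epochs=2000, rng=None):
    """thetas    : (n, d_in) training unknowns (e.g., drawn from the prior)
       summaries : (n, d_out) summary statistics of the matching forward solves
    """
    rng = np.random.default_rng(rng)
    d_in, d_out = thetas.shape[1], summaries.shape[1]
    W1 = rng.normal(0, 0.1, (d_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d_out)); b2 = np.zeros(d_out)
    n = len(thetas)
    for _ in range(epochs):
        h = np.tanh(thetas @ W1 + b1)        # hidden layer
        pred = h @ W2 + b2                   # predicted summaries
        err = (pred - summaries) / n         # gradient of mean squared loss
        gW2 = h.T @ err; gb2 = err.sum(0)
        gh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
        gW1 = thetas.T @ gh; gb1 = gh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    # Return a cheap predictor usable inside an ABC sampler.
    return lambda t: np.tanh(np.atleast_2d(t) @ W1 + b1) @ W2 + b2
```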





Author(s):  
Hsuan Jung ◽  
Paul Marjoram

In this paper, we develop a Genetic Algorithm that can address the fundamental problem of how one should weight the summary statistics included in an approximate Bayesian computation analysis built around an accept/reject algorithm, and how one might choose the tolerance for that analysis. We then demonstrate that using weighted statistics and a well-chosen tolerance in such an approximate Bayesian computation approach can result in improved performance compared to unweighted analyses, using one example drawn purely from statistics and two drawn from the estimation of population genetics parameters.
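
As a concrete illustration of the search the paper describes, here is a hedged sketch of a generic genetic algorithm over summary-statistic weight vectors; the fitness function (e.g., the quality of the resulting weighted-distance ABC posterior on calibration runs) is user-supplied, and the selection, crossover, and mutation scheme is a textbook default rather than the paper's specific design. The tolerance could be appended to the chromosome as one more gene and searched jointly.

```python
# Hedged sketch: genetic algorithm over non-negative summary-statistic weights.
import numpy as np

def ga_weights(fitness, n_stats, pop_size=40, n_gen=50, sigma=0.1, rng=None):
    """fitness(w) -> score to maximize for a weight vector w (assumed API)."""
    rng = np.random.default_rng(rng)
    pop = rng.uniform(0, 1, (pop_size, n_stats))            # initial population
    for _ in range(n_gen):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep fittest half
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_stats) < 0.5                # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, sigma, n_stats)
            kids.append(np.clip(child, 0, None))            # weights stay >= 0
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(w) for w in pop])]        # best weight vector

# The returned weights would define the weighted ABC distance
# d(s, s_obs) = sqrt(sum_i w_i * (s_i - s_obs_i) ** 2).
```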



2016 ◽  
Vol 43 (12) ◽  
pp. 2191-2202 ◽  
Author(s):  
Muhammad Faisal ◽  
Andreas Futschik ◽  
Ijaz Hussain ◽  
Mitwali Abd-el.Moemen

