Bayesian Optimal Design
Recently Published Documents


TOTAL DOCUMENTS: 33 (five years: 2)
H-INDEX: 9 (five years: 0)

2021, pp. 1-5
Author(s): Xianliang Gong, Yulin Pan

Abstract
The authors of the discussed paper simplified the information-based acquisition function for estimating a statistical expectation and developed analytical computations for each quantity involved under a uniform input distribution. In this discussion, we show that (1) the last three terms of the acquisition always sum to zero, leaving a concise form with a much more intuitive interpretation; and (2) the analytical computation of the acquisition can be generalized to an arbitrary input distribution, greatly broadening the applicability of the developed framework.
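The generalization in point (2) rests on the fact that the expectation E_x[f(x)] of a Gaussian-process surrogate is a linear functional of the GP, so its mean and variance can be estimated under any input distribution by plain Monte Carlo. Below is a minimal Python sketch of this idea; the function names, the RBF kernel, and the standard-normal input density are illustrative assumptions, not the discussion's actual derivation.

```python
import numpy as np

def gp_expectation_moments(mean_fn, kernel_fn, input_sampler, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the mean and variance of E_x[f(x)] when
    f ~ GP(mean_fn, kernel_fn) and x follows an arbitrary input density.

    Since E_x[f(x)] is a linear functional of the GP, it is Gaussian with
        mean     = E_x[m(x)]
        variance = E_{x,x'}[k(x, x')]  for x, x' i.i.d. from the input density.
    """
    rng = np.random.default_rng(seed)
    x = input_sampler(n_samples, rng)    # shape (n, d)
    x2 = input_sampler(n_samples, rng)   # independent copy for the variance term
    mean = mean_fn(x).mean()
    var = kernel_fn(x, x2).mean()        # averages k over independent input pairs
    return mean, var

# Hypothetical 1-D example: sine mean, RBF kernel, standard-normal inputs.
m = lambda x: np.sin(x).ravel()
k = lambda a, b: np.exp(-0.5 * (a - b) ** 2).ravel()
sampler = lambda n, rng: rng.standard_normal((n, 1))
print(gp_expectation_moments(m, k, sampler))
```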





Entropy, 2020, Vol. 22 (2), pp. 258
Author(s): Zhihang Xu, Qifeng Liao

Optimal experimental design (OED) is of great significance for efficient Bayesian inversion. A popular class of OED methods is based on maximizing the expected information gain (EIG), which typically involves expensive likelihood evaluations. To reduce the computational cost, in this work a novel double-loop Bayesian Monte Carlo (DLBMC) method is developed to compute the EIG efficiently, and a Bayesian optimization (BO) strategy is proposed to obtain its maximizer using only a small number of samples. For Bayesian Monte Carlo posed on uniform and normal distributions, our analysis provides explicit expressions for the mean estimates and bounds on their variances. The accuracy and efficiency of our DLBMC- and BO-based optimal design are validated and demonstrated with numerical experiments.
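For orientation, here is a minimal sketch of the plain double-loop (nested) Monte Carlo estimator of the EIG that methods like DLBMC aim to accelerate, applied to a toy linear-Gaussian model; the paper's Bayesian Monte Carlo estimator and BO strategy are not reproduced here, and all names in the snippet are illustrative.

```python
import numpy as np

def nested_mc_eig(design, prior_sampler, simulate, log_lik, n_outer=500, n_inner=500, seed=0):
    """Double-loop (nested) Monte Carlo estimate of the expected information gain
        EIG(d) = E_{theta, y | d}[ log p(y | theta, d) - log p(y | d) ],
    where the evidence p(y | d) is approximated by an inner average over fresh prior draws.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_outer):
        theta = prior_sampler(rng)
        y = simulate(theta, design, rng)              # one synthetic observation
        inner = np.array([log_lik(y, prior_sampler(rng), design)
                          for _ in range(n_inner)])
        log_evidence = np.logaddexp.reduce(inner) - np.log(n_inner)
        total += log_lik(y, theta, design) - log_evidence
    return total / n_outer

# Toy model: y = d * theta + noise, theta ~ N(0, 1), noise ~ N(0, 0.1^2).
prior = lambda rng: rng.standard_normal()
sim = lambda th, d, rng: d * th + 0.1 * rng.standard_normal()
ll = lambda y, th, d: -0.5 * ((y - d * th) / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))
print(nested_mc_eig(1.0, prior, sim, ll))
```

The inner loop is what makes the plain estimator expensive: every outer sample requires n_inner fresh likelihood evaluations, which motivates surrogate-based schemes such as DLBMC.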





Biometrika, 2019, Vol. 106 (3), pp. 665-682
Author(s): K. Alhorn, K. Schorning, H. Dette

Summary
We consider the problem of designing experiments for estimating a target parameter in regression analysis when there is uncertainty about the parametric form of the regression function. A new optimality criterion is proposed that chooses the experimental design to minimize the asymptotic mean squared error of the frequentist model averaging estimate. Necessary conditions for the optimal solution of the locally and Bayesian optimal design problems are established. The results are illustrated in several examples, and it is demonstrated that Bayesian optimal designs can reduce the mean squared error of the model averaging estimator by up to 45%.
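To make the criterion concrete, the sketch below estimates by simulation the mean squared error of a model-averaged estimate of a target quantity under two candidate designs; the polynomial candidate models, the fixed weights, and the designs are illustrative assumptions, not the paper's asymptotic criterion.

```python
import numpy as np

def mse_of_model_average(design, weights, x_star=1.5, n_rep=2000, sigma=0.5, seed=0):
    """Monte Carlo MSE of a model-averaged estimate of the mean response at x_star.

    Candidate models: degree-1 and degree-2 polynomials, fitted by least squares
    and combined with fixed weights. The true response is quadratic, so the
    linear model contributes bias that the design trades off against variance.
    """
    rng = np.random.default_rng(seed)
    true = lambda x: 1.0 + 0.5 * x - 0.3 * x ** 2
    target = true(x_star)
    errs = np.empty(n_rep)
    for r in range(n_rep):
        y = true(design) + sigma * rng.standard_normal(design.size)
        est = 0.0
        for w, deg in zip(weights, (1, 2)):
            coef = np.polyfit(design, y, deg)           # fit one candidate model
            est += w * np.polyval(coef, x_star)         # weighted prediction
        errs[r] = est - target
    return np.mean(errs ** 2)

# Compare an equispaced design with a replicated three-point design on [0, 2]:
for d in (np.linspace(0.0, 2.0, 6), np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0])):
    print(d, mse_of_model_average(d, weights=(0.4, 0.6)))
```

The design minimizing this simulated MSE plays the role of the optimal design under the new criterion: it balances the bias of the misspecified candidate against the variance of the fit at the target point.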



2019, Vol. 141 (10)
Author(s): Piyush Pandita, Ilias Bilionis, Jitesh Panchal

Abstract
Bayesian optimal design of experiments (BODE) has been successful in acquiring information about a quantity of interest (QoI) that depends on a black-box function. BODE is characterized by sequentially querying the function at specific designs selected by an infill-sampling criterion. However, most current BODE methods operate in specific contexts, such as optimization or learning a universal representation of the black-box function. The objective of this paper is to design a BODE for estimating the statistical expectation of a physical response surface. This QoI is omnipresent in uncertainty propagation and design-under-uncertainty problems. Our hypothesis is that an optimal BODE should maximize the expected information gain in the QoI. We represent the information gain from a hypothetical experiment as the Kullback–Leibler (KL) divergence between the prior and posterior probability distributions of the QoI. The prior distribution of the QoI is conditioned on the observed data, and the posterior distribution is conditioned on the observed data plus a hypothetical experiment. The main contribution of this paper is the derivation of a semi-analytic formula for the expected information gain about the statistical expectation of a physical response. The developed BODE is validated on synthetic functions with varying numbers of input dimensions, and we demonstrate its performance on a steel wire manufacturing problem.
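Since the QoI (an expectation of a Gaussian process) is itself Gaussian under the GP surrogate, the KL divergence between its prior and posterior distributions has a closed form for univariate Gaussians. A minimal sketch, with hypothetical moments standing in for the pre- and post-experiment QoI distributions:

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians, in closed form."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Hypothetical QoI moments before and after a candidate experiment:
print(kl_gauss(mu0=0.2, var0=0.50, mu1=0.35, var1=0.30))
```

The acquisition in this line of work is the expectation of such a divergence over the hypothetical experiment's outcome, which is what the paper's semi-analytic formula evaluates.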



Author(s): Piyush Pandita, Ilias Bilionis, Jitesh Panchal

Acquiring information about noisy, expensive black-box functions (computer simulations or physical experiments) is a tremendously challenging problem. Finite computational and financial resources restrict the application of traditional methods for design of experiments. When the quantity of interest (QoI) in a problem depends on an expensive black-box function, further hurdles arise from numerical errors and stochastic approximation errors. Bayesian optimal design of experiments has been reasonably successful in guiding the designer towards the QoI for problems of this kind, usually by sequentially querying the function at designs selected by an infill-sampling criterion compatible with utility theory. However, most current methods are designed to work only when the goal is optimizing or inferring the black-box function itself. We aim to construct a heuristic that can deal with the above problems regardless of the QoI. This paper applies that heuristic to infer a specific QoI, namely the expectation (expected value) of the function. The Kullback–Leibler (KL) divergence is prominent among the techniques used to quantify information gain, and we derive an expression for the expected KL divergence to sequentially infer our QoI. The analytical tractability provided by the Karhunen–Loève expansion around the Gaussian process (GP) representation of the black-box function allows us to circumvent numerical issues associated with sample averaging. The proposed methodology can be extended to any QoI under reasonable assumptions. The method is verified and validated on two synthetic functions with varying levels of complexity, and we demonstrate it on a steel wire manufacturing problem.
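The Karhunen–Loève device referred to here replaces sample averaging over GP paths with a finite expansion in the kernel's eigenpairs. A minimal discrete sketch follows; the RBF kernel, the grid, and the truncation level are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def discrete_kl_expansion(kernel_fn, grid, n_terms=10):
    """Truncated Karhunen-Loeve expansion of a zero-mean GP on a grid.

    Eigendecompose the kernel matrix; sample paths are
        f = sum_i sqrt(lam_i) * xi_i * phi_i,  xi_i ~ N(0, 1),
    keeping the n_terms largest eigenpairs.
    """
    K = kernel_fn(grid[:, None], grid[None, :])   # (n, n) kernel matrix
    lam, phi = np.linalg.eigh(K)                  # eigenvalues in ascending order
    lam, phi = lam[::-1][:n_terms], phi[:, ::-1][:, :n_terms]
    lam = np.clip(lam, 0.0, None)                 # guard tiny negative eigenvalues

    def sample(rng):
        xi = rng.standard_normal(n_terms)
        return phi @ (np.sqrt(lam) * xi)          # one GP sample path on the grid

    return lam, phi, sample

# RBF kernel on [0, 1]; draw one truncated-KL sample path.
rbf = lambda a, b: np.exp(-0.5 * ((a - b) / 0.2) ** 2)
grid = np.linspace(0.0, 1.0, 100)
lam, phi, sample = discrete_kl_expansion(rbf, grid, n_terms=8)
path = sample(np.random.default_rng(0))
print(path.shape, lam[:3])
```

Because the randomness is isolated in the finitely many coefficients xi_i, expectations over the GP reduce to Gaussian integrals over those coefficients, which is what makes the acquisition analytically tractable.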




