Calculating Partial Expected Value of Perfect Information via Monte Carlo Sampling Algorithms

2007, Vol 27 (4), pp. 448-470
Author(s): Alan Brennan, Samer Kharroubi, Anthony O'Hagan, Jim Chilcott

2008, Vol 11 (7), pp. 1070-1080
Author(s): Jan B. Oostenbrink, Maiwenn J. Al, Mark Oppe, Maureen P.M.H. Rutten-van Mölken

Author(s): Sarouyeh Khoshkholgh, Andrea Zunino, Klaus Mosegaard

Summary Any search or sampling algorithm for the solution of inverse problems needs guidance to be efficient. Many algorithms collect and apply information about the problem on the fly, and much improvement has been made in this way. However, as a consequence of the No-Free-Lunch Theorem, the only way to ensure significantly better performance of search and sampling algorithms is to build in as much external information about the problem as possible. In the special case of Markov Chain Monte Carlo (MCMC) sampling, we review how this is done through the choice of proposal distribution, and we show how this way of adding information about the problem can be made particularly efficient when the proposal is based on an approximate physics model of the problem. A highly nonlinear inverse scattering problem with a high-dimensional model space illustrates the efficiency gained through this approach.
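The idea of informing the MCMC proposal with an approximate physics model can be sketched on a toy problem. The following is a minimal illustration, not the authors' method: a scalar nonlinear forward model `g`, and an independence-sampler proposal centred on the solution of a linearized approximation `g_approx`. All names, model forms, and parameter values here are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy nonlinear forward model (illustrative assumption):
def g(m):
    return m + 0.2 * m ** 2

# Approximate (linearized) physics model informing the proposal:
def g_approx(m):
    return m

d_obs = 1.2      # observed datum
sigma = 0.1      # data-noise standard deviation
prior_std = 5.0  # broad Gaussian prior

def log_post(m):
    # Unnormalized log posterior: Gaussian likelihood times Gaussian prior.
    misfit = (g(m) - d_obs) ** 2 / (2 * sigma ** 2)
    prior = m ** 2 / (2 * prior_std ** 2)
    return -(misfit + prior)

# Informed independence proposal: solving the approximate model
# g_approx(m) = d_obs gives m ~ d_obs, so centre the proposal there.
prop_mean, prop_std = d_obs, 0.3

def log_q(m):
    # Unnormalized log density of the Gaussian proposal.
    return -(m - prop_mean) ** 2 / (2 * prop_std ** 2)

def mh_chain(n, m0=0.0):
    m, lp = m0, log_post(m0)
    samples, accepted = [], 0
    for _ in range(n):
        m_new = random.gauss(prop_mean, prop_std)
        lp_new = log_post(m_new)
        # Independence-sampler acceptance ratio includes q in both directions.
        log_alpha = (lp_new - lp) + (log_q(m) - log_q(m_new))
        if math.log(random.random()) < log_alpha:
            m, lp = m_new, lp_new
            accepted += 1
        samples.append(m)
    return samples, accepted / n

samples, acc_rate = mh_chain(20000)
```

Because the proposal already concentrates mass near the posterior (the exact model gives m = 1.0 for this datum), the chain mixes far faster than a blind random walk started from the prior would.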

