Uncertainty quantification of pollutant source retrieval: comparison of Bayesian methods with application to the Chernobyl and Fukushima Daiichi accidental releases of radionuclides

2017, Vol 143 (708), pp. 2886–2901
Author(s): Y. Liu, J.-M. Haussaire, M. Bocquet, Y. Roustan, O. Saunier, ...
Geophysics, 2019, Vol 84 (6), pp. R1003–R1020
Author(s): Georgia K. Stuart, Susan E. Minkoff, Felipe Pereira

Bayesian methods for full-waveform inversion allow quantification of uncertainty in the solution, including determination of interval estimates and posterior distributions of the model unknowns. Markov chain Monte Carlo (MCMC) methods produce posterior distributions subject to fewer assumptions, such as normality, than deterministic Bayesian methods. However, MCMC is computationally very expensive, requiring repeated solution of the wave equation for different velocity samples, and ultimately a large proportion of these samples (often 40%–90%) is rejected. We have evaluated a two-stage MCMC algorithm that uses a coarse-grid filter to quickly reject unacceptable velocity proposals, thereby reducing the computational expense of solving the velocity inversion problem and quantifying uncertainty. Our filter stage uses operator upscaling, which provides near-perfect speedup in parallel with essentially no communication between processes and produces data that are highly correlated with those obtained from the full fine-grid solution. Four numerical experiments demonstrate the efficiency and accuracy of the method. The two-stage MCMC algorithm produces the same results (i.e., posterior distributions and uncertainty information, such as medians and highest posterior density intervals) as Metropolis-Hastings MCMC. Thus, no information needed for uncertainty quantification is compromised when replacing the one-stage MCMC with the more computationally efficient two-stage MCMC. In four representative experiments, the two-stage method reduces the time spent on rejected models by one-third to one-half, which is important because most of the models tried during the course of the MCMC algorithm are rejected. Furthermore, the two-stage MCMC algorithm substantially reduces the overall time per trial, by as much as 40%, while increasing the acceptance rate from 9% to 90%.
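The core of this approach is a delayed-acceptance style two-stage Metropolis-Hastings step: a cheap coarse-grid evaluation screens each proposal, and only survivors pay for the fine-grid wave-equation solve. The sketch below illustrates that structure in Python under simplifying assumptions (a symmetric random-walk proposal; `log_post_coarse` and `log_post_fine` are illustrative stand-ins for the upscaled and fine-grid posteriors, not the authors' code).

```python
import numpy as np

# Minimal sketch of a two-stage (delayed-acceptance) Metropolis-Hastings sampler,
# assuming a symmetric random-walk proposal. `log_post_coarse` stands in for the
# cheap coarse-grid (upscaled) forward solve and `log_post_fine` for the full
# fine-grid solve; both names are illustrative.

def two_stage_mh(log_post_fine, log_post_coarse, v0, n_iter=5000, step=0.05, seed=None):
    rng = np.random.default_rng(seed)
    v = np.asarray(v0, dtype=float)
    lp_c, lp_f = log_post_coarse(v), log_post_fine(v)
    chain, n_fine_solves = [v.copy()], 1

    for _ in range(n_iter):
        v_prop = v + step * rng.standard_normal(v.shape)    # symmetric proposal

        # Stage 1: screen the proposal with the cheap coarse-grid posterior.
        lp_c_prop = log_post_coarse(v_prop)
        if np.log(rng.uniform()) >= lp_c_prop - lp_c:
            chain.append(v.copy())                           # rejected cheaply
            continue

        # Stage 2: only now pay for the expensive fine-grid solve.
        lp_f_prop = log_post_fine(v_prop)
        n_fine_solves += 1
        # Acceptance ratio corrected for the stage-1 screening.
        if np.log(rng.uniform()) < (lp_f_prop - lp_f) - (lp_c_prop - lp_c):
            v, lp_c, lp_f = v_prop, lp_c_prop, lp_f_prop
        chain.append(v.copy())

    return np.array(chain), n_fine_solves
```

Proposals rejected at stage 1 never trigger a fine-grid solve, which is where the reported savings on rejected models come from.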


2020, Vol 1
Author(s): Jarkko Suuronen, Muhammad Emzir, Sari Lasanen, Simo Särkkä, Lassi Roininen

Abstract: X-ray tomography has applications in various industrial fields such as the sawmill, oil and gas, chemical, biomedical, and geotechnical engineering industries. In this article, we study Bayesian methods for X-ray tomography reconstruction. In Bayesian methods, the inverse problem of tomographic reconstruction is solved with the help of a statistical prior distribution, which encodes the possible internal structures by assigning probabilities for smoothness and edge distribution of the object. We compare Gaussian random field priors, which favor smoothness, to non-Gaussian total variation (TV), Besov, and Cauchy priors, which promote sharp edges and high- and low-contrast areas in the object. We also present computational schemes for solving the resulting high-dimensional Bayesian inverse problem with 100,000–1,000,000 unknowns. We study the applicability of a no-U-turn variant of Hamiltonian Monte Carlo (HMC) and of a more classical adaptive Metropolis-within-Gibbs (MwG) algorithm to enable full uncertainty quantification of the reconstructions. We also compute maximum a posteriori (MAP) estimates with the limited-memory BFGS (Broyden–Fletcher–Goldfarb–Shanno) optimization algorithm. As a first industrial application, we consider sawmill X-ray log tomography. The logs have knots, rotten parts, and possibly even metallic pieces, making them good examples for non-Gaussian priors. Secondly, we study drill-core rock sample tomography, an example from the oil and gas industry; in that case, we compare the priors without uncertainty quantification. We show that Cauchy priors produce fewer artefacts than the other choices, especially with sparse, high-noise measurements, and that choosing HMC enables systematic uncertainty quantification, provided that the posterior is not pathologically multimodal or heavy-tailed.
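As a concrete illustration of one ingredient above, the following is a minimal sketch of a MAP reconstruction with an edge-preserving Cauchy difference prior, optimized with L-BFGS. The forward operator, problem sizes, and hyperparameters (`A`, `lam`, `sigma`) are illustrative stand-ins, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MAP reconstruction: Gaussian likelihood plus a Cauchy prior on first
# differences. The forward operator A is a random stand-in for a real X-ray
# projection matrix; lam and sigma are illustrative hyperparameters.

rng = np.random.default_rng(0)
n, m = 64, 48                                      # unknowns, measurements (toy sizes)
A = rng.standard_normal((m, n)) / np.sqrt(n)
x_true = np.zeros(n); x_true[20:40] = 1.0          # piecewise-constant "object"
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(m)

lam = 0.05                                          # Cauchy scale parameter

def neg_log_posterior(x):
    misfit = A @ x - y
    diffs = np.diff(x)
    # Gaussian data misfit + Cauchy difference prior (edge-preserving).
    return 0.5 * misfit @ misfit / sigma**2 + np.sum(np.log(lam**2 + diffs**2))

res = minimize(neg_log_posterior, np.zeros(n), method="L-BFGS-B")
x_map = res.x
print("relative error:", np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```

For full uncertainty quantification, the same unnormalized posterior would instead be sampled, e.g. with a no-U-turn HMC sampler or Metropolis-within-Gibbs, as described in the abstract.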


2013, Vol 7 (4)
Author(s): T. J. Sullivan, M. McKerns, M. Ortiz, H. Owhadi, C. Scovel

We discuss recent mathematical and computational results on uncertainty quantification (UQ) in the presence of uncertainty about the correct probabilistic and physical models. Such UQ problems can be formulated as constrained optimization problems in which the available information acts as the constraints; this yields optimal assessments of risk and offers advantages for interdisciplinary communication and open science. We also report consequences of this point of view for the robustness of Bayesian methods under prior perturbation.
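A toy instance of this "information as constraints" viewpoint: among all probability measures on [0, 1] whose mean is at most m, one asks how large P(X ≥ a) can be, and reduction results of this kind imply the optimum is attained by a measure with at most two support points. The brute-force sketch below (the values of m, a and the grid resolutions are arbitrary) recovers the classical Markov bound m/a.

```python
import numpy as np

# Worst-case P(X >= a) over all measures on [0, 1] with mean <= m, searched
# only over two-point measures (which suffice by the reduction theorems).
m, a = 0.2, 0.5
grid = np.linspace(0.0, 1.0, 101)
weights = np.linspace(0.0, 1.0, 51)

best = 0.0
for x1 in grid:
    for x2 in grid:
        for w in weights:                            # weight on x1
            mean = w * x1 + (1 - w) * x2
            if mean <= m + 1e-12:
                p = w * (x1 >= a) + (1 - w) * (x2 >= a)
                best = max(best, p)

print("numerical worst case:", best)    # ~0.4
print("Markov bound m/a    :", m / a)   # 0.4
```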


Author(s): Yufeng Xia, Jun Zhang, Tingsong Jiang, Zhiqiang Gong, Wen Yao, ...

Abstract: Quantifying predictive uncertainty in deep neural networks is a challenging and as yet unsolved problem. Existing quantification approaches fall into two lines. Bayesian methods provide a complete uncertainty quantification theory but are often not scalable to large-scale models. Along the other line, non-Bayesian methods have good scalability and can quantify uncertainty with high quality. The most remarkable idea in this line is Deep Ensemble, but it is limited in practice by its expensive computational cost. We therefore propose HatchEnsemble to improve the efficiency and practicality of Deep Ensemble. The main idea is to use function-preserving transformations, ensuring that the HatchNets inherit the knowledge learned by a single model called the SeedNet. This process is called hatching, and a HatchNet is obtained by continuously widening the SeedNet. Based on this method, two different hatches are proposed, for ensembling networks of the same and of different architectures, respectively. To ensure the diversity of the models, we also add random noise to the parameters during hatching. Experiments on both clean and corrupted datasets show that HatchEnsemble gives competitive prediction performance and better-calibrated uncertainty quantification in a shorter time than the baselines.
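The hatching step rests on function-preserving network transformations: widening a layer without changing the input-output map, then perturbing the copies for diversity. The numpy sketch below shows a Net2WiderNet-style widening of a two-layer MLP's hidden layer in that spirit; the function and variable names are illustrative, not the paper's API.

```python
import numpy as np

# Function-preserving widening of the hidden layer of a two-layer MLP.
def relu(x):
    return np.maximum(x, 0.0)

def widen_hidden(W1, b1, W2, new_width, noise=0.0, seed=None):
    """Widen the hidden layer from W1.shape[1] to new_width; for noise=0 the
    network's input-output map is left unchanged."""
    rng = np.random.default_rng(seed)
    old_width = W1.shape[1]
    # Each new unit copies a randomly chosen existing unit.
    mapping = np.concatenate([np.arange(old_width),
                              rng.integers(0, old_width, new_width - old_width)])
    counts = np.bincount(mapping, minlength=old_width)

    W1_new = W1[:, mapping] + noise * rng.standard_normal((W1.shape[0], new_width))
    b1_new = b1[mapping]
    # Split each replicated unit's outgoing weights so contributions still sum up.
    W2_new = W2[mapping, :] / counts[mapping, None]
    return W1_new, b1_new, W2_new

# Check function preservation on a toy "SeedNet".
rng = np.random.default_rng(1)
W1, b1, W2 = rng.standard_normal((5, 8)), rng.standard_normal(8), rng.standard_normal((8, 3))
x = rng.standard_normal((10, 5))
y_seed = relu(x @ W1 + b1) @ W2

W1w, b1w, W2w = widen_hidden(W1, b1, W2, new_width=16, noise=0.0)
y_hatch = relu(x @ W1w + b1w) @ W2w
print(np.allclose(y_seed, y_hatch))   # True: widening preserved the function
```

Setting `noise > 0` perturbs the widened weights, which is the mechanism the abstract describes for keeping ensemble members diverse.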


2019, Vol 62 (3), pp. 577–586
Author(s): Garnett P. McMillan, John B. Cannon

Purpose: This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides. Method: First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we show how one would calculate the posterior distribution from the prior distribution and the likelihood of the parameter. Next, we move to an example from auditory research by considering the effect of sound therapy for reducing the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions that allow for easy calculations no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions given by a straightforward Bayesian approach. Conclusion: Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to incorporate existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly. Supplemental Material: https://doi.org/10.23641/asha.7822592
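For the conjugate case mentioned in the Method section, the posterior can be written in closed form. A minimal sketch of the normal-normal update is given below; the numbers standing in for tinnitus-loudness observations are made up purely for illustration. When the conjugacy assumptions no longer hold, the same posterior is explored with MCMC instead.

```python
import numpy as np

# Conjugate normal-normal update: normal prior on a mean, known observation
# variance, so the posterior is again normal with a closed-form mean and sd.
def normal_posterior(prior_mean, prior_sd, data, obs_sd):
    n = len(data)
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / obs_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * np.mean(data)) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

# Prior belief about the change in perceived loudness (arbitrary units),
# updated with a small batch of observations; the posterior then serves as the
# prior for the next batch ("iterative updating of priors").
data = np.array([-4.1, -2.8, -5.0, -3.3, -4.6])
post_mean, post_sd = normal_posterior(prior_mean=0.0, prior_sd=5.0, data=data, obs_sd=3.0)
print(post_mean, post_sd)
```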

