Active Learning for Gaussian Process Considering Uncertainties With Application to Shape Control of Composite Fuselage

Author(s): Xiaowei Yue, Yuchen Wen, Jeffrey H. Hunt, Jianjun Shi

Entropy, 2020, Vol 22 (8), pp. 890
Author(s): Sergey Oladyshkin, Farid Mohammadi, Ilja Kroeker, Wolfgang Nowak

Gaussian process emulators (GPE) are a machine learning approach that replicates computationally demanding models using training runs of that model. Constructing such a surrogate is challenging and, in the context of Bayesian inference, the training runs should be well invested. The current paper offers a fully Bayesian view on GPEs for Bayesian inference, accompanied by Bayesian active learning (BAL). We introduce three BAL strategies that adaptively identify training sets for the GPE using information-theoretic arguments. The first strategy relies on the Bayesian model evidence, which indicates how well the GPE matches the measurement data; the second is based on relative entropy, which indicates the relative information gain for the GPE; and the third is founded on information entropy, which indicates the missing information in the GPE. We illustrate the performance of the three strategies on an analytical benchmark and a carbon-dioxide benchmark. The paper shows convergence against a reference solution and quantifies post-calibration uncertainty for the three introduced strategies. We conclude that the Bayesian model evidence-based and relative entropy-based strategies outperform the entropy-based strategy, because the latter can be misleading during BAL. The relative entropy-based strategy, in turn, demonstrates superior performance to the Bayesian model evidence-based strategy.
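All three criteria can be estimated from the same ingredients: likelihoods of prior parameter samples evaluated through the emulator. The sketch below shows one common Monte Carlo estimator for each quantity; the function name and the uniform-prior-weight assumption are illustrative, not the authors' implementation.

```python
import numpy as np

def bal_scores(log_likelihoods):
    """Given log-likelihoods of prior samples evaluated through a GP emulator,
    return Monte Carlo estimates of the three information-theoretic quantities
    used as BAL criteria: Bayesian model evidence (BME), relative entropy
    (KL divergence from the uniform prior weights to the posterior weights),
    and information entropy of the posterior weights."""
    m = log_likelihoods.max()
    L = np.exp(log_likelihoods - m)          # shifted likelihoods, numerically stable
    bme = L.mean() * np.exp(m)               # BME = prior-mean of the likelihood
    w = L / L.sum()                          # normalized posterior weights
    n = len(w)
    # D_KL(posterior || prior) with uniform prior weights 1/n
    rel_entropy = np.sum(w * np.log(w * n + 1e-300))
    # information entropy of the posterior weights
    info_entropy = -np.sum(w * np.log(w + 1e-300))
    return bme, rel_entropy, info_entropy
```

With a flat likelihood the posterior equals the prior, so the relative entropy is zero and the information entropy reaches its maximum log(n), which is why an entropy criterion alone can be ambiguous as a learning signal.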


2015, Vol 167, pp. 122-131
Author(s): Jin Zhou, Shiliang Sun

2018, Vol 149 (17), pp. 174114
Author(s): Elena Uteva, Richard S. Graham, Richard D. Wilkinson, Richard J. Wheatley

2021, Vol 125, pp. 101360
Author(s): Jorge Chang, Jiseob Kim, Byoung-Tak Zhang, Mark A. Pitt, Jay I. Myung

2020, Vol 3 (1), pp. 3
Author(s): Riccardo Trinchero, Flavio Canavero

This paper presents a preliminary version of an active learning (AL) scheme for sample selection aimed at developing a surrogate model for uncertainty quantification based on Gaussian process regression. The proposed AL strategy iteratively searches for new candidate points to include in the training set by minimizing the relative posterior standard deviation provided by the Gaussian process regression surrogate. The scheme is applied to construct a surrogate model for the statistical analysis of the efficiency of a switching buck converter as a function of seven uncertain parameters. The performance of the surrogate model built via the proposed active learning method is compared with that of an equivalent model built via Latin hypercube sampling. The results of a Monte Carlo simulation with the full computational model serve as the reference.
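The core loop of such a scheme is uncertainty sampling on the GP posterior: refit the surrogate, evaluate its predictive standard deviation on a candidate pool, and add the most uncertain candidate. A minimal numpy sketch, assuming a fixed RBF kernel and plain (rather than relative) posterior standard deviation; all names and hyperparameters are illustrative:

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between two point sets of shape (n, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior_std(X_train, X_cand, noise=1e-6):
    """Posterior predictive standard deviation of a zero-mean GP at X_cand.
    Note: for fixed hyperparameters the GP variance depends only on input
    locations, not on the observed targets."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_cand, X_train)
    Kss = rbf(X_cand, X_cand)
    var = np.diag(Kss - Ks @ np.linalg.inv(K) @ Ks.T)
    return np.sqrt(np.maximum(var, 0.0))

def select_next(X_train, X_cand):
    """Return the index of the candidate where the surrogate is most uncertain."""
    return int(np.argmax(gp_posterior_std(X_train, X_cand)))
```

In a full AL loop the selected candidate is simulated with the expensive model (here, the buck converter), appended to the training set, and the surrogate is refit until the budget is exhausted.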


2014, Vol 26 (8), pp. 1519-1541
Author(s): Mijung Park, J. Patrick Weller, Gregory D. Horwitz, Jonathan W. Pillow

A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology experiments. These methods can accelerate the characterization of such maps through the intelligent, adaptive selection of stimuli. Specifically, we explore the manner in which the prior and utility function used in Bayesian active learning affect stimulus selection and performance. Our approach relies on a flexible model that involves a nonlinearly transformed Gaussian process (GP) prior over maps and conditionally Poisson spiking. We show that infomax learning, which selects stimuli to maximize the information gain about the firing rate map, exhibits strong dependence on the seemingly innocuous choice of nonlinear transformation function. We derive an alternate utility function that selects stimuli to minimize the average posterior variance of the firing rate map, and we analyze the surprising relationship between prior parameterization, stimulus selection, and active learning performance in GP-Poisson models. We apply these methods to color tuning measurements of neurons in macaque primary visual cortex.
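The dependence on the nonlinearity is easy to see with an exponential transform: if the latent GP value at a stimulus is f ~ N(mu, s2), the rate lambda = exp(f) is log-normal, and its posterior variance mixes mu and s2. A stimulus with a smaller latent variance can therefore still have the largest rate variance. A small sketch of a minimum-variance utility under this assumption (the exponential link and function names are illustrative, not the paper's exact model):

```python
import numpy as np

def rate_map_variance(mu, s2):
    """Posterior variance of the firing rate lambda = exp(f) when the
    latent GP value f ~ N(mu, s2), using log-normal moments:
    Var[lambda] = (exp(s2) - 1) * exp(2*mu + s2)."""
    return (np.exp(s2) - 1.0) * np.exp(2.0 * mu + s2)

def select_stimulus(mu, s2):
    """Minimum-variance utility: probe the stimulus whose rate estimate
    currently has the largest posterior variance."""
    return int(np.argmax(rate_map_variance(mu, s2)))
```

Here a stimulus with high latent mean but modest latent variance can win over one with large latent variance, which is exactly why utilities defined on the latent GP and utilities defined on the transformed rate map select different stimuli.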

