Design of Computer Experiments
Recently Published Documents


TOTAL DOCUMENTS: 36 (five years: 5)

H-INDEX: 10 (five years: 0)

2021, pp. 875529302110333
Author(s): Henry Burton, Hongquan Xu, Zhengxiang Yi

Surrogate models are growing in popularity within the earthquake engineering community because of their ability to increase the efficiency of computationally intensive tasks. This article examines the design of computer experiments (DoCE) for the purpose of developing seismic surrogate models. Two categories of DoCE approaches are discussed while underscoring the benefits and drawbacks of specific methods. Further insight is provided through an illustrative case study that develops surrogate models to predict the median collapse capacity and expected annual losses in single-family woodframe buildings with cripple walls. The implications of the chosen DoCE method for the predictive performance and efficiency (in terms of the required number of explicit simulations) of the associated surrogate models are closely examined. The results show that both factorial and Latin hypercube designs [Formula: see text] do not perform well when different approaches are used to generate the training and testing sets (i.e. out-of-design-type testing). An efficient hybrid design that combines an orthogonal array-based composite design with a small number of [Formula: see text] samples is shown to produce a surrogate model with superior predictive performance.
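As background for the Latin hypercube designs compared in this abstract, here is a minimal sketch of basic Latin hypercube sampling on the unit cube, written in plain NumPy. The function name and interface are illustrative, not taken from the article, and this is the standard unoptimized construction rather than the article's hybrid design:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw a Latin hypercube design on the unit cube [0, 1]^d.

    Each dimension is split into n_samples equal strata; exactly one
    point lands in each stratum, and strata are paired across
    dimensions by independent random permutations.
    """
    rng = np.random.default_rng(rng)
    # One uniform draw inside each stratum, per dimension.
    u = rng.random((n_samples, n_dims))
    points = (np.arange(n_samples)[:, None] + u) / n_samples
    # Shuffle the stratum order independently in every dimension,
    # so the pairing of strata across dimensions is random.
    for d in range(n_dims):
        points[:, d] = rng.permutation(points[:, d])
    return points

# Example: an 8-point design in 3 dimensions.
X = latin_hypercube(8, 3, rng=0)
```

Unlike a full factorial design, the number of points here is chosen independently of the dimension, which is what makes Latin hypercube designs attractive when each simulation is expensive.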


Author(s): Javier Garcia-Barcos, Ruben Martinez-Cantin

Bayesian optimization has become a popular method for applications like the design of computer experiments or hyperparameter tuning of expensive models, where sample efficiency is mandatory. These situations may also call for high-throughput computing, where distributed and scalable architectures are a necessity. However, Bayesian optimization is mostly sequential; even parallel variants require certain computations between samples, limiting the parallelization bandwidth. Thompson sampling has previously been applied to distributed Bayesian optimization, but, compared with other acquisition functions in the sequential setting, it is known to perform suboptimally. In this paper, we present a new method for fully distributed Bayesian optimization that can be combined with any acquisition function. Our approach treats Bayesian optimization as a partially observable Markov decision process. In this context, stochastic policies, such as the Boltzmann policy, have interesting properties that can also be studied for Bayesian optimization. Furthermore, the Boltzmann policy trivially allows a distributed Bayesian optimization implementation with a high level of parallelism and scalability. We present results on several benchmarks and applications that show the performance of our method.
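The Boltzmann policy mentioned above turns acquisition values into a sampling distribution, which is what lets independent workers pick points without coordinating. A minimal sketch of that selection step (a softmax over candidate acquisition values; the function name and interface are illustrative, and this omits the surrogate model and the rest of the optimization loop):

```python
import numpy as np

def boltzmann_select(acq_values, temperature, rng=None):
    """Sample a candidate index with probability proportional to
    exp(acquisition / temperature) -- a Boltzmann (softmax) policy.

    Because each worker needs only the shared surrogate's acquisition
    values and an independent random draw, many workers can choose
    their next evaluation point without synchronizing with each other.
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(acq_values, dtype=float) / temperature
    z -= z.max()            # subtract max for numerical stability
    p = np.exp(z)
    p /= p.sum()            # normalize into a probability vector
    return rng.choice(len(p), p=p)
```

At low temperature the policy is nearly greedy (close to maximizing the acquisition function); at high temperature it explores more, and the stochasticity keeps parallel workers from all picking the same point.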


2017, Vol. 106, pp. 71-95
Author(s): Sushant S. Garud, Iftekhar A. Karimi, Markus Kraft
