Improving SWATH Seakeeping Performance using Multi-Fidelity Gaussian Process and Bayesian Optimization

2018 ◽  
Vol 62 (4) ◽  
pp. 223-240 ◽  
Author(s):  
Luca Bonfiglio ◽  
Paris Perdikaris ◽  
Giuliano Vernengo ◽  
João Seixas de Medeiros ◽  
George Karniadakis

Author(s):  
Arunabha Batabyal ◽  
Sugrim Sagar ◽  
Jian Zhang ◽  
Tejesh Dube ◽  
Xuehui Yang ◽  
...  

A persistent problem in the selective laser sintering process is maintaining the quality of additively manufactured parts, which is affected by various sources of uncertainty. In this work, a two-particle phase-field microstructure model has been analyzed. The two input parameters treated as sources of uncertainty were surface diffusivity and inter-particle distance. The response quantity of interest (QOI) was the size of the neck region that develops between the two particles. Two cases, with equal-sized and unequal-sized particles, were studied. It was observed that the neck size increased with increasing surface diffusivity and decreased with increasing inter-particle distance, irrespective of particle size. Sensitivity analysis found that inter-particle distance has more influence on the variation in neck size than surface diffusivity. Gaussian process regression was used to create a surrogate model of the QOI, and Bayesian optimization was used to find optimal values of the input parameters. For equal-sized particles, optimization using probability of improvement gave optimal values of surface diffusivity and inter-particle distance of 23.8268 and 40.0001, respectively, while expected improvement as the acquisition function gave 23.9874 and 40.7428. For unequal-sized particles, the optimal design values from probability of improvement were 23.9700 and 33.3005, while those from expected improvement were 23.9893 and 33.9627. The optimization results from the two acquisition functions were in good agreement.
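The optimization loop described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: `neck_size` is a hypothetical stand-in for the phase-field simulation, the parameter bounds are assumed, and both acquisition functions named in the abstract (probability of improvement and expected improvement) are shown.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def neck_size(x):
    # Hypothetical stand-in for the two-particle phase-field simulation.
    diffusivity, distance = x
    return diffusivity / (1.0 + 0.05 * distance)

bounds = np.array([[20.0, 25.0], [30.0, 45.0]])  # assumed parameter ranges
rng = np.random.default_rng(0)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(8, 2))
y = np.array([neck_size(x) for x in X])

gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 5.0]),
                              normalize_y=True)

def expected_improvement(Xc, gp, y_best, xi=0.01):
    mu, sigma = gp.predict(Xc, return_std=True)
    z = (mu - y_best - xi) / np.maximum(sigma, 1e-9)
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_improvement(Xc, gp, y_best, xi=0.01):
    mu, sigma = gp.predict(Xc, return_std=True)
    return norm.cdf((mu - y_best - xi) / np.maximum(sigma, 1e-9))

for _ in range(20):  # sequential design loop
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2048, 2))
    acq = expected_improvement(cand, gp, y.max())  # or probability_of_improvement
    x_next = cand[np.argmax(acq)]
    X, y = np.vstack([X, x_next]), np.append(y, neck_size(x_next))

print("best inputs:", X[np.argmax(y)], "best neck size:", y.max())
```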


2021 ◽  
Author(s):  
Bo Shen ◽  
Raghav Gnanasambandam ◽  
Rongxuan Wang ◽  
Zhenyu Kong

In many scientific and engineering applications, Bayesian optimization (BO) is a powerful tool for hyperparameter tuning of machine learning models, materials design and discovery, and related problems. BO guides the choice of experiments sequentially to find a good combination of design points in as few experiments as possible, and can be formulated as the problem of optimizing a "black-box" function. In contrast to single-task BO, multi-task Bayesian optimization is a general method for efficiently optimizing multiple different but correlated "black-box" functions. Previous multi-task Bayesian optimization algorithms query a point to be evaluated for all tasks in each round of search, which is inefficient: when tasks are correlated, it is not necessary to evaluate every task at a given query point. The objective of this work is therefore to develop an algorithm for multi-task Bayesian optimization with automatic task selection, so that only one task evaluation is needed per query round. Specifically, a new algorithm, the multi-task Gaussian process upper confidence bound (MT-GPUCB), is proposed to achieve this objective. MT-GPUCB is a two-step algorithm: the first step chooses which query point to evaluate, and the second step automatically selects the most informative task to evaluate. Under the bandit setting, a theoretical analysis shows that the proposed MT-GPUCB is no-regret under mild conditions. The algorithm is verified experimentally on a range of synthetic functions as well as real-world problems, and the results clearly show the advantages of the query strategy for both design point and task.
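A rough sketch of the two-step query strategy is given below. This is an assumption-laden illustration, not the paper's implementation: the paper couples tasks through a multi-task GP, whereas this sketch keeps an independent GP per task for brevity, picks the query point by a UCB rule, and then evaluates only the task with the largest posterior uncertainty at that point.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Two toy correlated tasks standing in for expensive "black-box" functions.
tasks = [lambda x: -(x - 0.3) ** 2, lambda x: -(x - 0.4) ** 2]
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)   # candidate query points
data = [([], []) for _ in tasks]                    # (X, y) per task
gps = [GaussianProcessRegressor(RBF(0.2), alpha=1e-4, optimizer=None)
       for _ in tasks]

beta = 2.0  # exploration weight in the UCB rule
for t in range(30):
    mu_sig = []
    for (Xs, ys), gp in zip(data, gps):
        if Xs:
            gp.fit(np.array(Xs), np.array(ys))
            mu, sig = gp.predict(grid, return_std=True)
        else:  # no observations yet: flat mean, unit uncertainty
            mu, sig = np.zeros(len(grid)), np.ones(len(grid))
        mu_sig.append((mu, sig))
    # Step 1: choose the query point maximizing UCB across tasks.
    ucb = np.max([m + np.sqrt(beta) * s for m, s in mu_sig], axis=0)
    i = int(np.argmax(ucb))
    # Step 2: evaluate only the task that is most uncertain at that point.
    k = int(np.argmax([s[i] for _, s in mu_sig]))
    data[k][0].append(grid[i])
    data[k][1].append(tasks[k](grid[i, 0]))
```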


2021 ◽  
Vol 5 (4 (113)) ◽  
pp. 45-54
Author(s):  
Alexander Nechaev ◽  
Vasily Meltsov ◽  
Dmitry Strabykin

Many advanced recommender models are implemented using matrix factorization algorithms, and experiments show that the quality of their performance depends significantly on the selected hyperparameters. An analysis of the effectiveness of various methods for optimizing these hyperparameters was carried out. It showed that classical Bayesian optimization, which treats the model as a "black box", remains the standard solution. However, models based on matrix factorization have a number of characteristic features, and exploiting them makes it possible to modify the optimization process so as to reduce the time required to find the sought points without losing quality. A modification of the Gaussian process kernel, which serves as the surrogate model for the loss function during Bayesian optimization, was proposed. In the first iterations, the described modification increases the variance of the values predicted by the Gaussian process over a given region of the hyperparameter space. In some cases, this makes it possible to obtain more information about the real shape of the loss function under study in less time. Experiments were carried out using well-known data sets for recommender systems. The total optimization time with the modification was reduced by 16% (263 seconds) at best and remained the same at worst (less than one second of difference). The expected error of the recommender model did not change (the absolute difference in values is two orders of magnitude smaller than the error reduction achieved during optimization). Thus, the proposed modification helps find a better set of hyperparameters in less time without loss of quality.
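One way such a kernel modification could look is sketched below; this is an assumption about the general idea, not the paper's exact kernel. Adding a rank-one term s(x)s(x') keeps the kernel positive semi-definite while raising the prior, and hence predictive, variance wherever the bump function s is large; annealing the bump height to zero recovers the standard kernel in later iterations.

```python
import numpy as np

def rbf(X1, X2, length=1.0):
    # Standard squared-exponential kernel.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * d2 / length**2)

def bump(X, center, width, height):
    # Smooth indicator of the region whose predicted variance is inflated.
    return height * np.exp(-0.5 * np.sum((X - center) ** 2, 1) / width**2)

def modified_kernel(X1, X2, center, width, height, length=1.0):
    # RBF plus a rank-one term: still a valid PSD kernel, with extra
    # prior variance concentrated around `center`.
    return rbf(X1, X2, length) + np.outer(bump(X1, center, width, height),
                                          bump(X2, center, width, height))

def gp_posterior(Xtr, ytr, Xte, kern, noise=1e-6):
    # Textbook GP regression equations with the modified kernel.
    K = kern(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks, Kss = kern(Xtr, Xte), kern(Xte, Xte)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ytr
    var = np.diag(Kss) - np.sum(Ks * sol, 0)
    return mu, var

# Early iterations: height > 0 steers the acquisition toward the region;
# later iterations: height -> 0 recovers the ordinary RBF surrogate.
kern = lambda A, B: modified_kernel(A, B, center=np.array([0.5]),
                                    width=0.1, height=0.5)
Xtr, ytr = np.array([[0.1], [0.9]]), np.array([0.2, 0.4])
mu, var = gp_posterior(Xtr, ytr, np.linspace(0, 1, 5).reshape(-1, 1), kern)
```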


2021 ◽  
Author(s):  
Mashall Aryan

The solution to many science and engineering problems includes identifying the minimum or maximum of an unknown continuous function whose evaluation incurs non-negligible costs in resources such as money, time, human attention, or computational processing. In such cases, the choice of new points to evaluate is critical. A successful approach has been to choose these points by considering a distribution over plausible surfaces, conditioned on all previous points and their evaluations. In this sequential two-step strategy, also known as Bayesian optimization, a prior is first defined over possible functions and updated to a posterior in light of the available observations. This posterior, namely the surrogate model, is then used to form an infill criterion that determines the next location to sample. By far the most common prior distribution and infill criterion are the Gaussian process and expected improvement, respectively.

The popularity of Gaussian processes in Bayesian optimization is partially due to their ability to represent the posterior in closed form. Nevertheless, the Gaussian process suffers from several shortcomings that directly affect its performance: inference scales poorly with the amount of data, numerical stability degrades with the number of data points, and strong assumptions about the observation model are required that might not be consistent with reality. These drawbacks encourage us to seek better alternatives. This thesis studies the application of neural networks to enhance Bayesian optimization and proposes several Bayesian optimization methods that use neural networks either as surrogates or in the infill criterion.

The thesis introduces a novel Bayesian optimization method in which Bayesian neural networks (BNNs) are used as the surrogate, reducing the computational complexity of inference in the surrogate from cubic in the number of observations (as with GPs) to linear. Different variations of BNNs are put into practice and inferred using Monte Carlo sampling. The results show that a Monte Carlo BNN surrogate can perform better than, or at least comparably to, Gaussian process-based Bayesian optimization methods on a set of benchmark problems.

This work also develops a fast Bayesian optimization method with an efficient surrogate-building process. The new algorithm uses Bayesian random-vector functional link networks as the surrogate; in this family of models, inference is performed on only a small subset of the model parameters, while the rest are randomly drawn from a prior. The proposed methods are tested on a set of benchmark continuous functions and hyperparameter optimization problems, and the results show they are competitive with state-of-the-art Bayesian optimization methods.

The study further proposes a novel neural network-based infill criterion, in which locations to sample are found by minimizing the joint conditional likelihood of the new point and the parameters of a neural network. The results show that in Bayesian optimization methods with BNN surrogates, this new infill criterion outperforms expected improvement.

Finally, the thesis presents order-preserving generative models and uses them in a variational Bayesian context to infer implicit variational Bayesian neural network (IVBNN) surrogates for a new Bayesian optimization method. This inference mechanism is more efficient and scalable than Monte Carlo sampling. The results show that IVBNN can outperform the Monte Carlo BNN in Bayesian optimization of the hyperparameters of machine learning models.
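The random-vector functional link surrogate admits a particularly compact sketch, since inference reduces to Bayesian linear regression over fixed random features. The class below is a minimal illustration of that idea under assumed Gaussian priors, not the thesis code.

```python
import numpy as np

class BayesianRVFL:
    """Bayesian random-vector functional link surrogate (illustrative).

    Hidden weights are drawn once from a prior and kept fixed; Bayesian
    inference is performed only on the linear output layer, in closed form.
    """

    def __init__(self, in_dim, n_hidden=100, alpha=1.0, noise=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(in_dim, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)
        self.alpha, self.noise = alpha, noise

    def _features(self, X):
        # Direct links (X itself) plus random nonlinear features.
        return np.hstack([X, np.tanh(X @ self.W + self.b)])

    def fit(self, X, y):
        Phi = self._features(X)
        A = Phi.T @ Phi / self.noise + self.alpha * np.eye(Phi.shape[1])
        self.S = np.linalg.inv(A)                 # posterior covariance
        self.m = self.S @ Phi.T @ y / self.noise  # posterior mean
        return self

    def predict(self, X):
        Phi = self._features(X)
        mu = Phi @ self.m
        var = self.noise + np.sum(Phi @ self.S * Phi, 1)
        return mu, np.sqrt(var)
```

Because this surrogate yields a predictive mean and variance in closed form, it can be dropped into the same acquisition-function loop used with GP surrogates, and fitting cost grows linearly in the number of observations once the feature count is fixed.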


2021 ◽  
Vol 3 (11) ◽  
pp. 2170077
Author(s):  
Yee-Fun Lim ◽  
Chee Koon Ng ◽  
U.S. Vaitesswar ◽  
Kedar Hippalgaonkar

2021 ◽  
Author(s):  
Kundo Park ◽  
Youngsoo Kim ◽  
Minki Kim ◽  
Chihyeon Song ◽  
Jinkyoo Park ◽  
...  

The staggered platelet composite structure, one of the most well-known examples of biomimetics, is inspired by the microstructure of nacre, where stiff mineral platelets are stacked with a small fraction of soft polymer in a brick-and-mortar style. Significant efforts have been made to establish a framework for designing a staggered platelet pattern that achieves an excellent balance of toughness and stiffness. However, because the complexity of the failure mechanism of realistic composites leaves no analytical formula that accurately predicts their toughness, existing studies have investigated either idealized composites with simplified material properties or realistic composites designed by heuristics. In the present study, we propose a Bayesian optimization framework for designing a staggered platelet structure with high toughness. Gaussian process regression (GPR) was adopted to statistically model the complex relationship between the shape of the staggered platelet array and the resulting toughness, and a Markov chain Monte Carlo algorithm was used to determine the optimal kernel hyperparameter set for the GPR. Starting with 14 initial training data points collected through uniaxial tensile tests, a GPR-based Bayesian optimization using the expected improvement (EI) acquisition function was carried out. As a result, it was possible to design a staggered platelet pattern with a toughness 11% higher than that of the best sample in the initial training set, and this improvement was achieved after only three iterations of the optimization cycle. As this framework does not require any material theories or models, the process can easily be adapted to various other material optimization problems based on a limited set of experiments or computational simulations.
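A minimal sketch of this loop, with an assumed toy objective standing in for the tensile tests, might look as follows: a random-walk Metropolis sampler explores the GP's log marginal likelihood over the kernel hyperparameters, and EI is averaged over the resulting posterior samples before choosing the next design.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

X = np.random.default_rng(1).uniform(0, 1, (14, 2))  # 14 initial designs
y = -np.sum((X - 0.6) ** 2, axis=1)                  # stand-in for toughness

gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2]),
                              optimizer=None).fit(X, y)

def metropolis(gp, n=500, step=0.1, seed=0):
    # Random-walk Metropolis over the log-hyperparameters theta.
    rng = np.random.default_rng(seed)
    theta = gp.kernel_.theta.copy()
    ll = gp.log_marginal_likelihood(theta)
    samples = []
    for _ in range(n):
        prop = theta + step * rng.normal(size=theta.shape)
        ll_prop = gp.log_marginal_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return samples[n // 2:]  # discard burn-in

def ei(Xc, gp, y_best):
    mu, sig = gp.predict(Xc, return_std=True)
    z = (mu - y_best) / np.maximum(sig, 1e-9)
    return (mu - y_best) * norm.cdf(z) + sig * norm.pdf(z)

# Average EI over posterior hyperparameter samples, then pick the next design.
cand = np.random.default_rng(2).uniform(0, 1, (1000, 2))
acq = np.zeros(len(cand))
for theta in metropolis(gp)[::25]:
    g = GaussianProcessRegressor(gp.kernel_.clone_with_theta(theta),
                                 optimizer=None).fit(X, y)
    acq += ei(cand, g, y.max())
print("next design to test:", cand[np.argmax(acq)])
```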


Procedia CIRP ◽  
2020 ◽  
Vol 88 ◽  
pp. 306-311
Author(s):  
Markus Maier ◽  
Alisa Rupenyan ◽  
Mansur Akbari ◽  
Ruben Zwicker ◽  
Konrad Wegener
