Constrained multi-fidelity surrogate framework using Bayesian optimization with non-intrusive reduced-order basis

Author(s):  
Hanane Khatouri ◽  
Tariq Benamara ◽  
Piotr Breitkopf ◽  
Jean Demange ◽  
Paul Feliot

This article addresses the problem of constrained derivative-free optimization in a multi-fidelity (or variable-complexity) framework using Bayesian optimization techniques. It is assumed that the objective and constraints involved in the optimization problem can be evaluated using either an accurate but time-consuming computer program or a fast lower-fidelity one. In this setting, the aim is to solve the optimization problem using as few calls to the high-fidelity program as possible. To this end, it is proposed to use Gaussian process models with trend functions built from the projection of low-fidelity solutions on a reduced-order basis synthesized from scarce high-fidelity snapshots. A study on the ability of such models to accurately represent the objective and the constraints and a comparison of two improvement-based infill strategies are performed on a representative benchmark test case.
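A minimal sketch of the general construction described above (not the authors' implementation): a reduced-order basis is extracted from a few high-fidelity snapshots via SVD, the low-fidelity solution projected onto that basis supplies the trend, and a Gaussian process models the remaining discrepancy. The solvers `hf_field`/`lf_field` and the scalar objective are hypothetical stand-ins.

```python
# Minimal sketch (not the authors' implementation): a GP surrogate whose trend comes from
# projecting a low-fidelity solution onto a reduced-order basis built from HF snapshots.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

s = np.linspace(0.0, 1.0, 50)                       # spatial grid of the solution field

def hf_field(x):                                    # hypothetical expensive high-fidelity field
    return np.sin(4 * np.pi * s * x) + 0.1 * s * x ** 2

def lf_field(x):                                    # hypothetical cheap low-fidelity field
    return np.sin(4 * np.pi * s * x)

def objective(u):                                   # scalar quantity of interest from a field
    return float(np.mean(u ** 2))

# 1) Reduced-order basis (POD) from scarce high-fidelity snapshots.
x_hf = np.array([0.1, 0.5, 0.9])
snapshots = np.column_stack([hf_field(x) for x in x_hf])
basis = np.linalg.svd(snapshots, full_matrices=False)[0][:, :2]   # first two POD modes

# 2) Trend function: project the low-fidelity field onto the basis, then take the objective.
def trend(x):
    return objective(basis @ (basis.T @ lf_field(x)))

# 3) GP on the discrepancy between the high-fidelity objective and the trend.
y_hf = np.array([objective(hf_field(x)) for x in x_hf])
residual = y_hf - np.array([trend(x) for x in x_hf])
gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True).fit(x_hf.reshape(-1, 1), residual)

def surrogate(x):
    mu, sd = gp.predict(np.array([[x]]), return_std=True)
    return trend(x) + mu[0], sd[0]                  # mean and uncertainty for infill criteria

print(surrogate(0.3))
```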

Author(s):  
Guiying Li ◽  
Chao Qian ◽  
Chunhui Jiang ◽  
Xiaofen Lu ◽  
Ke Tang

Layer-wise magnitude-based pruning (LMP) is a very popular method for deep neural network (DNN) compression. However, tuning the layer-specific thresholds is a difficult task, since the space of threshold candidates is exponentially large and the evaluation is very expensive. Previous methods mainly rely on hand-tuning and require expertise. In this paper, we propose an automatic, optimization-based tuning approach named OLMP. The idea is to transform the threshold tuning problem into a constrained optimization problem (i.e., minimizing the size of the pruned model subject to a constraint on the accuracy loss), and then use powerful derivative-free optimization algorithms to solve it. To compress a trained DNN, OLMP is conducted within a new iterative pruning and adjusting pipeline. Empirical results show that OLMP achieves the best pruning ratio on LeNet-style models (i.e., 114 times for LeNet-300-100 and 298 times for LeNet-5) compared with some state-of-the-art DNN pruning methods, and can reduce the size of an AlexNet-style network up to 82 times without accuracy loss.
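As an illustration of the constrained formulation (not the OLMP code itself), the sketch below prunes each layer with its own magnitude threshold and searches the thresholds with a simple derivative-free random search under an accuracy-loss penalty. `evaluate_accuracy` is a hypothetical placeholder for validating the pruned network; the paper's actual optimizer and pruning-and-adjusting pipeline are more sophisticated.

```python
# Illustrative sketch only (not the OLMP code): layer-wise magnitude pruning with per-layer
# thresholds, with "minimize pruned size s.t. accuracy loss <= eps" handled via a penalty
# inside a simple derivative-free random search. All names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.normal(size=(784, 300)), rng.normal(size=(300, 100)), rng.normal(size=(100, 10))]

def prune(weights, thresholds):
    # Zero out weights whose magnitude falls below the layer-specific threshold.
    return [np.where(np.abs(w) < t, 0.0, w) for w, t in zip(weights, thresholds)]

def model_size(weights):
    return sum(int(np.count_nonzero(w)) for w in weights)

def evaluate_accuracy(weights):
    # Placeholder: in practice this evaluates the pruned network on a validation set.
    frac_kept = model_size(weights) / sum(w.size for w in weights)
    return 0.98 * min(1.0, 0.2 + frac_kept)

baseline_acc, eps = evaluate_accuracy(weights), 0.01

def penalized_objective(thresholds):
    pruned = prune(weights, thresholds)
    violation = max(0.0, baseline_acc - evaluate_accuracy(pruned) - eps)
    return model_size(pruned) + 1e6 * violation   # large penalty if the accuracy constraint fails

# Simple derivative-free search over thresholds (the paper uses a more powerful DFO method).
best_t = np.full(len(weights), 0.1)
best_f = penalized_objective(best_t)
for _ in range(200):
    cand = np.clip(best_t + rng.normal(scale=0.05, size=best_t.shape), 0.0, 3.0)
    f = penalized_objective(cand)
    if f < best_f:
        best_t, best_f = cand, f

print("best thresholds:", np.round(best_t, 3), "objective:", best_f)
```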


2021 ◽  
Vol 147 ◽  
pp. 107249
Author(s):  
E. A. del Rio Chanona ◽  
P. Petsagkourakis ◽  
E. Bradford ◽  
J. E. Alves Graciano ◽  
B. Chachuat

2021 ◽  
Vol 11 (5) ◽  
pp. 2171
Author(s):  
Bomi Kim ◽  
Taehyeon Kim ◽  
Yoonsik Choe

Incremental learning is a methodology that continuously uses sequential input data to extend an existing network's knowledge. Layer sharing is one of the representative approaches; it leverages general knowledge by sharing some initial layers of the existing network. To determine the performance of the incremental network, it is critical to estimate how many of the initial convolutional layers in the existing network can be shared as fixed feature extractors. However, existing algorithms select the sharing configuration not through a proper optimization strategy but in a brute-force manner, such as searching over all possible sharing-layer configurations. The underlying problem is non-convex and non-differentiable, so it cannot be solved with powerful techniques such as gradient descent or other convex optimization methods, and the brute-force search leads to high computational complexity. To address this, we first formulate layer selection as a discrete combinatorial optimization problem and propose a novel, efficient incremental learning algorithm based on Bayesian optimization, which guarantees global convergence for non-convex, non-differentiable objectives. Additionally, the proposed algorithm can adaptively find the optimal number of sharing layers by adjusting the threshold-accuracy parameter in the proposed loss function. With the proposed method, the globally optimal sharing layer can be found in only six to eight iterations, without searching all possible layer configurations. Hence, the proposed method finds the globally optimal sharing layers by utilizing Bayesian optimization, achieving both high combined accuracy and low computational complexity.
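A minimal sketch of this idea (not the authors' implementation): Bayesian optimization with a Gaussian-process surrogate and expected improvement over the discrete set of candidate sharing depths. `combined_accuracy` is a hypothetical stand-in for training and evaluating the incremental network with the first k layers shared.

```python
# Minimal sketch (not the authors' code): Bayesian optimization over the discrete number of
# shared layers, using a GP surrogate and expected improvement on a small candidate set.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def combined_accuracy(k):
    # Placeholder: in practice, train/evaluate the incremental network sharing the first k layers.
    return 0.95 - 0.02 * abs(k - 5) + 0.005 * np.random.default_rng(k).normal()

candidates = np.arange(1, 13).reshape(-1, 1)          # 12 possible sharing depths
X = [[2], [10]]                                       # initial evaluations
y = [combined_accuracy(2), combined_accuracy(10)]

for _ in range(6):                                    # a handful of BO iterations
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z) # expected improvement (maximization)
    k = int(candidates[np.argmax(ei)][0])
    X.append([k]); y.append(combined_accuracy(k))

print("selected number of shared layers:", X[int(np.argmax(y))][0])
```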


2007 ◽  
Vol 572 ◽  
pp. 13-36 ◽  
Author(s):  
ALISON L. MARSDEN ◽  
MENG WANG ◽  
J. E. DENNIS ◽  
PARVIZ MOIN

Derivative-free optimization techniques are applied in conjunction with large-eddy simulation (LES) to reduce the noise generated by turbulent flow over a hydrofoil trailing edge. A cost function proportional to the radiated acoustic power is derived based on the Ffowcs Williams and Hall solution to Lighthill's equation. Optimization is performed using the surrogate-management framework with filter-based constraints for lift and drag. To make the optimization more efficient, a novel method has been developed to incorporate Reynolds-averaged Navier–Stokes (RANS) calculations for constraint evaluation. Separating the constraint and cost-function computations in this way results in fewer expensive LES computations. This work demonstrates the ability to fully couple optimization to large-eddy simulation for time-accurate turbulent flow. The results show an 89% reduction in noise power, achieved primarily through the elimination of low-frequency vortex shedding. The higher-frequency broadband noise is reduced as well, by a subtle change in the lower surface near the trailing edge.
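The filter-based constraint handling can be illustrated schematically (a simplification, not the authors' surrogate-management solver): a trial design is screened with a cheap constraint evaluation before the expensive cost evaluation, and it enters the filter only if no previously accepted point has both lower cost and lower constraint violation. `rans_constraint_violation` and `les_noise_cost` are hypothetical placeholders.

```python
# Conceptual sketch of filter-based constraint handling with a cheap constraint check
# preceding the expensive cost evaluation (simplified; not the authors' solver).
import numpy as np

def rans_constraint_violation(x):
    # Placeholder: aggregate violation of lift/drag constraints from a fast RANS-like model.
    lift, drag = 1.0 - 0.3 * x[0], 0.02 + 0.01 * x[1] ** 2
    return max(0.0, 0.8 - lift) + max(0.0, drag - 0.03)

def les_noise_cost(x):
    # Placeholder for the expensive LES-based acoustic-power cost.
    return float(np.sum((x - 0.4) ** 2))

filter_set = []                                  # accepted (cost, violation) pairs

def filter_accepts(cost, viol):
    # Accept unless some previously accepted point dominates this one in both measures.
    return not any(c <= cost and v <= viol for c, v in filter_set)

rng = np.random.default_rng(1)
x_best, f_best = np.zeros(2), np.inf
for _ in range(30):
    x = rng.uniform(-1.0, 1.0, size=2)
    viol = rans_constraint_violation(x)          # cheap check first: skip LES if far infeasible
    if viol > 0.5:
        continue
    cost = les_noise_cost(x)
    if filter_accepts(cost, viol):
        filter_set.append((cost, viol))
        if viol == 0.0 and cost < f_best:
            x_best, f_best = x, cost

print("best feasible design:", x_best, "cost:", f_best)
```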


2020 ◽  
Vol 178 ◽  
pp. 65-74
Author(s):  
Ksenia Balabaeva ◽  
Liya Akmadieva ◽  
Sergey Kovalchuk

2021 ◽  
Author(s):  
Faruk Alpak ◽  
Yixuan Wang ◽  
Guohua Gao ◽  
Vivek Jain

Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems, including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to effectively locate multiple local optima of highly nonlinear optimization problems. However, its performance has neither been validated on realistic applications nor compared against other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA).

DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes. Thus, it can locate multiple optima of the objective function in parallel within a single run. Simulation results computed by one DQN optimization thread are shared with the others by updating a unified set of training-data points composed of the responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of the training-data points. The gradient of the objective function is computed analytically from the estimated sensitivities of the implicit variables with respect to the explicit variables. The Hessian matrix is then updated using the quasi-Newton method, and a new search point for each thread is obtained by solving a trust-region subproblem for the next iteration. In contrast, other DFO methods rely on a single-thread optimization paradigm that can only locate a single optimum; to locate multiple optima, the same optimization process must be repeated from different initial guesses, and simulation results generated by a single-thread optimization task cannot be shared with other tasks.

Benchmarking results are presented for synthetic yet challenging WLO and WCO problems, and the DQN method is then field-tested on two realistic applications. DQN identifies the global optimum with the fewest simulations and the shortest run time on a synthetic problem with a known solution. On the other benchmarking problems, which have no known solution, DQN identifies comparable local optima using fewer simulations than the alternative techniques. The field-testing results reinforce the favorable computational attributes of DQN. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development optimization problems.
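One thread's iteration can be sketched as follows (a schematic under simplifying assumptions, not the DQN implementation): shared training points give a local linear fit of the implicit responses, the chain rule yields the objective gradient, a BFGS formula updates the Hessian approximation, and the step is a Newton direction truncated to the trust radius rather than a full trust-region subproblem solve. `simulate` and the objective are hypothetical placeholders.

```python
# Schematic sketch of one optimization thread's iteration (simplified, not the DQN code):
# pooled responses r(x) -> linear fit for dr/dx -> chain-rule gradient -> BFGS Hessian
# update -> Newton step truncated to the trust radius.
import numpy as np

def simulate(x):
    # Hypothetical "reservoir simulation" returning implicit responses (e.g. rates, pressures).
    return np.array([np.sin(x[0]) + x[1] ** 2, x[0] * x[1]])

def dobj_dresp(r):
    # Objective f(r) = r0^2 + 0.5 r1^2, defined analytically in the implicit variables.
    return np.array([2.0 * r[0], r[1]])

def sensitivity(x, X, R):
    # Least-squares linear fit of the shared training data around x: R ~ c0 + (x' - x) C.
    A = np.hstack([np.ones((len(X), 1)), X - x])
    coef = np.linalg.lstsq(A, R, rcond=None)[0]
    return coef[1:].T                              # S = dr/dx, shape (n_responses, n_design_vars)

x = np.array([0.5, 0.5])
train_X = [x.copy(), x + [0.1, 0.0], x + [0.0, 0.1]]   # a few initial simulation jobs
train_R = [simulate(xi) for xi in train_X]
H, radius, g_prev = np.eye(2), 0.2, None

for _ in range(10):
    r = simulate(x)
    S = sensitivity(x, np.array(train_X), np.array(train_R))
    g = S.T @ dobj_dresp(r)                        # chain rule: df/dx = (dr/dx)^T df/dr
    if g_prev is not None:                         # BFGS Hessian update from the previous step
        s_vec, y_vec = x - x_prev, g - g_prev
        if s_vec @ y_vec > 1e-12:
            H = (H - np.outer(H @ s_vec, H @ s_vec) / (s_vec @ H @ s_vec)
                 + np.outer(y_vec, y_vec) / (y_vec @ s_vec))
    p = -np.linalg.solve(H, g)
    if np.linalg.norm(p) > radius:                 # truncate the step to the trust region
        p *= radius / np.linalg.norm(p)
    x_prev, g_prev = x.copy(), g.copy()
    x = x + p
    train_X.append(x.copy()); train_R.append(simulate(x))   # share the new result

r = simulate(x)
print("final design:", x, "objective:", r[0] ** 2 + 0.5 * r[1] ** 2)
```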

