Convergence Rate Evaluation of Derivative-Free Optimization Techniques

Author(s):  
Thomas Lux
2007 · Vol 572 · pp. 13-36
Author(s):
Alison L. Marsden
Meng Wang
J. E. Dennis
Parviz Moin

Derivative-free optimization techniques are applied in conjunction with large-eddy simulation (LES) to reduce the noise generated by turbulent flow over a hydrofoil trailing edge. A cost function proportional to the radiated acoustic power is derived based on the Ffowcs Williams and Hall solution to Lighthill's equation. Optimization is performed using the surrogate-management framework with filter-based constraints for lift and drag. To make the optimization more efficient, a novel method has been developed to incorporate Reynolds-averaged Navier–Stokes (RANS) calculations for constraint evaluation. Separating the constraint and cost-function computations in this way results in fewer expensive LES computations. This work demonstrates the ability to fully couple optimization to large-eddy simulation of time-accurate turbulent flow. The results show an 89% reduction in noise power, achieved primarily through the elimination of low-frequency vortex shedding. The higher-frequency broadband noise is reduced as well, by a subtle shape change on the lower surface near the trailing edge.
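For readers unfamiliar with the surrogate-management framework cited above, the sketch below illustrates its basic SEARCH/POLL alternation: a cheap surrogate proposes candidates, and a positive-spanning poll stencil safeguards progress when the surrogate fails to improve the incumbent. The cost function here is a toy analytic stand-in for the LES-evaluated acoustic power, and all names (expensive_cost, smf_optimize) are hypothetical; this is a minimal sketch of the general framework, not the authors' constrained, filter-based implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_cost(x):
    # Cheap analytic stand-in for the LES-evaluated acoustic-power cost.
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5.0 * x).sum())

def smf_optimize(x0, n_iter=20, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = [x0] + [x0 + 0.2 * rng.standard_normal(x0.size) for _ in range(4)]
    F = [expensive_cost(x) for x in X]
    best, f_best = X[int(np.argmin(F))], min(F)
    for _ in range(n_iter):
        pts, vals = np.array(X), np.array(F)
        _, keep = np.unique(np.round(pts, 10), axis=0, return_index=True)
        surrogate = RBFInterpolator(pts[keep], vals[keep])  # cheap model
        # SEARCH step: pick the best candidate under the surrogate.
        cand = best + step * rng.uniform(-1.0, 1.0, (64, x0.size))
        trial = cand[int(np.argmin(surrogate(cand)))]
        f_trial = expensive_cost(trial)
        X.append(trial); F.append(f_trial)
        if f_trial < f_best:
            best, f_best = trial, f_trial
            continue
        # POLL step: positive-spanning stencil around the incumbent.
        polls = [best + step * d for e in np.eye(x0.size) for d in (e, -e)]
        fp = [expensive_cost(p) for p in polls]
        X += polls; F += fp
        if min(fp) < f_best:
            best, f_best = polls[int(np.argmin(fp))], min(fp)
        else:
            step *= 0.5  # refine the mesh after an unsuccessful poll
    return best, f_best

x_star, f_star = smf_optimize(np.zeros(2))
print(x_star, f_star)
```

In the paper, each cost-function evaluation is a full time-accurate LES, which is why the framework is designed to spend as few true evaluations as possible per iteration.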


Author(s):  
Hanane Khatouri
Tariq Benamara
Piotr Breitkopf
Jean Demange
Paul Feliot

This article addresses the problem of constrained derivative-free optimization in a multi-fidelity (or variable-complexity) framework using Bayesian optimization techniques. It is assumed that the objective and constraints involved in the optimization problem can be evaluated using either an accurate but time-consuming computer program or a fast lower-fidelity one. In this setting, the aim is to solve the optimization problem using as few calls to the high-fidelity program as possible. To this end, it is proposed to use Gaussian process models with trend functions built from the projection of low-fidelity solutions onto a reduced-order basis synthesized from scarce high-fidelity snapshots. A study of the ability of such models to accurately represent the objective and the constraints, and a comparison of two improvement-based infill strategies, are performed on a representative benchmark test case.
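The modeling pattern described above can be sketched as follows: a few high-fidelity snapshots yield a POD (reduced-order) basis, the low-fidelity solution is projected onto that basis to form the trend, and a Gaussian process is fitted to the residual between high-fidelity observations and the trend. The solvers and quantity of interest below (hf_solve, lf_solve, qoi) are toy stand-ins invented for illustration, not the article's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def hf_solve(x):
    # Expensive high-fidelity "solver"; here a toy 1-D field.
    s = np.linspace(0.0, 1.0, 50)
    return np.sin(6.0 * s * x) + 0.3 * x ** 2

def lf_solve(x):
    # Cheap, biased low-fidelity counterpart of hf_solve.
    s = np.linspace(0.0, 1.0, 50)
    return np.sin(6.0 * s * x)

def qoi(field):
    # Scalar objective extracted from a solution field.
    return float(field.mean())

# Reduced-order basis (POD modes) from scarce high-fidelity snapshots.
snapshot_x = [0.1, 0.5, 0.9]
S = np.stack([hf_solve(x) for x in snapshot_x], axis=1)   # 50 x 3
U, _, _ = np.linalg.svd(S, full_matrices=False)

def trend(x):
    # Project the low-fidelity field onto the HF basis, then take the QoI.
    return qoi(U @ (U.T @ lf_solve(x)))

# Gaussian process fitted to the residual between HF data and the trend.
X_train = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y_resid = np.array([qoi(hf_solve(x)) - trend(x) for x in X_train])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-8)
gp.fit(X_train[:, None], y_resid)

def predict(x):
    # Trend-plus-residual surrogate for the high-fidelity objective.
    return trend(x) + float(gp.predict(np.array([[x]]))[0])

print(predict(0.4), qoi(hf_solve(0.4)))
```

The appeal of this construction is that the trend carries most of the physics from the cheap model, so the GP only has to learn a small correction from the scarce high-fidelity data.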


2020 · Vol 178 · pp. 65-74
Author(s):
Ksenia Balabaeva
Liya Akmadieva
Sergey Kovalchuk

2021
Author(s):
Faruk Alpak
Yixuan Wang
Guohua Gao
Vivek Jain

Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems, including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to locate multiple local optima of highly nonlinear optimization problems effectively. However, its performance has neither been validated on realistic applications nor compared to other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA).

DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes and can therefore locate multiple optima of the objective function in parallel within a single run. Simulation results computed by one DQN optimization thread are shared with the others by updating a unified set of training-data points composed of the responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of the training-data points. The gradient of the objective function is computed analytically from the estimated sensitivities of the implicit variables with respect to the explicit variables, the Hessian matrix is updated using the quasi-Newton method, and a new search point for each thread is obtained by solving a trust-region subproblem for the next iteration.

In contrast, the other DFO methods follow a single-thread optimization paradigm that can locate only a single optimum. To find multiple optima with such methods, one must repeat the optimization multiple times from different initial guesses, and simulation results generated by one single-thread task cannot be shared with other tasks.

Benchmarking results are presented for synthetic yet challenging WLO and WCO problems, and the DQN method is then field-tested on two realistic applications. On a synthetic problem with a known solution, DQN identifies the global optimum with the fewest simulations and the shortest run time. On the benchmarking problems without known solutions, DQN identifies comparable local optima with considerably fewer simulations than the alternative techniques. The field-testing results reinforce DQN's favorable computational attributes. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development optimization problems.
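A heavily simplified, single-machine sketch of the DQN mechanics summarized above: several optimization threads share one pool of evaluated points, and each thread estimates its gradient by a local linear fit over that pool, updates a BFGS Hessian approximation, and takes a trust-region-bounded step. The objective and all parameters are illustrative stand-ins for a reservoir-simulation response, not the authors' code.

```python
import numpy as np

def objective(x):
    # Stand-in for an expensive reservoir-simulation response.
    return float(np.sum(x ** 2) + 0.5 * np.sin(3.0 * x).sum())

def local_gradient(x, pool_X, pool_f, k=8):
    # Estimate sensitivities by a least-squares linear fit through the
    # k nearest points of the shared training pool.
    d = np.linalg.norm(pool_X - x, axis=1)
    idx = np.argsort(d)[:k]
    A = np.hstack([pool_X[idx] - x, np.ones((len(idx), 1))])
    coef, *_ = np.linalg.lstsq(A, pool_f[idx], rcond=None)
    return coef[:-1]

def bfgs_update(H, s, y):
    # Standard BFGS Hessian update with a curvature safeguard.
    if s @ y > 1e-12:
        Hs = H @ s
        H = H + np.outer(y, y) / (s @ y) - np.outer(Hs, Hs) / (s @ Hs)
    return H

def dqn(n_threads=4, dim=2, iters=30, radius=0.5, seed=2):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-2.0, 2.0, (n_threads, dim))   # one start per thread
    pool_X = xs.copy()
    pool_f = np.array([objective(x) for x in xs])
    hess = [np.eye(dim) for _ in range(n_threads)]
    for _ in range(iters):
        for t in range(n_threads):
            g = local_gradient(xs[t], pool_X, pool_f)
            p = -np.linalg.solve(hess[t], g)
            if np.linalg.norm(p) > radius:          # trust-region bound
                p *= radius / np.linalg.norm(p)
            x_new = xs[t] + p
            f_new = objective(x_new)
            # Every thread shares its new point via the common pool.
            pool_X = np.vstack([pool_X, x_new])
            pool_f = np.append(pool_f, f_new)
            y = local_gradient(x_new, pool_X, pool_f) - g
            hess[t] = bfgs_update(hess[t], x_new - xs[t], y)
            if f_new <= objective(xs[t]):
                xs[t] = x_new
    return xs  # each row approximates a (possibly distinct) local optimum

print(dqn())
```

The shared pool is what distinguishes this paradigm from restarting a single-thread DFO method several times: every evaluation made by any thread immediately improves the gradient estimates of all the others.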

