Multifidelity Optimization
Recently Published Documents


TOTAL DOCUMENTS: 21 (five years: 8)

H-INDEX: 7 (five years: 1)

2021
Author(s): John Jasa, Pietro Bortolotti, Daniel Zalkind, Garrett Barter

Abstract. Wind turbines are complex multidisciplinary systems that are challenging to design because of the tightly coupled interactions between different subsystems. Computational modeling attempts to resolve these couplings so we can efficiently explore new wind turbine systems early in the design process. Low-fidelity models are computationally efficient but make assumptions and simplifications that limit the accuracy of design studies, whereas high-fidelity models capture more of the actual physics but with increased computational cost. This paper details the use of multifidelity methods for optimizing wind turbine designs by using information from both low- and high-fidelity models to find an optimal solution at reduced cost. Specifically, a trust-region approach is used with a novel corrective function built from a nonlinear surrogate model. We find that for a diverse set of design problems—with examples given in rotor blade geometry design, wind turbine controller design, and wind power plant layout optimization—the multifidelity method finds the optimal design using 38 %–58 % of the computational cost of the high-fidelity-only optimization. The success of the multifidelity method in these disparate applications suggests that it could be applied more broadly to other wind energy problems and to engineering design applications in general.
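
The basic mechanics of such a corrective trust-region loop can be sketched compactly. The toy Python example below is a minimal sketch under stated assumptions: `hi_fi` and `lo_fi` are hypothetical one-dimensional stand-ins, and a simple additive correction stands in for the paper's nonlinear surrogate correction. Each iteration minimizes the corrected low-fidelity model inside the trust region, then accepts the step and resizes the region based on how well the predicted improvement matches the true high-fidelity improvement.

```python
import numpy as np
from scipy.optimize import minimize

def hi_fi(x):   # expensive "truth" model (toy stand-in)
    return np.sin(8.0 * x[0]) + x[0] ** 2

def lo_fi(x):   # cheap approximation with systematic error
    return np.sin(8.0 * x[0]) + 0.8 * x[0] ** 2 + 0.1

def trust_region_mfo(x0, radius=0.5, n_iter=20, eta=1e-4):
    x = np.asarray(x0, dtype=float)
    f_hi = hi_fi(x)
    for _ in range(n_iter):
        # Additive correction so the cheap model matches hi_fi at the center.
        corr = f_hi - lo_fi(x)
        surrogate = lambda s, c=corr: lo_fi(s) + c
        # Minimize the corrected low-fidelity model inside the trust region.
        res = minimize(surrogate, x,
                       bounds=[(xi - radius, xi + radius) for xi in x])
        f_new = hi_fi(res.x)
        predicted = f_hi - surrogate(res.x)   # decrease the surrogate promises
        actual = f_hi - f_new                 # decrease hi_fi actually delivers
        rho = actual / predicted if abs(predicted) > 1e-12 else 0.0
        if rho > eta:                         # accept the step
            x, f_hi = res.x, f_new
        if rho > 0.75:                        # surrogate is trustworthy: grow
            radius *= 2.0
        elif rho < 0.25:                      # surrogate misled us: shrink
            radius *= 0.5
    return x, f_hi

print(trust_region_mfo([1.0]))
```

Because every accepted step is verified against the high-fidelity model, the loop only pays for one expensive evaluation per iteration while the inner search runs entirely on the cheap model.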


2021
Vol 143 (8)
Author(s): Brian Chell, Steven Hoffenson, Cory J. G. Philippe, Mark R. Blackburn

Abstract Multifidelity optimization combines the fast run times of low-fidelity models with the accuracy of high-fidelity models (HFMs) to conserve computing resources while still reaching optimal solutions. This work focuses on the multifidelity multidisciplinary optimization of an aircraft system model with finite element analysis and computational fluid dynamics simulations in the loop. A two-step filtering method is used in which a lower-fidelity model is optimized first, and its solution is then used as a starting point for a higher-fidelity optimization routine. Because the high-fidelity routine starts in a nearly optimal region of the design space, the computing resources required for optimization are expected to decrease when local algorithms are used. Results show that, when surrogates are used for the lower-fidelity models, the multifidelity workflows save statistically significant amounts of time compared with optimizing the original HFM alone. However, the impact on solution quality varies depending on the model behavior and the optimization algorithm.
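
The two-step filtering idea is straightforward to sketch. In the minimal Python example below, `low_fi` and `high_fi` are hypothetical stand-ins (not the paper's FEA/CFD aircraft model): a multistart search on the cheap model filters the design space, and its best solution warm-starts a single local, gradient-based run on the expensive model.

```python
import numpy as np
from scipy.optimize import minimize

def high_fi(x):   # expensive model (toy placeholder)
    return (x[0] - 1.2) ** 2 + 0.5 * np.sin(5 * x[0]) + (x[1] + 0.3) ** 2

def low_fi(x):    # cheap surrogate that roughly tracks high_fi
    return (x[0] - 1.0) ** 2 + x[1] ** 2

# Step 1: multistart search on the cheap model filters the design space.
starts = np.random.default_rng(0).uniform(-2.0, 2.0, size=(8, 2))
lo_best = min((minimize(low_fi, s) for s in starts), key=lambda r: r.fun)

# Step 2: local, gradient-based refinement of the expensive model,
# warm-started at the low-fidelity optimum.
hi_res = minimize(high_fi, lo_best.x, method="BFGS")
print(hi_res.x, hi_res.fun)
```

The expensive model is touched only in step 2, so the savings hinge on the low-fidelity optimum landing in the high-fidelity optimum's basin of attraction, which is exactly where the abstract's caveat about solution quality comes from.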


SPE Journal
2021
pp. 1-22
Author(s): Faliang Yin, Xiaoming Xue, Chengze Zhang, Kai Zhang, Jianfa Han, ...

Summary Production optimization led by computational intelligence can greatly improve oilfield economic effectiveness. However, it faces a huge computational challenge because of the expensive black-box objective function and the high-dimensional design variables. Many low-fidelity methods based on simplified physical models or data-driven models have been proposed to reduce evaluation costs. These methods can approximate the global fitness landscape to a certain extent, but it is difficult to ensure their accuracy and correlation in local regions. Multifidelity methods have been proposed to balance the advantages of the two, but most current methods rely on complex computational models. Through a simple but efficient shortcut, our work aims to establish a novel production-optimization framework using genetic transfer learning to accelerate convergence and improve the quality of the optimal solution by exploiting results from different fidelities. Net present value (NPV) is a widely used standard to comprehensively evaluate the economic value of a strategy in production optimization. On the basis of NPV, we first establish a multifidelity optimization model that can synthesize the reference information from high-fidelity tasks and the approximate results from low-fidelity tasks. We then introduce the concept of relative fidelity as an indicator for quantifying the dynamic reliability of low-fidelity methods, and we further propose a two-mode multifidelity genetic transfer learning framework that balances computing resources across tasks with different fidelity levels. The multitasking mode takes the elite solution as the transfer medium and forms a closed-loop feedback system through the information exchange between low- and high-fidelity tasks running in parallel. The sequential transfer mode, a one-way algorithm, transfers the elite solutions archived in the previous mode as the population to the high-fidelity domain for further optimization. This framework is suitable for population-based optimization algorithms with variable search direction and step size. The core work of this paper is to realize the framework by means of differential evolution (DE), for which we propose the multifidelity transfer differential evolution (MTDE) algorithm. Corresponding to multitasking and sequential transfer in the framework, MTDE includes two modes: transfer based on the base vector (b-transfer) and transfer based on the population (p-transfer). The b-transfer mode incorporates the unique advantages of DE into fidelity switching, whereas the p-transfer mode adaptively passes the population to the high-fidelity domain for further local search. Finally, the production-optimization performance of MTDE is validated on the egg model and two real field cases, in which the black-oil and streamline models are used to obtain the high- and low-fidelity results, respectively. We also compare the convergence curves and optimization results with a single-fidelity method and a greedy multifidelity method. The results show that the proposed algorithm achieves a faster convergence rate and a higher-quality well-control strategy. The adaptive capacity of p-transfer is also demonstrated in three distinct cases. At the end of the paper, we discuss the generalization potential of the proposed framework.
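
As a rough illustration of fidelity switching inside DE (a heavily simplified sketch, not the paper's full MTDE; `low_npv` and `high_npv` are hypothetical stand-ins for the streamline and black-oil simulators), the Python example below evolves the population on the cheap objective and periodically re-ranks the front-runners on the expensive one, using the winner as the base vector for mutation, loosely echoing the b-transfer idea.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_npv(x):  # expensive objective (stand-in; minimize negative NPV)
    return np.sum((x - 0.6) ** 2) + 0.05 * np.sin(20 * x).sum()

def low_npv(x):   # cheap, biased approximation
    return np.sum((x - 0.5) ** 2)

dim, pop_size, F, CR = 5, 20, 0.7, 0.9
pop = rng.uniform(0.0, 1.0, (pop_size, dim))
fit = np.array([low_npv(p) for p in pop])

for gen in range(100):
    if gen % 10 == 0:
        # Fidelity switch: re-rank the current low-fidelity front-runners
        # on the expensive model; the winner becomes the mutation base.
        idx = np.argsort(fit)[:3]
        elite = min((pop[j] for j in idx), key=high_npv)
    for i in range(pop_size):
        b, c = pop[rng.choice(pop_size, 2, replace=False)]
        mutant = elite + F * (b - c)              # DE/best/1-style mutation
        cross = rng.random(dim) < CR
        trial = np.where(cross, mutant, pop[i]).clip(0.0, 1.0)
        f_trial = low_npv(trial)
        if f_trial < fit[i]:                      # greedy selection
            pop[i], fit[i] = trial, f_trial

best = min(pop, key=high_npv)                     # final high-fidelity check
print(best)
```

The budget split is the design lever here: the expensive simulator is called only a handful of times per fidelity switch, while the bulk of the evolutionary search runs on the cheap objective.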


Author(s): Mingyang Li, Zequn Wang

Abstract Most existing reliability-based design optimization (RBDO) approaches are not capable of analyzing data from multifidelity sources to improve the confidence of the optimal solution while maintaining computational efficiency. In this paper, we propose a novel reliability-based multifidelity optimization (RBMO) framework that adaptively integrates both low- and high-fidelity data to achieve reliable optimal designs. The Gaussian process (GP) modeling technique is first utilized to build a hybrid surrogate model by fusing data sources with different fidelity levels. To reduce the number of low- and high-fidelity samples required, an adaptive hybrid learning (AHL) algorithm is then developed to efficiently update the hybrid model. The updated hybrid surrogate model is used for reliability and sensitivity analyses in solving an RBDO problem, which provides a pseudo-optimal solution in the RBMO framework. An optimal solution that meets the reliability targets is achieved by sequentially performing adaptive hybrid learning at the iterative pseudo-optimal designs and solving the resulting RBDO problems. The effectiveness of the proposed framework is demonstrated through three case studies.
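
A minimal way to fuse two fidelity levels into one hybrid GP surrogate is sketched below, in the spirit of Kennedy–O'Hagan co-kriging rather than the paper's specific AHL algorithm: one GP is fit to plentiful low-fidelity data and a second GP to the scaled high-fidelity residuals. The functions and the fixed scale factor `rho` are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_low(x):  return 0.5 * np.sin(8 * x) + 0.2 * x     # cheap model
def f_high(x): return np.sin(8 * x) + 0.1 * x ** 2      # expensive model

X_lo = np.linspace(0, 1, 25)[:, None]                   # many cheap runs
X_hi = np.linspace(0, 1, 5)[:, None]                    # few expensive runs

# GP over the abundant low-fidelity data.
gp_lo = GaussianProcessRegressor(RBF(0.2)).fit(X_lo, f_low(X_lo.ravel()))

# GP over the discrepancy between high fidelity and the scaled low-fidelity
# prediction (rho is assumed fixed here; co-kriging would estimate it).
rho = 2.0
resid = f_high(X_hi.ravel()) - rho * gp_lo.predict(X_hi)
gp_d = GaussianProcessRegressor(RBF(0.3)).fit(X_hi, resid)

def hybrid(x):
    """Fused prediction: scaled low-fidelity GP plus discrepancy GP."""
    x = np.atleast_2d(x)
    return rho * gp_lo.predict(x) + gp_d.predict(x)

print(hybrid([[0.37]]))
```

In an adaptive scheme like AHL, new low- or high-fidelity samples would be added where the hybrid model is least trustworthy (for example, near the estimated limit state) and the two GPs refit, which is the loop the abstract describes.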


Author(s): Brian Chell, Steven Hoffenson, Mark R. Blackburn

Abstract Multifidelity optimization combines the fast run times of low-fidelity models with the accuracy of high-fidelity models to conserve computing resources while still reaching optimal solutions. This work focuses on the multidisciplinary multifidelity optimization of an unmanned aerial system model with finite element analysis and computational fluid dynamics simulations in the loop. A two-step process is used in which the lower-fidelity models are optimized first, and the resulting optimum is then used as a starting point for the higher-fidelity optimization. By starting the high-fidelity optimization routine in a nearly optimal region of the design space, the computing resources required for optimization are expected to decrease when gradient-based algorithms are used. Results show that, at least in some cases, the multifidelity workflows save time over optimizing the original high-fidelity model alone. However, on this test problem, the model management strategy did not yield statistically significant differences between the optimization approaches.


2019
Vol 56 (2)
pp. 442-456
Author(s): Thomas A. Reist, David W. Zingg, Mark Rakowitz, Graham Potter, Sid Banerjee

2018
Vol 22 (6)
pp. 836-850
Author(s): Handing Wang, Yaochu Jin, John Doherty
