A New Variable Fidelity Optimization Framework Based on Model Fusion and Objective-Oriented Sequential Sampling

Author(s):  
Ying Xiong ◽  
Wei Chen ◽  
Kwok-Leung Tsui

Computational models with variable fidelity have been widely used in engineering design. To alleviate the computational burden, surrogate models are used for optimization without recourse to expensive high-fidelity simulations. In this work, a model fusion technique based on Bayesian Gaussian process modeling is employed to construct cheap surrogate models that integrate information from both low-fidelity and high-fidelity models, while the interpolation uncertainty of the surrogate model due to the lack of sufficient high-fidelity simulations is quantified. In contrast to space filling, the sequential sampling of a high-fidelity simulation model in our proposed framework is objective-oriented, aiming to improve a design objective. A strategy based on a periodic switching criterion is studied and shown to be effective in guiding the sequential sampling of a high-fidelity model towards improving a design objective as well as reducing the interpolation uncertainty. A design confidence (DC) metric is proposed to serve as the stopping criterion and to facilitate design decision making under interpolation uncertainty. Numerical and engineering examples are provided to demonstrate the benefits of the proposed methodology.

2008 ◽  
Vol 130 (11) ◽  
Author(s):  
Ying Xiong ◽  
Wei Chen ◽  
Kwok-Leung Tsui

Computational models with variable fidelity have been widely used in engineering design. To alleviate the computational burden, surrogate models are used for optimization without directly invoking expensive high-fidelity simulations. In this work, a model fusion technique based on Bayesian Gaussian process modeling is employed to construct cheap surrogate models that integrate information from both low-fidelity and high-fidelity models, while the interpolation uncertainty of the surrogate model due to the lack of sufficient high-fidelity simulations is quantified. In contrast to space filling, the sequential sampling of a high-fidelity simulation model in our proposed framework is objective-oriented, aiming to improve a design objective. A strategy based on a periodic switching criterion is studied and shown to be effective in guiding the sequential sampling of a high-fidelity model toward improving a design objective as well as reducing the interpolation uncertainty. A design confidence metric is proposed as the stopping criterion to facilitate design decision making under interpolation uncertainty. Examples are provided to illustrate the key ideas and features of model fusion, sequential sampling, and design confidence, the three key elements in the proposed variable-fidelity optimization framework.
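The fusion-plus-sampling loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual formulation: the test functions `f_lo`/`f_hi`, the fixed kernel length scale, the known scale factor `rho`, and the lower-confidence-bound sampling criterion are all assumptions chosen for compactness.

```python
import numpy as np

def f_lo(x):   # cheap low-fidelity model (hypothetical test function)
    return np.sin(8.0 * x)

def f_hi(x):   # expensive high-fidelity model (hypothetical)
    return np.sin(8.0 * x) + 0.3 * x

def sq_exp(a, b, length=0.3):
    """Squared-exponential kernel with a fixed, hand-picked length scale."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Fusion: y_hi(x) ~ rho * y_lo(x) + delta(x), with delta a zero-mean GP.
X_hi = np.array([0.0, 0.5, 1.0])        # scarce high-fidelity samples
rho = 1.0                                # scale factor (assumed known here)
d_obs = f_hi(X_hi) - rho * f_lo(X_hi)    # observed discrepancy
K = sq_exp(X_hi, X_hi) + 1e-8 * np.eye(len(X_hi))
alpha = np.linalg.solve(K, d_obs)

Xq = np.linspace(0.0, 1.0, 201)
Ks = sq_exp(Xq, X_hi)
d_mean = Ks @ alpha                      # posterior mean of the discrepancy
d_var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
d_var = np.maximum(d_var, 0.0)           # interpolation uncertainty

y_fused = rho * f_lo(Xq) + d_mean        # fused surrogate prediction

# Objective-oriented sampling: minimize a lower confidence bound, trading a
# good predicted objective value against high interpolation uncertainty.
lcb = y_fused - 2.0 * np.sqrt(d_var)
x_next = Xq[np.argmin(lcb)]              # next high-fidelity sample location
```

The fused surrogate interpolates the high-fidelity data exactly, and the next sample lands where the predicted objective and the interpolation uncertainty jointly favor improvement rather than pure space filling.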


Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This can make high-fidelity models undesirable due to the increased computational complexity and simulation time they require. As a solution, lower-fidelity or simplified models are used for initial designs before controllers are tested on higher-fidelity models. In the event that unmodeled dynamics cause the controller to fail when applied to a higher-fidelity model, an iterative approach of redesigning and revalidating the controller may be required. In this paper, a switched-fidelity modeling formulation for closed-loop dynamical systems is proposed to reduce computational effort while maintaining high accuracy in system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high- and low-fidelity models of the evaporator and condenser. This sample case showed the ability of the switched-fidelity framework to closely match the outputs and inputs of the high-fidelity model while decreasing computational cost by 32% relative to the high-fidelity model. For contrast, the low-fidelity model decreases computational cost by 48% relative to the high-fidelity model.
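A toy version of the switching idea can be written directly. These are not the paper's vapor-compression models: the two plant models, the proportional controller gain, and the error threshold that triggers the fidelity switch are all invented for illustration, with the high-fidelity model resolving a fast actuator lag that the low-fidelity model neglects.

```python
import numpy as np

def step_high(state, u, dt):
    """High-fidelity step: two states, including a fast actuator lag
    (both models are invented stand-ins, not the paper's components)."""
    T, T_act = state
    T_act = T_act + dt * (u - T_act) / 0.05   # fast actuator dynamics
    T = T + dt * (T_act - 0.5 * T)
    return np.array([T, T_act])

def step_low(state, u, dt):
    """Low-fidelity step: actuator lag neglected entirely."""
    T, _ = state
    T = T + dt * (u - 0.5 * T)
    return np.array([T, u])

# Closed loop with a proportional controller; switch fidelity on the
# tracking error: transients use the high-fidelity model, quasi-steady
# operation falls back to the cheap model.
setpoint, Kp, dt = 1.0, 2.0, 0.01
state = np.array([0.0, 0.0])
high_steps = 0
for _ in range(2000):
    err = setpoint - state[0]
    u = Kp * err
    if abs(err) > 0.25:              # transient: resolve fast dynamics
        state = step_high(state, u, dt)
        high_steps += 1
    else:                            # near steady state: cheap model
        state = step_low(state, u, dt)
```

Only the initial transient pays the high-fidelity cost; the long quasi-steady tail of the simulation runs on the simplified model while the output settles to the same closed-loop steady state.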


Author(s):  
Gilberto Mejía-Rodríguez ◽  
John E. Renaud ◽  
Vikas Tomar

Research applications involving design tool development for multiphase material design are at an early stage. The computational requirements of advanced numerical tools for simulating material behavior, such as the finite element method (FEM) and the molecular dynamics (MD) method, can prohibit direct integration of these tools in a design optimization procedure where multiple iterations are required. The complexity of multiphase material behavior at multiple scales restricts the development of a comprehensive meta-model that can replace the multiscale analysis. One therefore requires a design approach that can incorporate multiple simulations (multiphysics) of varying fidelity, such as FEM and MD, in an iterative model management framework that can significantly reduce design cycle times. In this research, a material design tool based on a variable fidelity model management framework is presented. In the variable fidelity material design tool, complex “high-fidelity” FEM analyses are performed only to guide the analytic “low-fidelity” model toward the optimal material design. The tool is applied to obtain the optimal distribution of a second phase, consisting of silicon carbide (SiC) fibers, in a silicon nitride (Si3N4) matrix to obtain continuous fiber SiC-Si3N4 ceramic composites (CFCCs) with optimal fracture toughness. Using the variable fidelity material design tool on one test problem, a reduction in design cycle time of around 80% is achieved compared to a conventional design optimization approach that exclusively calls the high-fidelity FEM.


2008 ◽  
Vol 130 (9) ◽  
Author(s):  
Gilberto Mejía-Rodríguez ◽  
John E. Renaud ◽  
Vikas Tomar

Research applications involving design tool development for multiphase material design are at an early stage of development. The computational requirements of advanced numerical tools for simulating material behavior, such as the finite element method (FEM) and the molecular dynamics (MD) method, can prohibit direct integration of these tools in a design optimization procedure where multiple iterations are required. One therefore requires a design approach that can incorporate multiple simulations (multiphysics) of varying fidelity, such as FEM and MD, in an iterative model management framework that can significantly reduce design cycle times. In this research, a material design tool based on a variable fidelity model management framework is presented. In the variable fidelity material design tool, complex “high-fidelity” FEM analyses are performed only to guide the analytic “low-fidelity” model toward the optimal material design. The tool is applied to obtain the optimal distribution of a second phase, consisting of silicon carbide (SiC) fibers, in a silicon nitride (Si3N4) matrix to obtain continuous fiber SiC–Si3N4 ceramic composites with optimal fracture toughness. Using the variable fidelity material design tool on two test problems, a reduction in design cycle times of between 40% and 80% is achieved compared to a conventional design optimization approach that exclusively calls the high-fidelity FEM. The optimal design obtained using the variable fidelity approach is the same as that obtained using the conventional procedure. The variable fidelity material design tool is extensible to multiscale multiphase material design by using MD-based material performance analyses as the high-fidelity analyses to guide low-fidelity continuum-level numerical tools such as the FEM or the finite-difference method, with significant savings in computational time.
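The "high fidelity guides low fidelity" management loop can be sketched with a standard first-order additive correction in a trust region. This is a generic illustration of that class of frameworks, not the authors' exact algorithm: the 1D toy objectives, trust-region radius, and accept/shrink rule are all assumptions.

```python
import numpy as np

def f_hi(x):  # expensive "high-fidelity" analysis (toy stand-in for FEM)
    return (x - 1.0) ** 2 + 0.1 * np.sin(5.0 * x)

def f_lo(x):  # cheap analytic "low-fidelity" model
    return (x - 1.2) ** 2

def grad(f, x, h=1e-5):
    """Central-difference gradient of a 1D function."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# First-order additive correction: at each iterate xk build
#   m(x) = f_lo(x) + a + b * (x - xk)
# so that m matches f_hi in value and slope at xk, then minimize m inside
# a trust region; the high-fidelity model is called only to accept/reject.
xk, radius = 3.0, 0.5
for _ in range(30):
    a = f_hi(xk) - f_lo(xk)                  # value mismatch at xk
    b = grad(f_hi, xk) - grad(f_lo, xk)      # slope mismatch at xk
    cand = np.linspace(xk - radius, xk + radius, 101)
    m = f_lo(cand) + a + b * (cand - xk)     # corrected low-fidelity model
    x_new = cand[np.argmin(m)]
    if f_hi(x_new) < f_hi(xk):               # improvement confirmed: accept
        xk = x_new
    else:                                    # no improvement: shrink region
        radius *= 0.5
```

Most of the search is driven by the corrected cheap model; each iteration spends only a couple of high-fidelity evaluations, which is where the reported design-cycle savings come from.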


Author(s):  
David J. J. Toal

Traditional multi-fidelity surrogate models require that the output of the low-fidelity model be reasonably well correlated with that of the high-fidelity model, and they predict only scalar responses. The following paper explores the potential of a novel multi-fidelity surrogate modelling scheme employing Gappy Proper Orthogonal Decomposition (G-POD), which is demonstrated to accurately predict the response of the entire computational domain, thus improving optimization and uncertainty quantification performance over both traditional single- and multi-fidelity surrogate modelling schemes.
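The core gappy-POD mechanism, predicting a whole field from a handful of sampled values via a snapshot basis, fits in a short sketch. Everything here is illustrative rather than taken from the paper: the sine-mode snapshots stand in for cheap low-fidelity solutions, and the 20 "measured" entries stand in for sparse high-fidelity data.

```python
import numpy as np

# Gappy POD sketch: build a basis from cheap snapshot data, then reconstruct
# a full-field response from a few sampled entries by least squares.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(5)])
field = snapshots @ np.array([1.0, 0.5, 0.0, 0.2, 0.0])   # unseen "truth"

U, _, _ = np.linalg.svd(snapshots, full_matrices=False)    # POD basis (rank 5)

mask = rng.choice(200, size=20, replace=False)             # sparse sample points
coeffs, *_ = np.linalg.lstsq(U[mask], field[mask], rcond=None)
field_rec = U @ coeffs                                     # whole-domain prediction
```

Because the new field lies in the span of the snapshot basis, 20 point samples suffice to recover all 200 degrees of freedom, which is what lets the scheme return entire-domain responses instead of a single scalar.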


Author(s):  
Roxanne A. Moore ◽  
Christiaan J. J. Paredis

Modeling, simulation, and optimization play vital roles throughout the engineering design process; however, in many design disciplines the cost of simulation is high, and designers are faced with a tradeoff between the number of alternatives that can be evaluated and the accuracy with which they are evaluated. In this paper, a methodology is presented for using models of various levels of fidelity during the optimization process. The intent is to use inexpensive, low-fidelity models with limited accuracy to recognize poor design alternatives and to reserve the high-fidelity, accurate, but also expensive models for characterizing only the best alternatives. Specifically, by setting a user-defined performance threshold, the optimizer can explore the design space using a low-fidelity model by default and switch to a higher-fidelity model only if the performance threshold is attained. In this manner, the high-fidelity model is used only to discern the best solution from the set of good solutions, so computational resources are conserved until the optimizer is close to the solution. This makes the optimization process more efficient without sacrificing the quality of the solution. The method is illustrated by optimizing the trajectory of a hydraulic backhoe. To characterize the robustness and efficiency of the method, a design space exploration is performed using both the low- and high-fidelity models, and the optimization problem is solved multiple times using the variable fidelity framework.
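The threshold-switching rule described above is simple enough to show directly. This is a minimal sketch, not the backhoe application: the two analytic models, the threshold value, and the random-search driver are assumptions standing in for the real low- and high-fidelity simulations.

```python
import random

def f_low(x):   # cheap, slightly biased model (hypothetical)
    return (x - 2.0) ** 2 + 0.5

def f_high(x):  # expensive, accurate model (hypothetical)
    return (x - 2.1) ** 2

THRESHOLD = 1.0   # user-defined performance threshold
high_calls = 0

def evaluate(x):
    """Screen with the low-fidelity model by default; pay for the
    high-fidelity model only when the candidate beats the threshold."""
    global high_calls
    y = f_low(x)
    if y <= THRESHOLD:          # promising region: accuracy matters here
        high_calls += 1
        return f_high(x)
    return y                    # poor design: cheap estimate suffices

# Simple random-search driver over the design interval [0, 5].
random.seed(0)
best_x, best_y = None, float("inf")
for _ in range(500):
    x = random.uniform(0.0, 5.0)
    y = evaluate(x)
    if y < best_y:
        best_x, best_y = x, y
```

Only candidates whose cheap estimate clears the threshold trigger an expensive call, so the high-fidelity model is consulted for a fraction of the 500 evaluations while the search still converges on the accurate optimum near x = 2.1.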


2022 ◽  
Vol 7 (01) ◽  
pp. 31-51
Author(s):  
Tanya Peart ◽  
Nicolas Aubin ◽  
Stefano Nava ◽  
John Cater ◽  
Stuart Norris

Velocity Prediction Programs (VPPs) are commonly used to help predict and compare the performance of different sail designs. A VPP requires an aerodynamic input force matrix, which can be computationally expensive to calculate, limiting its application in industrial sail design projects. The use of multi-fidelity kriging surrogate models has previously been presented by the authors to reduce this cost, with high-fidelity data for a new sail being modelled and the low-fidelity data provided by data from existing, but different, sail designs. The difference in fidelity is due not to the simulation method used to obtain the data, but to how similar a sail's geometry is to the new sail design. An important consideration in constructing these models is the choice of low-fidelity data points, which provide information about the trend of the model curve between the high-fidelity data. A method is therefore required to select the best existing sail design to use as low-fidelity data when constructing a multi-fidelity model. The suitability of an existing sail design as a low-fidelity model can be evaluated based on the similarity of its geometric parameters with those of the new sail. It is shown here that for upwind jib sails, the similarity of the broadseam between the two sails best indicates the ability of a design to be used as low-fidelity data for a lift coefficient surrogate model. For most points, the lift coefficient surrogate error predicted by the regression is shown to be within about 1% of the actual surrogate error. Larger discrepancies are observed for the drag coefficient surrogate error regression.


2018 ◽  
Vol 9 ◽  
pp. 117959721879025
Author(s):  
Elsje Pienaar

Rare events such as genetic mutations or cell-cell interactions are important contributors to dynamics in complex biological systems, e.g., in drug-resistant infections. Computational approaches can help analyze rare events that are difficult to study experimentally. However, analyzing the frequency and dynamics of rare events in computational models can also be challenging due to high computational resource demands, especially for high-fidelity stochastic computational models. To facilitate analysis of rare events in complex biological systems, we present a multifidelity analysis approach that uses medium-fidelity analysis (Monte Carlo simulations) and/or low-fidelity analysis (Markov chain models) to analyze high-fidelity stochastic model results. Medium-fidelity analysis can produce large numbers of possible rare event trajectories for a single high-fidelity model simulation. This allows prediction of both rare event dynamics and probability distributions at much lower frequencies than high-fidelity models. Low-fidelity analysis can calculate probability distributions for rare events over time for any frequency by updating the probabilities of the rare event state space after each discrete event of the high-fidelity model. To validate the approach, we apply multifidelity analysis to a high-fidelity model of tuberculosis disease. We validate the method against high-fidelity model results and illustrate the application of multifidelity analysis in predicting rare event trajectories, performing sensitivity analyses and extrapolating predictions to very low frequencies in complex systems. We believe that our approach will complement ongoing efforts to enable accurate prediction of rare event dynamics in high-fidelity computational models.
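The low-fidelity Markov chain update described above can be sketched as propagating an exact probability distribution over a small rare-event state space after each discrete high-fidelity event. This is not the tuberculosis model from the paper: the mutation probability, the capped state space, and the event count are all invented for illustration.

```python
import numpy as np

# Low-fidelity rare-event analysis: propagate a probability distribution over
# a capped mutant-count state space through each discrete replication event
# reported by a (hypothetical) high-fidelity simulation.
mu = 1e-5                  # per-event mutation probability (illustrative)
n_states = 4               # 0, 1, 2, or 3+ resistant lineages
P = np.zeros((n_states, n_states))
for i in range(n_states - 1):
    P[i, i] = 1.0 - mu     # no new mutant in this event
    P[i, i + 1] = mu       # one new mutant arises
P[-1, -1] = 1.0            # absorbing cap at 3+

p = np.zeros(n_states)
p[0] = 1.0                 # start with no resistant mutants
n_events = 5000            # replication events from the high-fidelity run
for _ in range(n_events):
    p = p @ P              # exact update after each discrete event

prob_any_mutant = 1.0 - p[0]
```

Because the update is exact, the distribution remains valid at arbitrarily low rare-event frequencies, which is exactly where direct stochastic simulation of the high-fidelity model becomes prohibitively expensive.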


Author(s):  
Ji Miao ◽  
Chunlin Gong ◽  
Chunna Li

An efficient aerodynamic design optimization method is of great value for improving the aerodynamic performance of a small UAV's airfoil. Using engineering or semi-engineering estimation methods to analyze aerodynamic forces in aerodynamic optimization problems costs little computational time, but their accuracy cannot be guaranteed. CFD methods, in contrast, ensure high accuracy but require much greater computational cost, which is unaffordable for optimization. Surrogate-based optimization can reduce the number of high-fidelity analyses and thus increase optimization efficiency. However, the cost of CFD analyses remains huge for aerodynamic optimization due to the many design variables, multiple local optima, and strong nonlinearities. To solve this problem, a two-stage aerodynamic optimization method based on early termination of CFD convergence and a variable-fidelity model is proposed. In the first stage, solutions obtained by early termination of CFD convergence and fully converged CFD solutions are treated as low- and high-fidelity data, respectively, for building the variable-fidelity model. A multi-island genetic algorithm is then used for global optimization based on the built variable-fidelity model. Modeling efficiency can be greatly improved thanks to the many cheap low-fidelity data. In the second stage, the global optimum from the first stage is used as the starting point of the Hooke-Jeeves algorithm, which searches locally using fully converged CFD computations to acquire a better optimum. The proposed method is applied to optimizing the aerodynamic performance of a small UAV's airfoil and is compared with the EGO method based on a single-fidelity kriging surrogate model. The results show that the present two-stage aerodynamic optimization method consumes less time.
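The local stage named above, Hooke-Jeeves pattern search, can be sketched as follows. This is a simplified exploratory-move variant on a toy quadratic objective standing in for converged CFD evaluations; the starting point and step parameters are assumptions, not values from the paper.

```python
def f(x):  # toy objective standing in for a converged CFD evaluation
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Simplified Hooke-Jeeves search: exploratory moves along each axis,
    halving the step size whenever no move improves the objective."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:          # keep the first improving axis move
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink           # refine the search around x
    return x, fx

# Stage 2: refine the stage-1 global optimum (here a rough guess) locally.
x_opt, f_opt = hooke_jeeves(f, [2.0, 1.0])
```

Being derivative-free, the search needs only objective values, which suits expensive black-box CFD evaluations where gradients are unavailable or noisy.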


2016 ◽  
Vol 809 ◽  
pp. 895-917 ◽  
Author(s):  
H. Babaee ◽  
P. Perdikaris ◽  
C. Chryssostomidis ◽  
G. E. Karniadakis

For thermal mixed-convection flows, the Nusselt number is a function of Reynolds number, Grashof number and the angle between the forced- and natural-convection directions. We consider flow over a heated cylinder for which there is no universal correlation that accurately predicts Nusselt number as a function of these parameters, especially in opposing-convection flows, where the natural convection is against the forced convection. Here, we revisit this classical problem by employing modern tools from machine learning to develop a general multi-fidelity framework for constructing a stochastic response surface for the Nusselt number. In particular, we combine previously developed experimental correlations (low-fidelity model) with direct numerical simulations (high-fidelity model) using Gaussian process regression and autoregressive stochastic schemes. In this framework the high-fidelity model is sampled only a few times, while the inexpensive empirical correlation is sampled at a very high rate. We obtain the mean Nusselt number directly from the stochastic multi-fidelity response surface, and we also propose an improved correlation. This new correlation seems to be consistent with the physics of this problem as we correct the vectorial addition of forced and natural convection with a pre-factor that weighs differently the forced convection. This, in turn, results in a new definition of the effective Reynolds number, hence accounting for the ‘incomplete similarity’ between mixed convection and forced convection. In addition, due to the probabilistic construction, we can quantify the uncertainty associated with the predictions. This information-fusion framework is useful for elucidating the physics of the flow, especially in cases where anomalous transport or interesting dynamics may be revealed by contrasting the variable fidelity across the models. 
While in this paper we focus on the thermal mixed convection, the multi-fidelity framework provides a new paradigm that could be used in many different contexts in fluid mechanics including heat and mass transport, but also in combining various levels of fidelity of models of turbulent flows.
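A small piece of the autoregressive multi-fidelity scheme, estimating the scale factor linking the cheap correlation to the expensive data before modeling the remaining discrepancy, can be sketched as below. The "correlation" and "high-fidelity" values are synthetic placeholders, and the least-squares fit is a simplification of the Gaussian-process treatment used in the paper.

```python
import numpy as np

# Estimate the autoregressive scale factor rho in y_hi ~ rho * y_lo + delta(x)
# by least squares at the few locations where both fidelities are available.
x = np.linspace(0.0, 1.0, 6)                 # scarce high-fidelity locations
y_lo = 2.0 + 3.0 * x                         # cheap empirical correlation
y_hi = 1.3 * y_lo + 0.2 * np.sin(3.0 * x)    # synthetic high-fidelity data

A = np.column_stack([y_lo, np.ones_like(y_lo)])
(rho, c), *_ = np.linalg.lstsq(A, y_hi, rcond=None)
residual = y_hi - (rho * y_lo + c)           # left for the discrepancy model
```

The fitted rho captures the bulk of the relationship between the fidelities, leaving only a small structured residual for the discrepancy term, which is what makes sampling the expensive model only a few times viable.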

