Selection of Existing Sail Designs for Multi-Fidelity Surrogate Models

2022, Vol 7 (01), pp. 31-51
Author(s): Tanya Peart, Nicolas Aubin, Stefano Nava, John Cater, Stuart Norris

Velocity Prediction Programs (VPPs) are commonly used to help predict and compare the performance of different sail designs. A VPP requires an aerodynamic input force matrix, which can be computationally expensive to calculate, limiting its application in industrial sail design projects. The use of multi-fidelity kriging surrogate models has previously been presented by the authors to reduce this cost, with high-fidelity data for a new sail being modelled and the low-fidelity data provided by data from existing, but different, sail designs. The difference in fidelity is not due to the simulation method used to obtain the data, but rather to how similar the sail's geometry is to the new sail design. An important consideration in the construction of these models is the choice of low-fidelity data points, which provide information about the trend of the model curve between the high-fidelity data points. A method is therefore required to select the best existing sail design to use as the low-fidelity data when constructing a multi-fidelity model. The suitability of an existing sail design as a low-fidelity model can be evaluated based on the similarity of its geometric parameters with those of the new sail. It is shown here that for upwind jib sails, the similarity of the broadseam between the two sails best indicates the ability of a design to be used as low-fidelity data for a lift coefficient surrogate model. The lift coefficient surrogate error predicted by the regression is shown to be within approximately 1% of the actual surrogate error for most points. Larger discrepancies are observed for the drag coefficient surrogate error regression.
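The multi-fidelity construction described above can be illustrated with a simple additive-correction sketch (low-fidelity trend from an existing sail plus a kriged discrepancy at the new sail's expensive data points). This is an illustration of the general idea rather than the authors' exact co-kriging formulation, and the apparent-wind-angle samples and lift coefficients below are invented for the example.

```python
# Minimal two-level kriging sketch: existing-sail data as the low-fidelity trend,
# a few expensive points for the new sail as the high-fidelity correction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Low-fidelity data: lift coefficients of an *existing* sail design,
# sampled densely over apparent wind angle (illustrative values).
awa_lo = np.linspace(20.0, 40.0, 21).reshape(-1, 1)
cl_lo = 1.2 * np.sin(np.radians(4.0 * awa_lo)).ravel()

# High-fidelity data: a few expensive simulations of the *new* sail design.
awa_hi = np.array([[22.0], [28.0], [34.0], [40.0]])
cl_hi = 1.15 * np.sin(np.radians(4.0 * awa_hi)).ravel() + 0.05

kernel = ConstantKernel(1.0) * RBF(length_scale=5.0)

# Step 1: kriging model of the existing sail (the low-fidelity trend).
gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(awa_lo, cl_lo)

# Step 2: kriging model of the discrepancy at the high-fidelity points.
delta = cl_hi - gp_lo.predict(awa_hi)
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(awa_hi, delta)

# Multi-fidelity prediction for the new sail: low-fidelity trend + correction.
awa_test = np.linspace(20.0, 40.0, 200).reshape(-1, 1)
cl_pred = gp_lo.predict(awa_test) + gp_delta.predict(awa_test)
```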

2018, Vol 27 (2), pp. 118-124
Author(s): Andrei Odobescu, Isak Goodwin, Djamal Berbiche, Joseph BouMerhi, Patrick G. Harris, et al.

Background: The Thiel embalming method has recently been used in a number of medical simulation fields. The authors investigate the use of Thiel vessels as a high-fidelity model for microvascular simulation and propose a new checklist-based evaluation instrument for microsurgical training. Methods: Thirteen residents and 2 attending microsurgeons performed video-recorded microvascular anastomoses on Thiel-embalmed arteries, which were evaluated with a new evaluation instrument (Microvascular Evaluation Scale) by 4 fellowship-trained microsurgeons. Internal validity was assessed using the Cronbach coefficient, and external validity was verified using regression models. Results: The reliability assessment revealed an excellent intra-class correlation of 0.89. When comparing scores obtained by participants at different levels of training, attending surgeons and senior residents (Post Graduate Year [PGY] 4-5) scored significantly better than junior residents (PGY 1-3); the difference between senior residents and attending surgeons was not significant. When considering microsurgical experience, the differences were significant between the advanced group and the minimal and moderate experience groups, while the differences between the minimal and moderate experience groups were not. Based on the data obtained, a score of 8 would translate into a level of microsurgical competence appropriate for clinical microsurgery. Conclusions: Thiel cadaveric vessels are a high-fidelity model for microsurgical simulation. Excellent internal and external validity measures were obtained using the Microvascular Evaluation Scale (MVES).


2019, Vol 9 (3), pp. 20180083
Author(s): Seungjoon Lee, Felix Dietrich, George E. Karniadakis, Ioannis G. Kevrekidis

In statistical modelling with Gaussian process regression, it has been shown that combining (few) high-fidelity data with (many) low-fidelity data can enhance prediction accuracy, compared to prediction based on the few high-fidelity data only. Such information fusion techniques for multi-fidelity data commonly approach the high-fidelity model f_h(t) as a function of two variables (t, s), and then use the low-fidelity model f_l(t) as the s data. More generally, the high-fidelity model can be written as a function of several variables (t, s_1, s_2, ...); the low-fidelity model f_l and, say, some of its derivatives can then be substituted for these variables. In this paper, we will explore mathematical algorithms for multi-fidelity information fusion that use such an approach towards improving the representation of the high-fidelity function with only a few training data points. Given that f_h may not be a simple function of f_l (and sometimes not even a function of it), we demonstrate that using additional functions of t, such as derivatives or shifts of f_l, can drastically improve the approximation of f_h through Gaussian processes. We also point out a connection with 'embedology' techniques from topology and dynamical systems. Our illustrative examples range from instructive caricatures to computational biology models, such as Hodgkin–Huxley neural oscillations.
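A minimal sketch of the input-augmentation idea, assuming a single auxiliary variable s = f_l(t): the Gaussian process for f_h is trained on the pair (t, f_l(t)) rather than on t alone. The toy low- and high-fidelity functions below are illustrative, not taken from the paper.

```python
# Gaussian process for f_h trained on the augmented input (t, f_l(t)).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_l = lambda t: np.sin(8.0 * np.pi * t)            # cheap low-fidelity model (toy)
f_h = lambda t: (t - np.sqrt(2.0)) * f_l(t) ** 2    # expensive high-fidelity model (toy)

# Only a few high-fidelity samples are available.
t_train = np.linspace(0.0, 1.0, 15)
X_train = np.column_stack([t_train, f_l(t_train)])  # augmented input (t, f_l(t))
y_train = f_h(t_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.2, 0.5]),
                              normalize_y=True).fit(X_train, y_train)

# At prediction time f_l is cheap to evaluate everywhere, so the augmented
# input can be formed at any query point t.
t_test = np.linspace(0.0, 1.0, 500)
X_test = np.column_stack([t_test, f_l(t_test)])
y_pred, y_std = gp.predict(X_test, return_std=True)
```

Derivatives or shifts of f_l would simply be appended as further columns of the training and test input matrices.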


2018, Vol 2018, pp. 1-14
Author(s): Hua Su, Chun-lin Gong, Liang-xian Gu

Highly time-consuming computation has become a defining characteristic of the modern multidisciplinary design optimization (MDO) solution procedure. To reduce the computational cost and improve on the traditional MDO solution process, this article introduces a universal MDO framework supported by adaptive discipline surrogate models that are asymptotically corrected through discriminative sampling. The MDO solution procedure is decomposed into three levels: the framework level, which controls the overall procedure and carries out convergence estimation; the architecture level, which executes the chosen MDO solution method against the discipline surrogate models; and the discipline level, which analyzes the discipline models to build adaptive surrogates using a stochastic asymptotical sampling method. The solution proceeds iteratively, alternating between correcting the discipline surrogate models, solving the MDO problem, and analyzing the disciplines; these steps correspond to iteration control at the framework level, MDO decomposition at the architecture level, and surrogate model updates at the discipline level. The framework executes the three parts separately in a hierarchical, modular fashion. Because the discipline models and the disciplinary design-point sampling are mutually independent, parallel computing can be used to increase efficiency. Several MDO benchmarks are tested in this framework. Results show that the number of discipline evaluations required is half or less of that of the original MDO solution method, making the framework well suited to complex, high-fidelity MDO problems.
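The three-level iteration described above can be sketched structurally as follows. The four callables passed in (sample_discipline, fit_surrogate, solve_mdo, converged) are hypothetical placeholders standing in for the paper's components, not an actual API.

```python
# Structural sketch of the framework / architecture / discipline iteration.
def run_mdo_framework(disciplines, x0, sample_discipline, fit_surrogate,
                      solve_mdo, converged, max_iter=50):
    """Iterate: correct discipline surrogates, solve MDO on them, check convergence."""
    x, history = x0, []
    surrogates = {}
    for _ in range(max_iter):
        # Discipline level: analyse each discipline near the current design and
        # update its adaptive surrogate (the paper uses discriminative,
        # stochastic asymptotical sampling; any sampler can be plugged in here).
        for name, model in disciplines.items():
            surrogates[name] = fit_surrogate(sample_discipline(model, x))
        # Architecture level: run the chosen MDO solution method (e.g. MDF, IDF)
        # against the cheap surrogates instead of the true discipline analyses.
        x = solve_mdo(surrogates, x)
        history.append(x)
        # Framework level: iteration control and convergence estimation.
        if converged(history):
            break
    return x
```

Because each discipline's sampling and analysis is independent, the inner loop over disciplines is the natural place to introduce parallel evaluation.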


Author(s): Daeyeon Lee, Nhu Van Nguyen, Maxim Tyan, Hyung-Geun Chun, Sangho Kim, et al.

Using global exploration and kriging-based multi-fidelity analysis methods, this study developed a multi-fidelity aerodynamic database for use in the performance analysis of flight vehicles and in flight simulations. Athena Vortex Lattice, a program based on the vortex lattice method, was used as the low-fidelity analysis tool in the multi-fidelity analysis method. The in-house high-fidelity AADL-3D code was based on the Navier–Stokes equations and was validated by comparing its analysis results with the data for the ONERA M6 wing and NACA TN 3649. The design of experiments method and the kriging method were applied to integrate the low- and high-fidelity analysis results. General data trends were established from the low-fidelity analysis results; the high-fidelity analysis results and the kriging method were then used to generate a surrogate model through which the low-fidelity analysis results were interpolated. To reduce repeated calculations, three design points were added simultaneously in each calculation, and convergence of these three points towards the same location was avoided by considering only peak points as additional design points. The reliability of the final surrogate model was assessed by applying the leave-one-out cross-validation method and obtaining the cross-validation root mean square error. Using the multi-fidelity model developed in this study, a multi-fidelity aerodynamic database was constructed for use in three-degree-of-freedom flight simulations of flight vehicles.
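The leave-one-out cross-validation check mentioned above can be sketched as follows for a generic kriging (Gaussian process) surrogate; the small data set of Mach number and angle of attack versus lift coefficient is purely illustrative.

```python
# Leave-one-out cross-validation RMSE for a kriging (Gaussian process) surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def loo_cv_rmse(X, y):
    """Refit the surrogate with each point held out and report the prediction RMSE."""
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        gp = GaussianProcessRegressor(
            kernel=ConstantKernel() * RBF(), normalize_y=True
        ).fit(X[mask], y[mask])
        errors.append(gp.predict(X[i:i + 1])[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errors))))

# Example use with illustrative data: (Mach number, angle of attack) -> CL.
X = np.array([[0.3, 0.0], [0.3, 4.0], [0.5, 2.0], [0.7, 0.0], [0.7, 4.0], [0.84, 3.06]])
y = np.array([0.10, 0.42, 0.27, 0.12, 0.45, 0.30])
print(loo_cv_rmse(X, y))
```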


Author(s): Yong Hoon Lee, R. E. Corman, Randy H. Ewoldt, James T. Allison

A novel multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) framework is proposed to use a minimal number of training samples efficiently for sequential model updates. All sample points are enforced to be feasible and to provide coverage of sparsely explored design regions through a new optimization subproblem, and the MO-ASMO method evaluates high-fidelity functions only at feasible sample points. During an exploitation sampling phase, samples are selected to enhance solution accuracy rather than global exploration. Sampling tasks are especially challenging for multiobjective optimization: for an n-dimensional design space, a strategy is required for generating model update sample points near an (n − 1)-dimensional hypersurface corresponding to the Pareto set in the design space. This is addressed here using a force-directed layout algorithm, adapted from graph visualization strategies, to distribute feasible sample points evenly near the estimated Pareto set. Model validation samples are chosen uniformly on the Pareto set hypersurface, and surrogate model estimates at these points are compared to high-fidelity model responses. All high-fidelity model evaluations are stored for later use to train an updated surrogate model. The MO-ASMO algorithm, along with the set of new sampling strategies, is tested using two mathematical problems and one realistic engineering problem. The second mathematical test problem is specifically designed to test the limits of the algorithm in coping with very narrow, non-convex feasible domains; it involves oscillatory objective functions, giving rise to a discontinuous set of Pareto-optimal solutions. The third test problem demonstrates that the MO-ASMO algorithm can handle a practical engineering problem with more than 10 design variables and black-box simulations. The efficiency of the MO-ASMO algorithm is demonstrated by comparing the results of the two mathematical problems with those of the NSGA-II algorithm in terms of the number of high-fidelity function evaluations; the algorithm is shown to reduce total function evaluations by several orders of magnitude when converging to the same Pareto sets.
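The force-directed spreading of sample points near the estimated Pareto set can be sketched as follows, assuming a hypothetical projection routine that pushes points back towards the estimated Pareto set after each repulsion step; this illustrates the general idea rather than the paper's exact algorithm.

```python
# Force-directed spreading: points repel each other (inverse-square forces) and are
# re-projected onto the estimated Pareto surface after every step.
import numpy as np

def spread_points(points, project_to_pareto_estimate, n_steps=200, step=0.01):
    x = np.asarray(points, dtype=float)
    for _ in range(n_steps):
        diff = x[:, None, :] - x[None, :, :]                # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                      # ignore self-interaction
        force = (diff / dist[..., None] ** 3).sum(axis=1)   # inverse-square repulsion
        x = x + step * force                                # spread the points apart
        x = project_to_pareto_estimate(x)                   # stay near the Pareto estimate
    return x
```

A trivial stand-in such as project_to_pareto_estimate = lambda x: np.clip(x, 0.0, 1.0) keeps the points inside box bounds; the paper's framework would instead project onto its surrogate-estimated Pareto hypersurface.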


Author(s): David J. J. Toal

Traditional multi-fidelity surrogate models require that the output of the low-fidelity model be reasonably well correlated with that of the high-fidelity model, and they can only predict scalar responses. The following paper explores the potential of a novel multi-fidelity surrogate modelling scheme employing Gappy Proper Orthogonal Decomposition (G-POD), which is demonstrated to accurately predict the response of the entire computational domain, thus improving optimization and uncertainty quantification performance over both traditional single- and multi-fidelity surrogate modelling schemes.
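A minimal sketch of the Gappy POD reconstruction step at the heart of such a scheme: a POD basis is built from snapshot data, the basis coefficients of a new field are recovered by least squares on the few observed entries, and the full field is reconstructed. The snapshot matrix and observation indices below are randomly generated placeholders.

```python
# Gappy POD: reconstruct a full field from a POD basis and a few observed entries.
import numpy as np

def gappy_pod_reconstruct(snapshots, observed_idx, observed_vals, rank=5):
    # POD basis from the snapshot matrix (columns = snapshots).
    mean = snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    basis = U[:, :rank]
    # Least-squares fit of the basis coefficients using only the observed rows.
    coeffs, *_ = np.linalg.lstsq(basis[observed_idx],
                                 observed_vals - mean[observed_idx, 0],
                                 rcond=None)
    # Reconstruct the complete field over the whole domain.
    return mean[:, 0] + basis @ coeffs

# Example: 1000-point field, 40 snapshots, only 25 points observed (all synthetic).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 40))
observed_idx = rng.choice(1000, size=25, replace=False)
truth = snapshots @ rng.standard_normal(40) / 40.0        # a new field to recover
field = gappy_pod_reconstruct(snapshots, observed_idx, truth[observed_idx])
```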


2019, Vol 27 (4), pp. 665-697
Author(s): Lukáš Bajer, Zbyněk Pitra, Jakub Repický, Martin Holeňa

This article deals with Gaussian process surrogate models for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES): several already existing models and two recently proposed by the authors are presented. The work discusses different variants of surrogate model exploitation and focuses on the benefits of employing the Gaussian process uncertainty prediction, especially during the selection of points for the evaluation with a surrogate model. The experimental part of the article thoroughly compares and evaluates the five presented Gaussian process surrogate models and six other state-of-the-art optimizers on the COCO benchmarks. The algorithm presented in most detail, DTS-CMA-ES, which combines cheap surrogate-model predictions with objective function evaluations in every iteration, is shown to approach the function optimum at least as fast as, and often faster than, the state-of-the-art black-box optimizers for budgets of roughly 25–100 function evaluations per dimension, and in spaces of 10 or fewer dimensions even for 25–250 evaluations per dimension.
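The uncertainty-aware selection step in a doubly trained strategy can be sketched as follows, using a simple lower-confidence-bound score as a stand-in for the paper's selection criterion; the archive arrays and the number of true evaluations per generation are illustrative.

```python
# Per-generation selection: evaluate only the most promising/uncertain offspring
# with the true objective, keep GP predictions for the rest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_generation(f, offspring, archive_X, archive_y, n_true_evals=2, kappa=2.0):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(archive_X, archive_y)
    mu, sigma = gp.predict(offspring, return_std=True)
    # Lower confidence bound: small mean is promising (minimisation), large sigma is uncertain.
    score = mu - kappa * sigma
    chosen = np.argsort(score)[:n_true_evals]               # points sent to the true objective
    fitness = mu.copy()
    for i in chosen:
        fitness[i] = f(offspring[i])                         # expensive evaluation
        archive_X = np.vstack([archive_X, offspring[i]])     # grow the training archive
        archive_y = np.append(archive_y, fitness[i])
    return fitness, archive_X, archive_y
```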


2021
Author(s): Frederick Law, Antoine J. Cerfon, Benjamin Peherstorfer

In the design of stellarators, energetic particle confinement is a critical point of concern that remains challenging to study numerically. Standard Monte Carlo analyses are highly expensive because a large number of particle trajectories need to be integrated over long time scales, and small time steps must be taken to accurately capture the features of the wide variety of trajectories. Even when they are based on guiding center trajectories, as opposed to full-orbit trajectories, these standard Monte Carlo studies are too expensive to be included in most stellarator optimization codes. We present the first multifidelity Monte Carlo scheme for accelerating the estimation of energetic particle confinement in stellarators. Our approach relies on a two-level hierarchy in which a guiding center model serves as the high-fidelity model and a data-driven linear interpolant is leveraged as the low-fidelity surrogate model. We apply multifidelity Monte Carlo to the study of energetic particle confinement in a 4-period quasi-helically symmetric stellarator, assessing various metrics of confinement. Owing to the very high computational efficiency of our surrogate model, as well as its sufficient correlation with the high-fidelity model, we obtain speedups of up to a factor of 10 with multifidelity Monte Carlo compared to standard Monte Carlo.
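A two-level multifidelity Monte Carlo estimator of this kind can be sketched as follows, with a control-variate coefficient alpha = rho * sigma_hi / sigma_lo; the toy high- and low-fidelity models and the scalar output stand in for the guiding center model and the data-driven interpolant.

```python
# Two-level multifidelity Monte Carlo (MFMC) estimator of a mean quantity.
import numpy as np

rng = np.random.default_rng(1)

def f_hi(z):            # expensive high-fidelity model (toy stand-in)
    return np.sin(z) + 0.1 * z ** 2

def f_lo(z):            # cheap low-fidelity surrogate, correlated with f_hi (toy stand-in)
    return np.sin(z)

m_hi, m_lo = 50, 5000                       # few expensive, many cheap samples
z_lo = rng.normal(size=m_lo)                # inputs drawn from the shared distribution
z_hi = z_lo[:m_hi]                          # high-fidelity samples reuse the first m_hi inputs

y_hi = f_hi(z_hi)
y_lo_hi = f_lo(z_hi)                        # low-fidelity model at the high-fidelity inputs
y_lo = f_lo(z_lo)

rho = np.corrcoef(y_hi, y_lo_hi)[0, 1]
alpha = rho * y_hi.std() / y_lo_hi.std()    # control-variate coefficient

# Standard MC term plus the low-fidelity control-variate correction.
mfmc_estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo_hi.mean())
```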


Author(s): Ying Xiong, Wei Chen, Kwok-Leung Tsui

Computational models with variable fidelity have been widely used in engineering design. To alleviate the computational burden, surrogate models are used for optimization without recourse to expensive high-fidelity simulations. In this work, a model fusion technique based on Bayesian Gaussian process modeling is employed to construct cheap surrogate models that integrate information from both low-fidelity and high-fidelity models, while the interpolation uncertainty of the surrogate model due to the lack of sufficient high-fidelity simulations is quantified. In contrast to space-filling approaches, the sequential sampling of the high-fidelity simulation model in our proposed framework is objective-oriented, aiming to improve a design objective. A strategy based on periodically switching criteria is studied and shown to be effective in guiding the sequential sampling of the high-fidelity model towards improving the design objective as well as reducing the interpolation uncertainty. A design confidence (DC) metric is proposed to serve as the stopping criterion and to facilitate design decision making in the presence of interpolation uncertainty. Numerical and engineering examples are provided to demonstrate the benefits of the proposed methodology.
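The periodic switching between an objective-oriented criterion and an uncertainty-reduction criterion can be sketched as follows; the alternation rule and the candidate-set formulation are illustrative simplifications, not the authors' exact criteria.

```python
# Alternate between exploiting the surrogate (objective-oriented) and reducing
# interpolation uncertainty when choosing the next high-fidelity sample.
import numpy as np

def next_sample(gp, candidates, iteration):
    """gp: a fitted GaussianProcessRegressor; candidates: 2-D array of design points."""
    mu, sigma = gp.predict(candidates, return_std=True)
    if iteration % 2 == 0:
        return candidates[np.argmin(mu)]       # objective-oriented: surrogate minimiser
    return candidates[np.argmax(sigma)]        # uncertainty-oriented: maximum-variance point
```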


2020, Vol 7 (2), pp. 34-41
Author(s): Vladimir Nikonov, Anton Zobov

The construction and selection of a suitable bijective function, that is, a substitution, is becoming an important applied task, particularly for building block encryption systems. Many articles have suggested different approaches to determining the quality of a substitution, but most of them are computationally very complex. Solving this problem would significantly expand the range of methods for constructing and analyzing schemes in information protection systems. The purpose of this research is to find easily measurable characteristics of substitutions that allow their quality to be evaluated, as well as measures of the proximity of a particular substitution to a random one, or of its distance from it. To this end, several characteristics are proposed in this work, including a difference characteristic and a polynomial characteristic; their mathematical expectations are derived, as well as the variance of the difference characteristic. This allows a conclusion about the quality of a particular substitution to be drawn by comparing the value of the characteristic computed for that substitution with the calculated mathematical expectation. From a computational point of view, the results of the article are of particular interest because of the simplicity of the algorithm for quantifying the quality of bijective substitutions. By its nature, calculating the difference characteristic amounts to a simple summation of integer terms over a fixed, small range. Such an operation, on both current and prospective hardware, maps naturally onto the logic of a wide range of functional elements, especially when the computations are implemented in the optical domain or on other carriers related to the field of nanotechnology.
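The abstract does not give the definition of the difference characteristic, so the sketch below only illustrates the computational pattern it describes: a simple summation of small integer terms over a substitution, compared against the expectation for a random substitution. The particular term (pi(x) - x) mod n and the example 4-bit substitution are hypothetical choices, not the authors' definition.

```python
# Illustrative "difference-style" characteristic of a substitution, compared with
# an empirical estimate of its expectation over random permutations.
import random

def difference_characteristic(perm):
    n = len(perm)
    # Sum of small integer terms over the whole substitution (hypothetical form).
    return sum((perm[x] - x) % n for x in range(n))

def expected_over_random(n, trials=10_000, seed=0):
    rng = random.Random(seed)
    total = 0
    ident = list(range(n))
    for _ in range(trials):
        p = ident[:]
        rng.shuffle(p)                      # a random substitution on n elements
        total += difference_characteristic(p)
    return total / trials

# Compare one example 4-bit substitution with the random-substitution expectation.
sbox = [12, 5, 6, 11, 9, 0, 10, 13, 3, 14, 15, 8, 4, 7, 1, 2]
print(difference_characteristic(sbox), expected_over_random(16))
```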

