Control Variate Multifidelity Estimators for the Variance and Sensitivity Analysis of Mesostructure–Structure Systems

Author(s):  
Hongyi Xu ◽  
Zhao Liu

Variance and sensitivity analysis are challenging tasks when the evaluation of system performance incurs a high computational cost. To resolve this issue, this paper investigates several multifidelity statistical estimators for the responses of complex systems, especially mesostructure–structure systems produced by additive manufacturing. First, this paper reviews an established control variate multifidelity estimator, which leverages the output of an inexpensive, low-fidelity model and the correlation between the high-fidelity and low-fidelity models to predict the statistics of the system responses. Second, we investigate several variants of the original estimator and propose a new formulation of the control variate estimator. All these estimators and the associated sensitivity analysis approaches are compared on two engineering examples of mesostructure–structure system analysis. A multifidelity metamodel-based sensitivity analysis approach is also included in the comparative study. The proposed estimator demonstrates its strength in predicting variance when only a limited number of expensive high-fidelity data points are available. Finally, the pros and cons of each estimator are discussed, and recommendations are made on the selection of multifidelity estimators for variance and sensitivity analysis.
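The basic control variate construction the paper builds on can be sketched in a few lines. This is a generic illustration of the standard mean estimator, not the paper's proposed formulation; the function name and the regression coefficient α = Cov/Var are conventional choices assumed here.

```python
import numpy as np

def control_variate_mean(f_hi, f_lo_paired, f_lo_all):
    """Multifidelity control variate estimate of the mean of f_hi.

    f_hi        : high-fidelity outputs at n paired samples (n is small)
    f_lo_paired : low-fidelity outputs at the same n samples
    f_lo_all    : low-fidelity outputs at a much larger sample set
    """
    # Regression coefficient alpha = Cov(f_hi, f_lo) / Var(f_lo),
    # estimated from the paired samples.
    cov = np.cov(f_hi, f_lo_paired)
    alpha = cov[0, 1] / cov[1, 1]
    # Correct the small-sample high-fidelity mean using the discrepancy
    # between the cheap model's large- and small-sample means.
    return f_hi.mean() + alpha * (f_lo_all.mean() - f_lo_paired.mean())
```

The stronger the correlation between the two models, the more of the cheap model's sampling accuracy the estimator inherits; the same mechanism extends to variance estimation, which is where the paper's contribution lies.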

2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy requires a large computational cost, often days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories: (a) data fits, which construct an explicit mapping (e.g., using polynomials or Gaussian processes) from the system's parameters to the system response of interest; (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics); and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply the projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, called rank-2 Galerkin, which converts the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that it is 970 times more efficient than the full-order model while maintaining excellent accuracy in both the mean and the statistics of the field.
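The rank-2 idea can be caricatured as follows: instead of advancing one reduced state vector per parameter sample (each step a memory-bound matrix-vector product), the reduced states for all samples are stacked as columns and advanced together, so each step becomes a single compute-bound matrix-matrix product. The sketch below uses a forward-Euler step; the function name, shapes, and time integrator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rank2_galerkin_step(Ar, X, F, dt):
    """One explicit (forward-Euler) step advancing s reduced states at once.

    Ar : (k, k) reduced LTI operator, e.g. Phi^T A Phi
    X  : (k, s) reduced states, one column per parameter sample
    F  : (k, s) reduced forcing, one column per parameter sample
    """
    # A single GEMM per step replaces s separate matrix-vector products,
    # trading memory-bandwidth-bound kernels for compute-bound ones.
    return X + dt * (Ar @ X + F)
```

Because the LTI reduced operator is shared across samples, batching costs no extra memory traffic on Ar, which is the source of the throughput gain on many-core nodes.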


Author(s):  
Li Wang ◽  
Boris Diskin ◽  
Leonard V. Lopes ◽  
Eric J. Nielsen ◽  
Elizabeth Lee-Rausch ◽  
...  

A high-fidelity multidisciplinary analysis and gradient-based optimization tool for rotorcraft aero-acoustics is presented. Tightly coupled discipline models include physics-based computational fluid dynamics, rotorcraft comprehensive analysis, and noise prediction and propagation. A discretely consistent adjoint methodology accounts for sensitivities of unsteady flows and unstructured, dynamically deforming, overset grids. The sensitivities of structural responses to blade aerodynamic loads are computed using a complex-variable approach. Sensitivities of acoustic metrics are computed by chain-rule differentiation. Interfaces are developed for interactions between the discipline models for rotorcraft aeroacoustic analysis and the integrated sensitivity analysis. The multidisciplinary sensitivity analysis is verified through a complex-variable approach. To verify functionality of the multidisciplinary analysis and optimization tool, an optimization problem for a 40% Mach-scaled HART-II rotor-and-fuselage configuration is crafted with the objective of reducing thickness noise subject to aerodynamic and geometric constraints. The optimized configuration achieves a noticeable noise reduction, satisfies all required constraints, and produces thinner blades as expected. Computational cost of the optimization cycle is assessed in a high-performance computing environment and found to be acceptable for design of rotorcraft in general level-flight conditions.
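The complex-variable approach used above for structural sensitivities and for verification is the complex-step derivative: because it involves no subtraction of nearly equal quantities, the step size can be taken extremely small without cancellation error. A minimal sketch, not the authors' code:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """df/dx via the complex-step method.

    Unlike finite differences, there is no subtractive cancellation,
    so h can be tiny and the result is accurate to machine precision
    for analytic f.
    """
    return np.imag(f(x + 1j * h)) / h
```

This property makes complex-step results a trustworthy reference against which discretely consistent adjoint sensitivities can be verified digit-for-digit.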


Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose: Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique allows managing a high number of design variables. However, this can lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing.

Design/methodology/approach: In the multi-fidelity framework, the “high-fidelity” model couples the electromagnetic, thermal and metallurgical fields and predicts the phase transformations during both the heating and cooling stages. The “low-fidelity” model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space during optimization. Co-Kriging then merges information from the different fidelity models to predict good design candidates. Field evaluations of both models occur in parallel.

Findings: In the design of an induction heating system, the synergy between the “high-fidelity” and “low-fidelity” models, together with the use of surrogates and parallel computing, can reduce the overall computational cost by up to one order of magnitude.

Practical implications: On one hand, multi-physical modeling of induction hardening implies a better understanding of the process, resulting in further potential process improvements. On the other hand, the optimization technique can be applied to many other computationally intensive real-life problems.

Originality/value: This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
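A heavily simplified, numpy-only caricature of the co-Kriging idea: a cheap surrogate is fit to densely sampled low-fidelity data, and a scaling factor plus a sparse correction surrogate absorb the high-fidelity discrepancy. Real co-Kriging jointly models both fidelities with cross-covariances; the Gaussian RBF interpolant standing in for Kriging, the `eps` shape parameter, and the function names are all illustrative assumptions.

```python
import numpy as np

def rbf_fit(x, y, eps):
    """Gaussian RBF interpolant through (x, y); a stand-in for Kriging."""
    K = np.exp(-eps * (x[:, None] - x[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(x)), y)  # small ridge for stability
    return lambda xq: np.exp(-eps * (xq[:, None] - x[None, :]) ** 2) @ w

def two_level_surrogate(x_lo, y_lo, x_hi, y_hi, eps=25.0):
    """Additive-correction caricature of a two-fidelity (co-Kriging-style) model."""
    s_lo = rbf_fit(x_lo, y_lo, eps)          # cheap model, densely sampled
    y_lo_at_hi = s_lo(x_hi)
    # Scaling factor rho fitted by least squares, as in autoregressive schemes
    rho = np.linalg.lstsq(y_lo_at_hi[:, None], y_hi, rcond=None)[0][0]
    # Sparse correction surrogate absorbs what the scaled cheap model misses
    s_delta = rbf_fit(x_hi, y_hi - rho * y_lo_at_hi, eps)
    return lambda xq: rho * s_lo(xq) + s_delta(xq)
```

The design-relevant point is the budget split: many evaluations of the heating-only model, few of the coupled electromagnetic-thermal-metallurgical model, with the surrogate bridging the two.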


Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This can make high-fidelity models undesirable due to the increased computational complexity and simulation time. As a solution, lower-fidelity or simplified models are used for initial designs before controllers are tested on higher-fidelity models. If unmodeled dynamics cause the controller to fail when applied to a higher-fidelity model, an iterative design-and-validation approach may be required. In this paper, a switched-fidelity modeling formulation for closed-loop dynamical systems is proposed to reduce computational effort while maintaining high accuracy in system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high- and low-fidelity models of the evaporator and condenser. This sample case showed the ability of the switched-fidelity framework to closely match the outputs and inputs of the high-fidelity model while decreasing computational cost by 32% relative to the high-fidelity model. For contrast, the low-fidelity model decreases computational cost by 48% relative to the high-fidelity model.
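The switching idea can be illustrated schematically: integrate with the cheap model by default and invoke the expensive model only while a user-defined criterion flags that accuracy matters. The criterion and function names below are hypothetical, and this sketch switches whole right-hand sides, whereas the paper switches fidelity per component (evaporator, condenser).

```python
def switched_fidelity_sim(x0, steps, dt, f_lo, f_hi, need_hi):
    """Integrate with the cheap right-hand side by default, switching to
    the expensive one only while need_hi(x) flags that accuracy matters."""
    x, hi_steps = x0, 0
    for _ in range(steps):
        if need_hi(x):
            x = x + dt * f_hi(x)   # expensive, accurate dynamics
            hi_steps += 1
        else:
            x = x + dt * f_lo(x)   # cheap, approximate dynamics
    return x, hi_steps
```

The count of high-fidelity steps taken is the quantity that determines where the overall cost lands between the 32% and 48% savings reported above.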


Author(s):  
Antonio C. Bertolino ◽  
Giovanni Jacazio ◽  
Stefano Mauro ◽  
Massimo Sorli

Over the past years, a trend toward “more electric” equipment has arisen, including in flight control systems, leading to a tendency to replace electro-hydraulic servo actuators (EHSAs) with electro-mechanical actuators (EMAs), which, however, have too high a jamming probability for a primary flight control system. An innovative jam-tolerant approach is to make the EMA “jam-predictive” by monitoring its health state using effective prognostic algorithms. The need for a high-fidelity model is then paramount. In this study, based on a typical EMA architecture, a detailed analysis of the developed dynamic non-linear ball screw model is presented. Backlash and friction parameters are taken into account, a model of the rolling/sliding behaviour of the balls with rolling friction is included, and contact stiffness and preload are introduced. The results of a sensitivity analysis of the mechanism's efficiency with respect to the above-mentioned characteristic parameters under different operating conditions are discussed. The model and the sensitivity analysis results can be used to better understand the physics within the actuator and the ensuing fault-to-failure mechanisms, which is needed for developing more efficient prognostic algorithms.


Author(s):  
Alireza Doostan ◽  
Gianluca Geraci ◽  
Gianluca Iaccarino

This paper presents a bi-fidelity simulation approach to quantify the effect of uncertainty in the thermal boundary condition on the heat transfer in a ribbed channel. A numerical test case is designed where a random heat flux at the wall of a rectangular channel is applied to mimic the unknown temperature distribution in a realistic application. To predict the temperature distribution and the associated uncertainty over the channel wall, the fluid flow is simulated using 2D periodic steady Reynolds-Averaged Navier-Stokes (RANS) equations. The goal of this study is then to illustrate that the cost of propagating the heat flux uncertainty may be significantly reduced when two RANS models with different levels of fidelity, one low (cheap to simulate) and one high (expensive to evaluate), are used. The low-fidelity model is employed to learn a reduced basis and an interpolation rule that can be used, along with a small number of high-fidelity model evaluations, to approximate the high-fidelity solution at arbitrary samples of heat flux. Here, the low- and high-fidelity models are, respectively, the one-equation Spalart-Allmaras and the two-equation shear stress transport k–ω models. To further reduce the computational cost, the Spalart-Allmaras model is simulated on a coarser spatial grid and the non-linear solver is terminated prior to the solution convergence. It is illustrated that the proposed bi-fidelity strategy accurately approximates the target high-fidelity solution at randomly selected samples of the uncertain heat flux.
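A sketch of a typical bi-fidelity reduced-basis workflow of the kind described: pivoted QR on the low-fidelity snapshot matrix selects a few informative heat-flux samples, the high-fidelity model is run only at those, and the interpolation rule learned from the cheap model is reused to combine the expensive solutions. The function names and the QR-based selection are patterned on common practice, not taken from the paper.

```python
import numpy as np
from scipy.linalg import qr

def bifidelity_approx(U_lo, u_lo_new, hi_solver, r):
    """Bi-fidelity approximation of a high-fidelity field at a new sample.

    U_lo      : (n_grid, n_samples) low-fidelity snapshot matrix
    u_lo_new  : low-fidelity solution at the new parameter sample
    hi_solver : j -> high-fidelity solution at sample j (called only r times)
    r         : number of high-fidelity runs allowed
    """
    # Pivoted QR on the cheap snapshots picks r informative samples
    _, _, piv = qr(U_lo, pivoting=True, mode='economic')
    sel = piv[:r]
    U_hi_sel = np.column_stack([hi_solver(j) for j in sel])
    # Interpolation coefficients are learned entirely from the cheap model...
    c, *_ = np.linalg.lstsq(U_lo[:, sel], u_lo_new, rcond=None)
    # ...and reused to combine the few expensive solutions
    return U_hi_sel @ c
```

The strategy pays off exactly when the two fidelity levels share low-dimensional structure, which is why the under-resolved, under-converged Spalart-Allmaras runs remain useful here.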


2011 ◽  
Vol 201-203 ◽  
pp. 1209-1212 ◽  
Author(s):  
Liang Yu Zhao ◽  
Xia Qing Zhang

A practical flapping wing micro aerial vehicle should be able to withstand stochastic deviations of the flight velocity. The responses of the time-averaged thrust coefficient and the propulsive efficiency to a stochastic, Gaussian-distributed flight velocity deviation were numerically investigated using a classic Monte Carlo method. The response surface method was employed as a surrogate for the high-fidelity model to save computational cost. It is observed that both the time-averaged thrust coefficient and the propulsive efficiency follow a Gauss-like, but not exactly Gaussian, distribution. The effect of the velocity deviation on the time-averaged thrust coefficient is larger than that on the propulsive efficiency.
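The workflow above — surrogate the expensive solver with a response surface, then run brute-force Monte Carlo on the surrogate — can be sketched as follows, assuming a one-dimensional quadratic fit; the paper's actual response surface, sample sizes, and function names are not specified here.

```python
import numpy as np

def mc_through_surrogate(design_v, design_y, v_mean, v_std, n=100000, seed=0):
    """Monte Carlo statistics of a response under a Gaussian velocity,
    propagated through a quadratic response surface fitted to a handful
    of expensive evaluations (design_v, design_y)."""
    coeffs = np.polyfit(design_v, design_y, deg=2)      # response surface
    v = np.random.default_rng(seed).normal(v_mean, v_std, n)
    y = np.polyval(coeffs, v)                           # cheap to sample
    return y.mean(), y.std()
```

A nonlinear surface is what skews the output away from Gaussian: even with Gaussian input, a quadratic response produces the "Gauss-like but not exactly Gaussian" output distributions the study reports.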


2019 ◽  
Vol 64 (3) ◽  
pp. 1-11 ◽  
Author(s):  
Li Wang ◽  
Boris Diskin ◽  
Robert T. Biedron ◽  
Eric J. Nielsen ◽  
Valentin Sonneville ◽  
...  

A multidisciplinary design optimization procedure has been developed and applied to rotorcraft simulations involving tightly coupled, high-fidelity computational fluid dynamics and comprehensive analysis. A discretely consistent, adjoint-based sensitivity analysis available in the fluid dynamics solver provides sensitivities arising from unsteady turbulent flows on unstructured, dynamic, overset meshes, whereas a complex-variable approach is used to compute structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Accuracy of the coupled system for high-fidelity rotorcraft analysis is verified; simulation results exhibit good agreement with established solutions. A constrained gradient-based design optimization for a HART-II rotorcraft configuration is demonstrated. The computational cost for individual components of the multidisciplinary sensitivity analysis is assessed and improved.


2020 ◽  
Author(s):  
Ali Raza ◽  
Arni Sturluson ◽  
Cory Simon ◽  
Xiaoli Fern

Virtual screenings can accelerate and reduce the cost of discovering metal-organic frameworks (MOFs) for their applications in gas storage, separation, and sensing. In molecular simulations of gas adsorption/diffusion in MOFs, the adsorbate-MOF electrostatic interaction is typically modeled by placing partial point charges on the atoms of the MOF. For the virtual screening of large libraries of MOFs, it is critical to develop computationally inexpensive methods to assign atomic partial charges to MOFs that accurately reproduce the electrostatic potential in their pores. Herein, we design and train a message passing neural network (MPNN) to predict the atomic partial charges on MOFs under a charge neutral constraint. A set of ca. 2,250 MOFs labeled with high-fidelity partial charges, derived from periodic electronic structure calculations, serves as training examples. In an end-to-end manner, from charge-labeled crystal graphs representing MOFs, our MPNN machine-learns features of the local bonding environments of the atoms and learns to predict partial atomic charges from these features. Our trained MPNN assigns high-fidelity partial point charges to MOFs with orders of magnitude lower computational cost than electronic structure calculations. To enhance the accuracy of virtual screenings of large libraries of MOFs for their adsorption-based applications, we make our trained MPNN model and MPNN-charge-assigned computation-ready, experimental MOF structures publicly available.
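One simple way to satisfy a charge-neutrality constraint at prediction time is to redistribute the residual net charge across the atoms. Whether the authors' MPNN enforces neutrality this way or inside the network is not stated in the abstract, so the scheme below is an illustrative assumption.

```python
import numpy as np

def enforce_charge_neutrality(raw_charges, weights=None):
    """Shift raw per-atom charge predictions so they sum to zero.

    With weights=None the residual net charge is spread uniformly;
    nonuniform weights let atoms absorb different shares of the
    correction (e.g., proportional to a predicted uncertainty).
    """
    q = np.asarray(raw_charges, dtype=float)
    w = np.ones_like(q) if weights is None else np.asarray(weights, dtype=float)
    return q - (q.sum() / w.sum()) * w
```

Whatever the mechanism, the constraint matters downstream: a framework with nonzero net charge produces unphysical electrostatic potentials in periodic adsorption simulations.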


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, it is implemented in two case studies: a three-finger soft gripper actuated using a PVDF-based terpolymer, and a 3D multifield example actuated using both the terpolymer and a magneto-active elastomer (MAE). Key steps are elaborated in detail, including the variable filter, the metrics used to select the best design, the determination of design domains, and the material conversion methods from low- to high-fidelity models. Analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work. Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, compared with more than 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.

