Variable Fidelity Modeling in Closed Loop Dynamical Systems

Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This can make high fidelity models undesirable due to the increased computational complexity and simulation time they entail. As a solution, lower fidelity or simplified models are used for initial designs before controllers are tested on higher fidelity models. If unmodeled dynamics cause the controller to fail when applied to a higher fidelity model, an iterative cycle of redesign and validation may be required. In this paper, a switched-fidelity modeling formulation for closed loop dynamical systems is proposed to reduce computational effort while maintaining elevated accuracy levels of system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high and low fidelity models of the evaporator and condenser. In this sample case, the switched fidelity framework closely matched the outputs and inputs of the high fidelity model while decreasing computational cost by 32% relative to the high fidelity model; for contrast, the low fidelity model decreased computational cost by 48% relative to the high fidelity model.
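
As a rough illustration of the switching idea (not the authors' vapor compression models), the Python sketch below runs a PI loop against a plant whose model is swapped at run time. The plant, the controller gains, and the error-magnitude switching rule are all illustrative assumptions.

```python
import numpy as np

# Minimal switched-fidelity closed-loop sketch. The plant, gains, and the
# error-magnitude switching criterion are illustrative assumptions, not the
# paper's models.
dt, t_end = 0.01, 20.0
x_slow, x_fast = 0.0, 0.0      # slow state shared by both models; fast state is HF-only
integ, setpoint = 0.0, 1.0
kp, ki = 2.0, 0.5
log = []

for k in range(int(t_end / dt)):
    y = x_slow                        # measured output
    e = setpoint - y
    integ += e * dt
    u = kp * e + ki * integ           # PI control input

    high_fidelity = abs(e) > 0.05     # assumed switching rule: HF during transients

    if high_fidelity:
        # high-fidelity model: slow dynamics plus a fast parasitic mode
        x_fast += dt * (-50.0 * x_fast + 50.0 * u)
        x_slow += dt * (-x_slow + x_fast)
    else:
        # low-fidelity model: the fast mode is replaced by its quasi-steady value
        x_slow += dt * (-x_slow + u)

    log.append((k * dt, y, u, high_fidelity))
```

The saving comes from integrating the stiff parasitic mode only while the transient is active; in the paper's setting the high-fidelity branch would be a detailed evaporator/condenser model rather than a single extra state.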

2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy requires a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly-resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories, namely (a) data fits, which construct an explicit mapping (e.g., using polynomials, Gaussian processes) from the system's parameters to the system response of interest, (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics), and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model through a projection of the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply the projection directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation, called rank-2 Galerkin, of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, which converts the nature of the ROM problem from memory bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that the rank-2 Galerkin ROM is 970 times more efficient than the full order model while maintaining excellent accuracy in both the mean and the statistics of the field.
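
The kernel-level difference between the two formulations can be seen in a small numpy sketch. The sizes are illustrative, and it assumes the parameter samples enter only through the forcing term so that the reduced operator is shared across samples; the point is that batching the reduced states turns many memory-bound matrix-vector products into one compute-bound matrix-matrix product per step.

```python
import numpy as np

# Rank-1 vs. rank-2 Galerkin update for an LTI ROM dx/dt = A_r x + f(theta).
# Assumption: the M samples differ only in the forcing, so A_r is shared.
rng = np.random.default_rng(0)
k, M, n_steps, dt = 100, 64, 200, 1e-3
A_r = -np.eye(k) + 0.01 * rng.standard_normal((k, k))  # reduced operator
F = rng.standard_normal((k, M))                        # one forcing column per sample

# rank-1: one memory-bound matvec (BLAS-2) per sample per time step
X1 = np.zeros((k, M))
for j in range(M):
    x = np.zeros(k)
    for _ in range(n_steps):
        x = x + dt * (A_r @ x + F[:, j])
    X1[:, j] = x

# rank-2: stack the M reduced states as columns and advance them together,
# so each step is a single compute-bound matrix-matrix product (BLAS-3)
X2 = np.zeros((k, M))
for _ in range(n_steps):
    X2 = X2 + dt * (A_r @ X2 + F)

assert np.allclose(X1, X2)   # same trajectories, very different arithmetic intensity
```

On many-core hardware the batched form attains much higher arithmetic intensity, which is what moves the kernel from memory-bandwidth bound to compute bound.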


Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose – Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique allows managing a high number of design variables. However, this can lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing. Design/methodology/approach – In the multi-fidelity framework, the “high-fidelity” model couples the electromagnetic, thermal and metallurgical fields. It predicts the phase transformations during both the heating and cooling stages. The “low-fidelity” model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space in optimization. The use of co-Kriging then allows merging information from the different fidelity models and predicting good design candidates. Field evaluations of both models occur in parallel. Findings – In the design of an induction heating system, the synergy between the “high-fidelity” and “low-fidelity” models, together with the use of surrogates and parallel computing, can reduce the overall computational cost by up to one order of magnitude. Practical implications – On one hand, multi-physical modeling of induction hardening implies a better understanding of the process, resulting in further potential process improvements. On the other hand, the optimization technique could be applied to many other computationally intensive real-life problems. Originality/value – This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
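
As a concrete picture of the co-Kriging step, the sketch below fits a simple autoregressive (Kennedy-O'Hagan style) surrogate in Python: a Gaussian process on many cheap low-fidelity samples, a scaling factor, and a second Gaussian process on the high-minus-scaled-low discrepancy. The 1D test functions are placeholders for the heating-only and fully coupled models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Autoregressive co-Kriging sketch: y_high(x) ~ rho * y_low(x) + delta(x).
# The test functions below are placeholders, not induction hardening models.
def y_low(x):   # hypothetical cheap (heating-only) model
    return np.sin(8.0 * x)

def y_high(x):  # hypothetical expensive (fully coupled) model
    return 1.2 * np.sin(8.0 * x) + 0.3 * (x - 0.5)

X_lo = np.linspace(0.0, 1.0, 25)[:, None]   # many cheap samples
X_hi = np.linspace(0.0, 1.0, 6)[:, None]    # few expensive samples

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1)).fit(X_lo, y_low(X_lo[:, 0]))

# scaling factor rho by least squares at the high-fidelity points
lo_at_hi = gp_lo.predict(X_hi)
rho = np.dot(lo_at_hi, y_high(X_hi[:, 0])) / np.dot(lo_at_hi, lo_at_hi)

# discrepancy GP on the residual between fidelities
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(
    X_hi, y_high(X_hi[:, 0]) - rho * lo_at_hi)

def y_cokrig(X):
    """Merged-fidelity prediction at new design points."""
    return rho * gp_lo.predict(X) + gp_delta.predict(X)
```

With 25 cheap and 6 expensive samples, y_cokrig tracks the expensive function far better than a GP trained on the 6 expensive points alone, which is the point of merging fidelities.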


2016 ◽  
Vol 33 (4) ◽  
pp. 1095-1113 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – The purpose of this paper is to investigate strategies for expedited dimension scaling of electromagnetic (EM)-simulated microwave and antenna structures, exploiting the concept of variable-fidelity inverse surrogate modeling. Design/methodology/approach – A fast inverse surrogate modeling technique is described for dimension scaling of microwave and antenna structures. The model is established using reference designs obtained from a cheap underlying low-fidelity model and corrected to allow structure scaling at a high accuracy level. Numerical and experimental case studies are provided, demonstrating the feasibility of the proposed approach. Findings – It is possible, by an appropriate combination of surrogate modeling techniques, to establish an inverse model for explicit determination of the geometry dimensions of the structure at hand so as to re-design it for various operating frequencies. The scaling process can be concluded at a low computational cost corresponding to just a few evaluations of the high-fidelity computational model of the structure. Research limitations/implications – The present study is a step toward the development of procedures for rapid dimension scaling of microwave and antenna structures at high-fidelity EM-simulation accuracy. Originality/value – The proposed modeling framework proved useful for fast geometry scaling of microwave and antenna structures, which is very laborious when using conventional methods. To the authors' knowledge, this is one of the first attempts at surrogate-assisted dimension scaling of microwave components at the EM-simulation level.
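
The shape of such an inverse model can be sketched in a few lines. Assume, purely for illustration, a set of low-fidelity reference designs (geometry dimensions optimized for several operating frequencies) and a couple of high-fidelity re-optimized designs; the inverse surrogate maps a target frequency directly to dimensions, and the sparse high-fidelity data supply an additive correction. All numbers below are placeholders, and the correction scheme is one plausible choice rather than the authors' exact formulation.

```python
import numpy as np

# Inverse surrogate for dimension scaling: dimensions = g(target frequency).
# Reference designs and high-fidelity corrections are made-up numbers.
f_ref = np.array([2.0, 2.5, 3.0, 3.5, 4.0])            # target frequencies (GHz)
dims_ref = np.array([[24.0, 20.5, 17.8, 15.6, 14.0],   # dimension 1 (mm) per design
                     [ 9.2,  7.9,  6.9,  6.2,  5.6]])  # dimension 2 (mm) per design

# low-fidelity inverse model: each dimension as a quadratic in frequency
inv_model = [np.polyfit(f_ref, d, deg=2) for d in dims_ref]

def dims_low(f):
    return np.array([np.polyval(c, f) for c in inv_model])

# correction from a few high-fidelity re-optimized designs (placeholders):
# the low-to-high offset is itself modeled, here linearly in frequency
f_hf = np.array([2.5, 3.5])
dims_hf = np.array([[20.9, 15.9],
                    [ 8.1,  6.35]])
lo_at_hf = dims_low(f_hf)
shift = [np.polyfit(f_hf, dims_hf[i] - lo_at_hf[i], deg=1)
         for i in range(len(inv_model))]

def dims_scaled(f):
    """Dimensions for a new operating frequency, corrected with HF data."""
    return dims_low(f) + np.array([np.polyval(c, f) for c in shift])

print(dims_scaled(3.0))   # scaled design at 3 GHz
```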


Author(s):  
Gilberto Mejía Rodríguez ◽  
John E. Renaud ◽  
Vikas Tomar

Research applications involving design tool development for multiphase material design are at an early stage of development. The computational requirements of advanced numerical tools for simulating material behavior, such as the finite element method (FEM) and the molecular dynamics (MD) method, can prohibit direct integration of these tools in a design optimization procedure where multiple iterations are required. The complexity of multiphase material behavior at multiple scales restricts the development of a comprehensive meta-model that can be used to replace the multiscale analysis. One, therefore, requires a design approach that can incorporate multiple simulations (multi-physics) of varying fidelity, such as FEM and MD, in an iterative model management framework that can significantly reduce design cycle times. In this research, a material design tool based on a variable fidelity model management framework is presented. In the variable fidelity material design tool, complex “high-fidelity” FEM analyses are performed only to guide the analytic “low-fidelity” model toward the optimal material design. The tool is applied to obtain the optimal distribution of a second phase, consisting of silicon carbide (SiC) fibers, in a silicon-nitride (Si3N4) matrix to obtain continuous fiber SiC-Si3N4 ceramic composites (CFCCs) with optimal fracture toughness. Using the variable fidelity material design tool in application to one test problem, a reduction in design cycle time of around 80 percent is achieved as compared to using a conventional design optimization approach that exclusively calls the high fidelity FEM.
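
The core of such a model management framework can be sketched as a trust-region loop in which the cheap model, after an additive first-order correction, stands in for the expensive analysis almost everywhere. The analytic objectives below are placeholders for the FEM analysis and the low-fidelity model, and the correction and acceptance logic follow the standard first-order-consistent trust-region pattern rather than the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

# First-order-corrected variable-fidelity optimization (trust-region model
# management). f_high stands in for the expensive FEM analysis, f_low for
# the cheap analytic model; both are illustrative.
def f_high(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + 0.3 * np.sin(5.0 * x[0])

def f_low(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2

def grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)   # central difference
    return g

x, radius = np.array([3.0, 2.0]), 1.0
for it in range(20):
    fh, gh = f_high(x), grad(f_high, x)   # expensive data at the current iterate
    fl, gl = f_low(x), grad(f_low, x)

    # additive correction makes the cheap model match f_high to first order at x
    def corr(z, x=x, d=fh - fl, g=gh - gl):
        return f_low(z) + d + g @ (z - x)

    step = minimize(corr, x, bounds=[(xi - radius, xi + radius) for xi in x]).x
    rho = (fh - f_high(step)) / max(fh - corr(step), 1e-12)  # actual vs. predicted
    if rho > 0.1:
        x = step                           # accept the step
        if rho > 0.75:
            radius *= 2.0                  # model is trustworthy: expand region
    else:
        radius *= 0.5                      # poor prediction: shrink region
```

Most of the optimization work happens inside the corrected low-fidelity subproblem; the expensive model is consulted once per outer iteration, which is where the design-cycle-time reduction comes from.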


2012 ◽  
Vol 544 ◽  
pp. 49-54 ◽  
Author(s):  
Jun Zheng ◽  
Hao Bo Qiu ◽  
Xiao Lin Zhang

Analytical target cascading (ATC) provides a systematic approach to solving decomposed large-scale systems whose subsystems are individually solvable. However, complex engineering systems usually carry a high computational cost, which limits real-life applications of ATC based on high-fidelity simulation models. To address this problem, this paper aims to develop an efficient approximation-model-building technique under the ATC framework to reduce the computational cost of multidisciplinary design optimization problems based on high-fidelity simulations. In the proposed technique, approximations at the subsystem level are based on variable-fidelity modeling, i.e., the interaction of low- and high-fidelity models. The variable-fidelity modeling combines computationally efficient simplified (low-fidelity) models with expensive detailed (high-fidelity) models. The effectiveness of the method for modeling under the ATC framework using variable-fidelity models is studied. Overall, the results show that the methods introduced in this paper provide an effective way of improving the computational efficiency of the ATC method based on variable-fidelity simulation models.
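
To make the coordination structure concrete, the toy Python sketch below runs an augmented-Lagrangian ATC loop on a two-subsystem problem, min x1^2 + x2^2 subject to r1(x1) = r2(x2), coordinated through a shared target t. The affine responses stand in for subsystem analyses; in the paper's setting each subsystem evaluation would go through a variable-fidelity surrogate instead of an analytic expression, and the augmented-Lagrangian relaxation is one standard ATC coordination choice, not necessarily the authors'.

```python
from scipy.optimize import minimize_scalar

# Toy ATC loop: min x1^2 + x2^2 s.t. r1(x1) = r2(x2), via a shared target t.
r1 = lambda x: x + 2.0          # subsystem 1 response model (stand-in)
r2 = lambda x: 2.0 * x          # subsystem 2 response model (stand-in)

t, w = 0.0, 2.0                 # shared target, fixed penalty weight
lam1 = lam2 = 0.0               # consistency multipliers

for it in range(100):
    # subsystem problems: local objective + relaxed consistency terms
    x1 = minimize_scalar(lambda x: x**2 + lam1*(t - r1(x)) + w*(t - r1(x))**2).x
    x2 = minimize_scalar(lambda x: x**2 + lam2*(t - r2(x)) + w*(t - r2(x))**2).x
    # system level: re-center the target given the subsystem responses
    t = 0.5 * (r1(x1) + r2(x2)) - (lam1 + lam2) / (4.0 * w)
    # multiplier (dual) updates drive the consistency gaps to zero
    lam1 += 2.0 * w * (t - r1(x1))
    lam2 += 2.0 * w * (t - r2(x2))

# approaches the constrained optimum x1 = -0.4, x2 = 0.8, r1 = r2 = 1.6
```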


Aerospace ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 398
Author(s):  
Angelos Kafkas ◽  
Spyridon Kilimtzidis ◽  
Athanasios Kotzakolios ◽  
Vassilis Kostopoulos ◽  
George Lampeas

Efficient optimization is a prerequisite to realizing the full potential of an aeronautical structure. The success of an optimization framework is predominantly influenced by its ability to capture all relevant physics. Furthermore, high computational efficiency allows a greater number of runs during the design optimization process to support decision-making. Efficiency can be improved by selecting highly optimized algorithms and by reducing the dimensionality of the optimization problem, formulating it in terms of a small number of significant parameters. A plethora of variable-fidelity tools, dictated by each design stage, are commonly used, ranging from costly high-fidelity to low-cost, low-fidelity methods. Unfortunately, despite rapid solution times, an optimization framework utilizing low-fidelity tools does not necessarily capture the physical problem accurately. At the same time, high-fidelity solution methods incur a very high computational cost. Aiming to bridge the gap and combine the best of both worlds, a multi-fidelity optimization framework was constructed in this research paper. In our approach, the low-fidelity modules, and especially the equivalent-plate methodology for the structural representation, capable of drastically reducing the associated computational time, form the backbone of the optimization framework, and a MIDACO optimizer is tasked with providing an initial optimized design. The higher-fidelity modules are then employed to explore possible further gains in performance. The developed framework was applied to a benchmark airliner wing. As demonstrated, a reasonable mass reduction was obtained for a current state-of-the-art configuration.
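
The two-stage pattern, global search on the cheap model followed by local refinement on the expensive one, fits in a dozen lines. The objectives below are placeholder penalty-form mass functions, and scipy's differential_evolution stands in for the MIDACO solver used in the paper.

```python
from scipy.optimize import differential_evolution, minimize

# Two-stage multi-fidelity optimization sketch. Both objectives are
# placeholders: a crude wing-mass model with a stress-constraint penalty.
def mass_low(x):    # low-fidelity (e.g., equivalent-plate) model
    skin, spar = x
    stress = 120.0 / (skin * spar)                 # crude stress surrogate
    return 50.0*skin + 30.0*spar + 1e3*max(0.0, stress - 80.0)

def mass_high(x):   # high-fidelity model: extra physics shifts the optimum
    skin, spar = x
    stress = 135.0 / (skin * spar**1.1)
    return 52.0*skin + 31.0*spar + 1e3*max(0.0, stress - 80.0)

bounds = [(0.5, 5.0), (0.5, 5.0)]                  # thicknesses, arbitrary units

# stage 1: global optimization on the cheap model gives the initial design
x0 = differential_evolution(mass_low, bounds, seed=0).x
# stage 2: a few expensive evaluations refine the design locally
x_opt = minimize(mass_high, x0, method="Nelder-Mead", bounds=bounds).x
```

The global stage spends its thousands of evaluations where they are cheap; the expensive model only ever sees a handful of points near the optimum.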


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 24 ◽  
Author(s):  
Sascha Ranftl ◽  
Gian Marco Melito ◽  
Vahid Badeli ◽  
Alice Reinbacher-Köstinger ◽  
Katrin Ellermann ◽  
...  

Aortic dissection is a cardiovascular disease with a disconcertingly high mortality. When it comes to diagnosis, medical imaging techniques such as Computed Tomography, Magnetic Resonance Tomography or Ultrasound certainly do the job, but they also have their shortcomings. Impedance cardiography is a standard method to monitor a patient's heart function and circulatory system by injecting electric currents and measuring voltage drops between electrode pairs attached to the human body. If such measurements could distinguish healthy from dissected aortas, one could improve clinical procedures. Experiments are quite difficult, and thus we investigate the feasibility with finite element simulations beforehand. In these simulations we encounter uncertain input parameters, e.g., the electrical conductivity of blood. Inference on the state of the aorta from impedance measurements defines an inverse problem in which forward uncertainty propagation through the simulation with vanilla Monte Carlo demands a prohibitively large computational effort. To overcome this limitation, we combine two simulations: one of high fidelity and one of low fidelity, with correspondingly high and low computational costs. We use the inexpensive low-fidelity simulation to learn about the expensive high-fidelity simulation. It all boils down to a regression problem, and it reduces the total computational cost after all.
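
The "learn about the expensive simulation from the cheap one" step reduces to ordinary regression, as in the Python sketch below. The scalar impedance models and the conductivity range are placeholders; the structure, a few paired runs to fit a low-to-high map and a full Monte Carlo ensemble pushed through the cheap model only, is the point.

```python
import numpy as np

# Bi-fidelity Monte Carlo sketch: regress high-fidelity outputs on
# low-fidelity outputs. The scalar "simulators" are placeholders for the
# finite element impedance solvers.
rng = np.random.default_rng(1)

def z_low(sigma):   # cheap coarse simulation of an impedance measurement
    return 1.0 / sigma + 0.05

def z_high(sigma):  # expensive fine simulation
    return 1.0 / sigma + 0.02 * np.sin(4.0 * sigma)

sigma_mc = rng.uniform(0.4, 0.9, size=20000)   # uncertain blood conductivity

# a handful of paired runs define the low-to-high regression
sigma_train = np.linspace(0.4, 0.9, 8)
coeffs = np.polyfit(z_low(sigma_train), z_high(sigma_train), deg=2)

# full Monte Carlo at low-fidelity cost only
z_pred = np.polyval(coeffs, z_low(sigma_mc))
print(z_pred.mean(), z_pred.std())
```

Only 8 expensive runs are needed here; the 20,000-sample ensemble never touches the high-fidelity solver.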


2008 ◽  
Vol 130 (9) ◽  
Author(s):  
Gilberto Mejía-Rodríguez ◽  
John E. Renaud ◽  
Vikas Tomar

Research applications involving design tool development for multiphase material design are at an early stage of development. The computational requirements of advanced numerical tools for simulating material behavior, such as the finite element method (FEM) and the molecular dynamics (MD) method, can prohibit direct integration of these tools in a design optimization procedure where multiple iterations are required. One, therefore, requires a design approach that can incorporate multiple simulations (multiphysics) of varying fidelity, such as FEM and MD, in an iterative model management framework that can significantly reduce design cycle times. In this research, a material design tool based on a variable fidelity model management framework is presented. In the variable fidelity material design tool, complex “high-fidelity” FEM analyses are performed only to guide the analytic “low-fidelity” model toward the optimal material design. The tool is applied to obtain the optimal distribution of a second phase, consisting of silicon carbide (SiC) fibers, in a silicon-nitride (Si3N4) matrix to obtain continuous fiber SiC–Si3N4 ceramic composites with optimal fracture toughness. Using the variable fidelity material design tool in application to two test problems, a reduction in design cycle times of between 40% and 80% is achieved as compared to using a conventional design optimization approach that exclusively calls the high-fidelity FEM. The optimal design obtained using the variable fidelity approach is the same as that obtained using the conventional procedure. The variable fidelity material design tool is extensible to multiscale multiphase material design by using MD-based material performance analyses as the high-fidelity analyses in order to guide low-fidelity continuum-level numerical tools such as the FEM or the finite-difference method, with significant savings in computational time.


Author(s):  
Alireza Doostan ◽  
Gianluca Geraci ◽  
Gianluca Iaccarino

This paper presents a bi-fidelity simulation approach to quantify the effect of uncertainty in the thermal boundary condition on the heat transfer in a ribbed channel. A numerical test case is designed in which a random heat flux at the wall of a rectangular channel is applied to mimic the unknown temperature distribution in a realistic application. To predict the temperature distribution and the associated uncertainty over the channel wall, the fluid flow is simulated using 2D periodic, steady Reynolds-Averaged Navier-Stokes (RANS) equations. The goal of this study is then to illustrate that the cost of propagating the heat flux uncertainty may be significantly reduced when two RANS models with different levels of fidelity, one low (cheap to simulate) and one high (expensive to evaluate), are used. The low-fidelity model is employed to learn a reduced basis and an interpolation rule that can be used, along with a small number of high-fidelity model evaluations, to approximate the high-fidelity solution at arbitrary samples of the heat flux. Here, the low- and high-fidelity models are, respectively, the one-equation Spalart-Allmaras and the two-equation shear stress transport k–ω models. To further reduce the computational cost, the Spalart-Allmaras model is simulated on a coarser spatial grid and the non-linear solver is terminated prior to solution convergence. It is illustrated that the proposed bi-fidelity strategy accurately approximates the target high-fidelity solution at randomly selected samples of the uncertain heat flux.
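
The "reduced basis and interpolation rule" machinery is compact enough to sketch: pivoted QR on the low-fidelity snapshots picks the informative heat-flux samples, the high-fidelity model is run only there, and low-fidelity least-squares coefficients are reused on the high-fidelity snapshots. Random rank-r matrices stand in for the RANS temperature fields.

```python
import numpy as np
from scipy.linalg import qr, lstsq

# Bi-fidelity reduced-basis sketch. Random correlated matrices stand in for
# the Spalart-Allmaras (low) and k-omega SST (high) temperature fields.
rng = np.random.default_rng(2)
n_lo, n_hi, n_samples, r = 200, 2000, 400, 8

core = rng.standard_normal((r, n_samples))             # shared parametric structure
U_lo = rng.standard_normal((n_lo, r)) @ core           # low-fidelity snapshots
U_hi = rng.standard_normal((n_hi, r)) @ core           # high-fidelity snapshots

# 1) pivoted QR on the cheap snapshots selects r informative samples
_, _, piv = qr(U_lo, pivoting=True, mode="economic")
idx = piv[:r]

# 2) the expensive model is evaluated only at the selected samples
U_hi_basis = U_hi[:, idx]

# 3) low-fidelity interpolation coefficients are reused at any other sample
def bifi_predict(j):
    c, *_ = lstsq(U_lo[:, idx], U_lo[:, j])            # coefficients from cheap model
    return U_hi_basis @ c

err = np.linalg.norm(bifi_predict(123) - U_hi[:, 123]) / np.linalg.norm(U_hi[:, 123])
print(err)   # near zero here, since both fidelities share the same rank-r structure
```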


2020 ◽  
Vol 143 (2) ◽  
Author(s):  
X. Zhao ◽  
S. Azarm ◽  
B. Balachandran

Abstract Predicting the behavior or response of complicated dynamical systems during their operation may require high-fidelity, computationally costly simulations. Because of the high computational cost, such simulations are generally done offline. The offline simulation data can then be combined with sensor measurement data for online, operational prediction of the system's behavior. In this paper, a generic online data-driven approach is proposed for the prediction of the spatio-temporal behavior of dynamical systems using their simulation data combined with sparse, noisy sensor measurement data. The approach relies on an offline–online decomposition and is based on an integration of dimension reduction, surrogate modeling, and data assimilation techniques. A step-by-step application of the proposed approach is demonstrated by a simple numerical example. The performance of the approach is also evaluated by a case study which involves predicting the aeroelastic response of a joined-wing aircraft with sensors sparsely placed on its wing. Through this case study, it is shown that the results obtained from the proposed spatio-temporal prediction technique have comparable accuracy to those from the high-fidelity simulation, while a significant reduction in computational expense is achieved at the same time. It is also shown that, for the case study, the proposed approach has a prediction accuracy that is relatively robust to the sensors' locations.
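
A minimal version of the offline-online split, with POD for the dimension reduction and regularized least squares playing the role of the data assimilation step, looks as follows. The synthetic snapshot data and the sensor noise level are placeholders for the aeroelastic simulations and wing sensors.

```python
import numpy as np

# Offline-online sketch: offline POD of simulation snapshots; online
# estimation of the modal coefficients from sparse, noisy sensors.
rng = np.random.default_rng(3)
n_dof, n_snap, r, n_sensors = 1000, 150, 6, 12

# --- offline: build the POD basis from high-fidelity snapshots ---
modes = np.linalg.qr(rng.standard_normal((n_dof, r)))[0]       # synthetic truth modes
snapshots = modes @ rng.standard_normal((r, n_snap))           # synthetic training data
Phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]  # POD basis

# --- online: reconstruct the field from sparse, noisy measurements ---
sensor_rows = rng.choice(n_dof, size=n_sensors, replace=False)
true_field = modes @ rng.standard_normal(r)                    # field at an operating point
y = true_field[sensor_rows] + 0.01 * rng.standard_normal(n_sensors)

H = Phi[sensor_rows, :]                                        # measurement operator
# regularized least squares for the modal coefficients
a = np.linalg.solve(H.T @ H + 1e-6 * np.eye(r), H.T @ y)
field_est = Phi @ a

err = np.linalg.norm(field_est - true_field) / np.linalg.norm(true_field)
print(err)   # small, provided the sensors render the modes observable
```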

