Efficient Uncertainty Propagation for High-Fidelity Simulations With Large Parameter Spaces: Application to Stiffened Plate Buckling

Author(s):  
Ken Nahshon ◽  
Nicholas Reynolds ◽  
Michael D. Shields

Uncertainty quantification (UQ) and propagation are critical to the computational assessment of structural components and systems. In this work, we discuss the practical challenges of implementing uncertainty quantification for high-dimensional computational structural investigations, specifically identifying four major challenges: (1) computational cost; (2) integration of engineering expertise; (3) quantification of epistemic and model-form uncertainties; and (4) the need for V&V, standards, and automation. To address these challenges, we propose an approach that is straightforward for analysts to implement, mathematically rigorous, readily automated, and able to exploit analysts' subject-matter expertise. The proposed approach utilizes the Latinized partially stratified sampling (LPSS) method to conduct small-sample Monte Carlo simulations. A simplified model is employed and analyst expertise is leveraged to cheaply investigate the best LPSS design for the structural model. Convergence results from the simplified model are then used to design an efficient LPSS-based uncertainty study for the high-fidelity computational model investigation. The methodology is carried out to investigate the buckling strength of a typical marine stiffened plate structure with material variability and geometric imperfections.
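As an illustration of the stratification ingredient behind LPSS, the sketch below implements plain Latin hypercube sampling in NumPy. The full LPSS method additionally partially stratifies groups of variables; the sizes here are illustrative, not taken from the paper.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube design on [0, 1)^n_dims: in every dimension, each of the
    n_samples equal-probability strata receives exactly one point."""
    rng = np.random.default_rng(rng)
    # One random point per stratum, per dimension, then shuffle each column
    # independently to decouple the dimensions
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    return u

samples = latin_hypercube(8, 3, rng=0)
```

Each column covers all eight strata exactly once, which is what gives Latin hypercube designs lower variance than plain random sampling for small sample sizes.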

2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy carries a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly-resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories, namely (a) data fits, which construct an explicit mapping (e.g., using polynomials, Gaussian processes) from the system's parameters to the system response of interest; (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics); and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply the projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, called rank-2 Galerkin, which converts the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that it is 970 times more efficient than the full-order model while maintaining excellent accuracy in both the mean and the statistics of the field.
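The rank-1 versus rank-2 distinction can be seen in a toy forward-Euler step on a reduced LTI system: per-sample matrix-vector products become one matrix-matrix product when all sample states are stacked as columns. This is only a minimal NumPy analogue of the idea, not the paper's axisymmetric solver; the operator and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_samples = 50, 64                    # reduced dimension and sample count (illustrative)
A = 0.01 * rng.standard_normal((k, k))   # stand-in for the reduced LTI operator
X = rng.standard_normal((k, n_samples))  # reduced states, one column per Monte Carlo sample
dt = 1e-2

# Rank-1 view: advance each sample with its own mat-vec (memory-bandwidth bound)
X_rank1 = np.column_stack([x + dt * (A @ x) for x in X.T])

# Rank-2 view: stack all sample states into one matrix and advance them with a
# single mat-mat product (compute bound, BLAS-3 friendly)
X_rank2 = X + dt * (A @ X)
```

The two give identical results; the payoff is that the rank-2 form maps onto dense matrix-matrix kernels that saturate modern many-core nodes.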


Author(s):  
Kai Zhou ◽  
Pei Cao ◽  
Jiong Tang

Uncertainty quantification is an important aspect of structural dynamic analysis. Since practical structures are complex and oftentimes need to be characterized by large-scale finite element models, the component mode synthesis (CMS) method is widely adopted for order-reduced modeling. Even with model order reduction, the computational cost of uncertainty quantification can still be prohibitive. In this research, we utilize two-level Gaussian process emulation to achieve rapid sampling and response prediction under uncertainty, in which low- and high-fidelity data extracted from the CMS and full-scale finite element models are incorporated in an integral manner. The possible bias of the low-fidelity data is then corrected through the high-fidelity data. To reduce the number of emulation runs, we further employ a Bayesian inference approach to calibrate the order-reduced model in a probabilistic manner, conditioned on multiple predicted response distributions of concern. Case studies are carried out to validate the effectiveness of the proposed methodology.
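A minimal sketch of the two-level idea, in the spirit of low-fidelity-plus-discrepancy multi-fidelity emulation: a Gaussian process trained on many cheap runs is corrected by a second GP trained on the few high-fidelity residuals. The 1D functions, kernel, and hyperparameters below are illustrative assumptions, not the paper's CMS/FE models.

```python
import numpy as np

def gp_fit_predict(X, y, Xs, length=0.2, noise=1e-6):
    """Minimal zero-mean Gaussian process regression with an RBF kernel (1D inputs)."""
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

# Hypothetical responses: a cheap, biased model and a costly accurate one
f_lo = lambda x: np.sin(2 * np.pi * x)            # stand-in for the CMS model
f_hi = lambda x: np.sin(2 * np.pi * x) + 0.3 * x  # stand-in for the full FE model

x_lo = np.linspace(0, 1, 25)   # many cheap runs
x_hi = np.linspace(0, 1, 5)    # few expensive runs
xs = np.linspace(0, 1, 101)    # prediction grid

# Level 1: emulate the low-fidelity model
lo_at_hi = gp_fit_predict(x_lo, f_lo(x_lo), x_hi)
lo_at_xs = gp_fit_predict(x_lo, f_lo(x_lo), xs)

# Level 2: emulate the bias observed at the high-fidelity points, then correct
delta = gp_fit_predict(x_hi, f_hi(x_hi) - lo_at_hi, xs)
pred = lo_at_xs + delta

err = np.max(np.abs(pred - f_hi(xs)))
```

Only five expensive evaluations are used, yet the corrected emulator tracks the high-fidelity response closely because the bias itself is smooth and cheap to learn.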


2020 ◽  
Author(s):  
Ali Raza ◽  
Arni Sturluson ◽  
Cory Simon ◽  
Xiaoli Fern

Virtual screenings can accelerate and reduce the cost of discovering metal-organic frameworks (MOFs) for their applications in gas storage, separation, and sensing. In molecular simulations of gas adsorption/diffusion in MOFs, the adsorbate-MOF electrostatic interaction is typically modeled by placing partial point charges on the atoms of the MOF. For the virtual screening of large libraries of MOFs, it is critical to develop computationally inexpensive methods to assign atomic partial charges to MOFs that accurately reproduce the electrostatic potential in their pores. Herein, we design and train a message passing neural network (MPNN) to predict the atomic partial charges on MOFs under a charge neutral constraint. A set of ca. 2,250 MOFs labeled with high-fidelity partial charges, derived from periodic electronic structure calculations, serves as training examples. In an end-to-end manner, from charge-labeled crystal graphs representing MOFs, our MPNN machine-learns features of the local bonding environments of the atoms and learns to predict partial atomic charges from these features. Our trained MPNN assigns high-fidelity partial point charges to MOFs with orders of magnitude lower computational cost than electronic structure calculations. To enhance the accuracy of virtual screenings of large libraries of MOFs for their adsorption-based applications, we make our trained MPNN model and MPNN-charge-assigned computation-ready, experimental MOF structures publicly available.
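One simple way to honor a charge-neutral constraint at prediction time is to subtract the net excess charge evenly across the atoms; the helper below is a hypothetical illustration of that projection, not necessarily the exact correction scheme the MPNN uses.

```python
import numpy as np

def enforce_charge_neutrality(raw_charges):
    """Project raw per-atom charge predictions onto the sum-to-zero constraint
    by distributing the net excess charge evenly over the atoms (a simple,
    assumed correction; weighted redistributions are also possible)."""
    q = np.asarray(raw_charges, dtype=float)
    return q - q.sum() / q.size

# Hypothetical raw predictions for a 4-atom fragment; net charge is +0.06
q = enforce_charge_neutrality([0.31, -0.12, 0.05, -0.18])
```

After the projection the charges sum exactly to zero while each atom's value shifts by the same small amount (0.015 here).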


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 87
Author(s):  
Yongqiang Wang ◽  
Ye Liu ◽  
Xiaoyi Ma

The numerical simulation of the optimal design of gravity dams is computationally expensive. Therefore, a new optimization procedure is presented in this study to reduce the computational cost of determining the optimal shape of a gravity dam. Optimization was performed using a combination of the genetic algorithm (GA) and an updated Kriging surrogate model (UKSM). First, a Kriging surrogate model (KSM) was constructed with a small sample set. Second, a minimizing-the-predictor strategy was used to add samples in the region of interest, updating the KSM in each cycle until the optimization process converged. Third, an existing gravity dam was used to demonstrate the effectiveness of the GA–UKSM. The solution obtained with the GA–UKSM was compared with that obtained using the GA–KSM. The results revealed that the GA–UKSM required only 7.53% of the total number of numerical simulations required by the GA–KSM to achieve similar optimization results. Thus, the GA–UKSM significantly improves computational efficiency. The method adopted in this study can be used as a reference for the design optimization of gravity dams.
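The updating cycle can be sketched as: fit a surrogate to the current design, let the optimizer minimize the predictor, evaluate the true model at that point, and refit. Below, an interpolating RBF model stands in for the Kriging predictor and a random candidate search stands in for the GA; the objective function is a made-up stand-in for the dam simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(12 * x)  # stand-in for the costly simulation

def rbf_surrogate(X, y, length=0.15):
    """Interpolating RBF model used here as a stand-in for the Kriging predictor."""
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / length**2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
    return lambda xs: np.exp(-0.5 * (xs[:, None] - X[None, :]) ** 2 / length**2) @ w

X = np.linspace(0, 1, 5)       # small initial design
y = f(X)
for _ in range(6):             # updating cycles
    model = rbf_surrogate(X, y)
    cand = rng.random(200)     # stand-in for the GA's population search
    x_new = cand[np.argmin(model(cand))]             # minimize the predictor
    X, y = np.append(X, x_new), np.append(y, f(x_new))  # evaluate truth, update model

best = X[np.argmin(y)]
```

Each cycle spends only one true-model evaluation, which is the source of the large savings reported for the GA–UKSM.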


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies, namely a three-finger soft gripper actuated using a PVDF-based terpolymer, and a 3D multifield example actuated using both the terpolymer and a magneto-active elastomer (MAE). The key steps are elaborated in detail, including the variable filter, the metrics to select the best design, the determination of design domains, and the material conversion methods from low- to high-fidelity models. In this paper, analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work.
Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline design were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, compared with over three and two months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.


Energies ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 118
Author(s):  
Feng Zhu ◽  
Runzhou Zhou ◽  
David J. Sypeck

In this work, a computational study was carried out to simulate crushing tests on lithium-ion vehicle battery modules. The tests were performed on commercial battery modules subject to wedge cutting at low speeds. Based on the loading and boundary conditions in the tests, finite element (FE) models were developed using the explicit FEA code LS-DYNA. The model predictions showed good agreement with the tests in terms of structural failure modes and force–displacement responses at both the cell and module levels. The model was extended to study additional loading conditions, such as indentation by a cylinder and by a rectangular block. The effect of other module components, such as the cover and cooling plates, was analyzed, and the results have the potential to improve battery module safety design. To reduce the computational cost of the detailed FE model, a simplified model was developed that represents the battery module with a homogeneous material law. All three scenarios were then simulated, and the results show that this simplified model can reasonably predict the short circuit initiation of the battery module.


Author(s):  
Alessandra Cuneo ◽  
Alberto Traverso ◽  
Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic. The work presented here focuses on aleatory uncertainty, which causes natural, unpredictable and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. It is therefore necessary to have a robust tool that can perform uncertainty propagation with as few model evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four different methods in order to demonstrate the strengths and weaknesses of each approach. The first method is Monte Carlo simulation, a sampling method that can give high accuracy but requires relatively large computational effort. The second is Polynomial Chaos, an approximation method in which the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third is the Mid-range Approximation Method, which assembles multiple meta-models into one model to perform optimization under uncertainty. The fourth is the application of the first two methods not directly to the model but to a response surface representing it, in order to decrease the computational cost. All these methods have been applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimised design problems with and without stochastic design parameters, were assessed.
Polynomial Chaos emerges as the most promising methodology, and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
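To make the Monte Carlo versus Polynomial Chaos trade-off concrete, the sketch below builds a 1D probabilists'-Hermite PC expansion of a made-up response of a standard-normal input: seven model evaluations (quadrature nodes) recover the mean and variance that plain Monte Carlo needs many thousands of samples to match. The response function is an illustrative assumption, not one of the paper's test cases.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Response of one standard-normal input Z (illustrative stand-in for a model)
g = lambda x: np.exp(0.3 * x)

# Polynomial Chaos: project g onto probabilists' Hermite polynomials He_k using
# Gauss-Hermite quadrature -- only order + 1 model evaluations
order = 6
nodes, weights = hermegauss(order + 1)
weights = weights / np.sqrt(2 * np.pi)       # normalize to the N(0, 1) density
coeffs = np.array([
    np.sum(weights * g(nodes) * hermeval(nodes, np.eye(order + 1)[k]))
    / math.factorial(k)                      # ||He_k||^2 = k! under N(0, 1)
    for k in range(order + 1)
])
pc_mean = coeffs[0]
pc_var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

# Plain Monte Carlo: far more evaluations for comparable accuracy
z = np.random.default_rng(0).standard_normal(100_000)
mc_mean = g(z).mean()

exact_mean = np.exp(0.045)                   # E[exp(0.3 Z)] = exp(0.3**2 / 2)
exact_var = np.exp(0.09) * (np.exp(0.09) - 1)
```

The spectral convergence of the PC coefficients for smooth responses is what makes Polynomial Chaos attractive when each model evaluation is expensive.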


Author(s):  
Sajjad Yousefian ◽  
Gilles Bourque ◽  
Rory F. D. Monaghan

There is a need for fast and reliable emissions prediction tools in the design, development and performance analysis of gas turbine combustion systems to predict emissions such as NOx and CO. Hybrid emissions prediction tools are defined as modelling approaches that (1) use computational fluid dynamics (CFD) or component modelling methods to generate flow field information, and (2) integrate it with detailed chemical kinetic modelling of emissions using chemical reactor network (CRN) techniques. This paper presents a review and comparison of hybrid emissions prediction tools and uncertainty quantification (UQ) methods for gas turbine combustion systems. In the first part of this study, CRN solvers are compared on the basis of selected attributes that facilitate flexibility of network modelling, implementation of large chemical kinetic mechanisms and automatic construction of CRNs. The second part deals with UQ, which is becoming an important aspect of the development and use of computational tools in gas turbine combustion chamber design and analysis; the use of UQ techniques as part of the generalized modelling approach is therefore important for developing a UQ-enabled hybrid emissions prediction tool. UQ techniques are compared on the basis of the number of evaluations, and the corresponding computational cost, required to achieve desired accuracy levels, as well as their ability to treat deterministic emissions models as black boxes that do not require modification. Recommendations for the development of UQ-enabled emissions prediction tools are made.


2019 ◽  
Vol 141 (6) ◽  
Author(s):  
M. Giselle Fernández-Godino ◽  
S. Balachandar ◽  
Raphael T. Haftka

When simulations are expensive and multiple realizations are necessary, as is the case in uncertainty propagation, statistical inference, and optimization, surrogate models can achieve accurate predictions at low computational cost. In this paper, we explore options for improving the accuracy of a surrogate when the modeled phenomenon presents symmetries. These symmetries allow us to obtain information for free and, therefore, the possibility of more accurate predictions. We present an analytical example along with a physical example that has parametric symmetries. Although imposing parametric symmetries in surrogate models seems to be a trivial matter, there is not a single way to do it and, furthermore, the achieved accuracy might vary. We present four different ways of using symmetry in surrogate models. Three of them are straightforward, but the fourth is original and based on an optimization of the subset of points used. The performance of the options was compared with 100 random designs of experiments (DoEs) where symmetries were not imposed. We found that each of the options to include symmetries performed best in one or more of the studied cases and that, in all cases, the errors obtained by imposing symmetries were substantially smaller than the worst cases among the 100. We explore the options for using symmetries in two surrogates that present different challenges and opportunities: Kriging and linear regression. Kriging is often used as a black box; therefore, we consider approaches that include the symmetries without changes in the main code. On the other hand, since linear regression, owing to its simplicity, is often built by the user, we also consider approaches that modify the linear regression basis functions to impose the symmetries.
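Two of the simpler routes for imposing a swap symmetry can be sketched as follows: augmenting the design with symmetric images (usable with a black-box Kriging code) and building symmetry-invariant regression basis functions. The test function and bases below are illustrative choices, not those of the paper.

```python
import numpy as np

# Suppose f is known to be symmetric under swapping its two inputs
f = lambda X: np.sin(X[:, 0] + X[:, 1]) + X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X = rng.random((20, 2))
y = f(X)

# Black-box-friendly option: augment the design with the swapped copies, so any
# surrogate trained on it sees the symmetry at the data level
X_aug = np.vstack([X, X[:, ::-1]])
y_aug = np.concatenate([y, y])

# Linear-regression option: use symmetry-invariant basis functions directly,
# e.g. the elementary symmetric polynomials s1 = x1 + x2 and s2 = x1 * x2
def sym_basis(X):
    s1, s2 = X[:, 0] + X[:, 1], X[:, 0] * X[:, 1]
    return np.column_stack([np.ones_like(s1), s1, s2, s1**2])

beta, *_ = np.linalg.lstsq(sym_basis(X), y, rcond=None)
pred = lambda X: sym_basis(X) @ beta

Xt = rng.random((5, 2))  # test points for checking the symmetry
```

With invariant bases the symmetry holds exactly by construction, whereas data augmentation only encourages it; that difference is one reason the achieved accuracy varies across the options.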


2020 ◽  
Author(s):  
Shine Win Naung ◽  
Mohammad Rahmati ◽  
Hamed Farokhi

The high-fidelity computational fluid dynamics (CFD) simulation of a complete wind turbine model usually requires significant computational resources. Even more resources are required if the fluid–structure interactions between the blades and the flow are considered, which has been a major challenge in the industry. The aeromechanical analysis of a complete wind turbine model using a high-fidelity CFD method is discussed in this paper. The distinctiveness of this paper is the application of the nonlinear frequency domain solution method to analyse the forced response and flutter instability of the blade, as well as to investigate the unsteady flow field across the wind turbine rotor and the tower. This method also enables the aeromechanical simulation of wind turbines for various inter-blade phase angles in combination with a phase-shift solution method. Extensive validations of the nonlinear frequency domain solution method against the conventional time domain solution method reveal that the proposed frequency domain method can reduce the computational cost by one to two orders of magnitude.
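The cost argument for frequency-domain solution can be seen even in a scalar linear analogue: a periodically forced system is solved with one complex algebraic equation per retained harmonic, where time marching needs tens of thousands of steps just to reach the periodic state. This is only a single-harmonic linear toy, not the paper's nonlinear multi-harmonic method; the coefficients are made up.

```python
import numpy as np

# Periodically forced linear system x' + a x = cos(w t)
a, w = 1.0, 2.0

# Frequency domain: substitute x = Re[X exp(i w t)], giving (i w + a) X = 1,
# i.e. one complex algebraic solve instead of time marching
X = 1.0 / (1j * w + a)

# Time domain: march with explicit Euler until the transient has decayed
dt, x, t = 1e-3, 0.0, 0.0
for _ in range(40_000):
    x += dt * (-a * x + np.cos(w * t))
    t += dt

x_freq = (X * np.exp(1j * w * t)).real  # frequency-domain solution at the same time
```

After 40,000 marching steps the two answers agree, which is exactly the ratio of work that frequency-domain methods avoid for time-periodic problems.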

