A Critical Assessment of Kriging Model Variants for High-Fidelity Uncertainty Quantification in Dynamics of Composite Shells

2016
Vol 24 (3)
pp. 495-518
Author(s):  
T. Mukhopadhyay ◽  
S. Chakraborty ◽  
S. Dey ◽  
S. Adhikari ◽  
R. Chowdhury
2021
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation, and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy incurs a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical because of their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall into three categories: (a) data fits, which construct an explicit mapping (e.g., using polynomials or Gaussian processes) from the system's parameters to the system response of interest; (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics); and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply the projection directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, called rank-2 Galerkin, which converts the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that it is 970 times more efficient than the full-order model while maintaining excellent accuracy in both the mean and the statistics of the field.
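The core idea of the rank-2 formulation can be sketched in a few lines: instead of stepping each Monte Carlo sample's reduced state vector separately (repeated matrix-vector products, memory-bandwidth bound), the samples are stacked as columns of a matrix and advanced with a single matrix-matrix product per step (compute bound). The sketch below is a minimal illustration, assuming for simplicity a single reduced operator shared across samples (the paper's formulation also handles parameter dependence); all sizes and names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, S = 200, 10, 32   # full-order size, ROM dimension, Monte Carlo samples

# Orthonormal ROM basis and a stand-in stable LTI operator
Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
A_hat = Phi.T @ A @ Phi           # reduced k x k Galerkin operator

dt, steps = 1e-3, 100
X0 = rng.standard_normal((k, S))  # one column of reduced state per sample

# Rank-1 formulation: S independent matrix-vector products per time step
X1 = X0.copy()
for _ in range(steps):
    for j in range(S):
        X1[:, j] += dt * (A_hat @ X1[:, j])

# Rank-2 formulation: one matrix-matrix product per time step, which is
# compute bound and maps well onto many-core nodes; the results match
X2 = X0.copy()
for _ in range(steps):
    X2 += dt * (A_hat @ X2)
```

The two loops perform mathematically identical updates; the rank-2 variant simply exposes them to BLAS-3 (GEMM) kernels, which is where the reported speedup over per-vector stepping originates.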


2021
Vol 2 (1)
pp. 44-56
Author(s):  
Maria Avramova ◽  
Agustin Abarca ◽  
Jason Hou ◽  
Kostadin Ivanov

This paper provides a review of current and upcoming innovations in the development, validation, and uncertainty quantification of nuclear reactor multi-physics simulation methods. Multi-physics modelling and simulation (M&S) provides more accurate and realistic predictions of nuclear reactor behavior, including local safety parameters. Multi-physics M&S tools can be subdivided into two groups: traditional multi-physics M&S on the assembly/channel spatial scale (currently used in industry and regulation), and novel high-fidelity multi-physics M&S on the pin (sub-pin)/sub-channel spatial scale. The current trends in reactor design and safety analysis are towards further development, verification, and validation of multi-physics multi-scale M&S combined with uncertainty quantification and propagation. Approaches currently applied for validation of traditional multi-physics M&S are summarized and illustrated using established Nuclear Energy Agency/Organization for Economic Cooperation and Development (NEA/OECD) multi-physics benchmarks. Novel high-fidelity multi-physics M&S allows for insights crucial to resolving industry-challenging, high-impact problems that were previously intractable with traditional tools. Challenges in the validation of novel multi-physics M&S are discussed, along with the need to develop validation benchmarks based on experimental data. Because of their complexity, the novel multi-physics codes are still computationally expensive for routine applications. This motivates using the high-fidelity novel models and codes to inform the low-fidelity traditional ones, leading to improved traditional multi-physics M&S. Uncertainty quantification and propagation across different scales (multi-scale) and multi-physics phenomena are demonstrated using the OECD/NEA Light Water Reactor Uncertainty Analysis in Modelling benchmark framework. Finally, the increasing role of data science and analytics techniques in the development and validation of multi-physics M&S is summarized.


Author(s):  
Matteo Diez ◽  
Riccardo Broglia ◽  
Danilo Durante ◽  
Angelo Olivieri ◽  
Emilio Campana ◽  
...  

Author(s):  
Kai Zhou ◽  
Pei Cao ◽  
Jiong Tang

Uncertainty quantification is an important aspect of structural dynamic analysis. Since practical structures are complex and oftentimes need to be characterized by large-scale finite element models, the component mode synthesis (CMS) method is widely adopted for order-reduced modeling. Even with model order reduction, the computational cost of uncertainty quantification can still be prohibitive. In this research, we utilize a two-level Gaussian process emulation to achieve rapid sampling and response prediction under uncertainty, in which low- and high-fidelity data extracted from the CMS and full-scale finite element models are incorporated in an integral manner. The possible bias of the low-fidelity data is then corrected through the high-fidelity data. To reduce the number of emulation runs, we further employ a Bayesian inference approach to calibrate the order-reduced model in a probabilistic manner, conditioned on multiple predicted response distributions of concern. Case studies are carried out to validate the effectiveness of the proposed methodology.
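The two-level emulation idea can be illustrated with a simple additive multi-fidelity Gaussian process in the spirit of Kennedy-O'Hagan: fit one GP to plentiful low-fidelity (CMS-like) data, then a second GP to the high-fidelity residuals so that the high-fidelity data corrects the low-fidelity bias. The sketch below uses scikit-learn; the analytic stand-in functions, kernel length scales, and sample sizes are illustrative assumptions, not the authors' models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):                     # stand-in for the full-scale FE model
    return np.sin(8.0 * x) * x

def f_low(x):                      # stand-in for the cheaper CMS model (biased)
    return f_high(x) + 0.3 * (x - 0.5)

X_lo = np.linspace(0.0, 1.0, 25)[:, None]   # many cheap low-fidelity runs
X_hi = np.linspace(0.0, 1.0, 6)[:, None]    # few expensive high-fidelity runs

# Level 1: emulate the low-fidelity model
gp_lo = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_lo, f_low(X_lo[:, 0]))

# Level 2: emulate the high-minus-low discrepancy (additive bias correction)
resid = f_high(X_hi[:, 0]) - gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_hi, resid)

X_test = np.linspace(0.0, 1.0, 50)[:, None]
pred = gp_lo.predict(X_test) + gp_delta.predict(X_test)

y_true = f_high(X_test[:, 0])
err_mf = np.max(np.abs(pred - y_true))                    # two-level emulator
err_lo = np.max(np.abs(gp_lo.predict(X_test) - y_true))   # low-fidelity alone
```

Because the discrepancy is typically smoother than the response itself, a handful of high-fidelity runs suffices to correct the bias, which is the economic argument behind the two-level scheme.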


Author(s):  
Ken Nahshon ◽  
Nicholas Reynolds ◽  
Michael D. Shields

Uncertainty quantification (UQ) and propagation are critical to the computational assessment of structural components and systems. In this work, we discuss the practical challenges of implementing UQ for high-dimensional computational structural investigations, identifying four major challenges: (1) computational cost; (2) integration of engineering expertise; (3) quantification of epistemic and model-form uncertainties; and (4) the need for V&V, standards, and automation. To address these challenges, we propose an approach that is straightforward for analysts to implement, mathematically rigorous, exploits analysts' subject-matter expertise, and is readily automated. The proposed approach uses the Latinized partially stratified sampling (LPSS) method to conduct small-sample Monte Carlo simulations. A simplified model is employed, and analyst expertise is leveraged to cheaply investigate the best LPSS design for the structural model. Convergence results from the simplified model are then used to design an efficient LPSS-based uncertainty study for the high-fidelity computational model investigation. The methodology is demonstrated by investigating the buckling strength of a typical marine stiffened plate structure with material variability and geometric imperfections.
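LPSS combines Latin hypercube sampling with partially stratified sampling; as a hedged illustration of why such stratified small-sample designs pay off, the sketch below compares plain Monte Carlo against ordinary Latin hypercube sampling (via SciPy's `qmc` module) on a stand-in response. It demonstrates the variance-reduction principle only, not the LPSS algorithm itself; the response function and sample sizes are invented for the example.

```python
import numpy as np
from scipy.stats import qmc

def g(u):
    # Stand-in response surface (e.g., a cheap proxy for buckling strength)
    return np.sin(np.pi * u[:, 0]) + 0.5 * u[:, 1] ** 2

n, reps = 16, 200                  # small-sample design, repeated studies
rng = np.random.default_rng(2)

mc_means, lhs_means = [], []
for r in range(reps):
    mc_means.append(g(rng.random((n, 2))).mean())            # plain Monte Carlo
    lhs_means.append(g(qmc.LatinHypercube(d=2, seed=r).random(n)).mean())

# Stratification sharply reduces the variance of the small-sample estimator
var_mc, var_lhs = np.var(mc_means), np.var(lhs_means)
```

For responses dominated by main effects, stratified designs converge much faster than plain Monte Carlo at the same sample count, which is precisely what makes small-sample studies on expensive structural models feasible.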


Author(s):  
Yiming Liu ◽  
Ruihong Qin ◽  
Yaping Ju ◽  
Stephen Spence ◽  
Chuhua Zhang

Centrifugal impellers are inevitably subject to manufacturing uncertainties during the machining process. The resulting geometric variations lead to impeller performance degradation, and for a transonic impeller the complexity of the flow field may amplify this deterioration. It is therefore important to have a clear understanding of the effects of manufacturing uncertainties. However, relevant studies are rare and lack consideration of realistic manufacturing uncertainties. Furthermore, the high dimensionality introduced by a high-fidelity uncertainty model makes computational fluid dynamics (CFD) evaluations unaffordable, so the development of efficient, high-fidelity methods for high-dimensional uncertainty quantification (UQ) problems is urgently needed. To address these limitations, a group of 92 machined centrifugal impellers was scanned, and a statistical model of realistic manufacturing uncertainties was built. Combining CFD simulations with Non-Intrusive Polynomial Chaos (NIPC) methods, the influence of manufacturing uncertainties on the polytropic efficiency and flow field of a transonic centrifugal impeller was quantified. To achieve a good trade-off between computational efficiency and accuracy for the UQ, a Dual Dimensionality Reduction (DDR) method was proposed, by which the dimensionality of the spatially varying input, i.e., the manufacturing error field, was reduced to 3. The results showed that the manufacturing errors of the machined impellers follow Gaussian distributions with a mean error of zero and a sharply increased standard deviation near the blade leading edge. The polytropic efficiency of the examined impeller exhibited a negatively skewed distribution, and the mean efficiency was reduced by 0.34%. The flow mechanisms behind the performance degradation mainly lie in increased shock losses near the blade tip and separation losses near the hub. The present study provides a fundamental contribution to the uncertainty quantification of turbomachinery and establishes a theoretical foundation for the development of robust centrifugal impellers.
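The paper's DDR method is not reproduced here, but its generic first step, compressing a spatially varying error field into a handful of modes, can be sketched with a Karhunen-Loeve/PCA decomposition of synthetic scan data. Everything below (grid size, correlation length, the Gaussian-field generator) is an illustrative assumption; only the sample count of 92 impellers and the target of 3 retained dimensions come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n_scans = 120, 92            # surface points per blade; scanned impellers

# Synthetic zero-mean Gaussian error fields with smooth spatial correlation
s = np.linspace(0.0, 1.0, m)
C = np.exp(-((s[:, None] - s[None, :]) / 0.3) ** 2)   # squared-exp covariance
L = np.linalg.cholesky(C + 1e-8 * np.eye(m))          # jitter for stability
E = L @ rng.standard_normal((m, n_scans))             # columns = error fields

# Karhunen-Loeve / PCA: keep the 3 dominant modes of the error-field ensemble
U, sing, _ = np.linalg.svd(E, full_matrices=False)
energy = np.cumsum(sing ** 2) / np.sum(sing ** 2)     # cumulative energy
xi = U[:, :3].T @ E      # 3 reduced coordinates per scanned impeller
E_rec = U[:, :3] @ xi    # reconstruction from the 3 retained modes
```

Once the error field is expressed in a few reduced coordinates, non-intrusive polynomial chaos over those coordinates becomes tractable, which is the trade-off between UQ accuracy and CFD cost that the abstract describes.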

