ASME 2020 Verification and Validation Symposium
Latest Publications

Total documents: 14 (five years: 14)
H-index: 0 (five years: 0)
Published by: American Society of Mechanical Engineers
ISBN: 9780791883594

Author(s): Kevin Irick, Nima Fathi

Abstract The complexity of conductive heat transfer in a structure increases with heterogeneity (e.g., multi-component solid-phase systems with internal heat generation). Any discontinuity in material properties, especially thermal conductivity, warrants a thorough analysis of the thermal behavior of the system of interest. Heterogeneous thermal conditions are crucial in nuclear fuel assemblies, where thermal behavior is governed significantly by conditions at both the system and component levels. A variety of materials have been used as nuclear fuels, the most conventional of which is uranium dioxide, UO2. UO2 has satisfactory chemical and irradiation tolerances in thermal reactors, but the low thermal conductivity of porous UO2 can prove challenging. Therefore, the feasibility of enhancing the thermal conductivity of oxide fuels by adding a high-conductivity secondary solid component remains an important topic of investigation. Long-term, stable development of clean nuclear energy will depend on research and development of innovative reactor designs and fuel systems. A better understanding of the thermal response of a unit cell representing a fuel matrix cell would help develop the next generation of nuclear fuel and reveal potential performance enhancements. The aim of this article is to assess the response of a high-fidelity computational model of heterogeneous materials with heat generation in circular fillers. Two-dimensional, steady-state systems were defined with a circular, heat-generating filler centered in a unit-cell domain. A Fortran-based finite element method (FEM) code was used to solve the heat equation on an unstructured triangular mesh. This paper presents a study of the effects of a heat-generating filler material's relative size and thermal conductivity on the effective thermal conductance, Geff, within a heterogeneous material. Code verification using the method of manufactured solutions (MMS) showed a second-order accurate numerical implementation. Solution verification was performed using a global deviation grid convergence index (GCI) method to assess solution convergence and estimate the numerical uncertainty, Unum. Trend results are presented, showing the variable response of Geff to filler size and thermal conductivity.
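As a concrete illustration of the code-verification step, the sketch below derives an MMS source term for the 2D steady heat equation using SymPy. The manufactured field and all names are illustrative assumptions, not the authors' Fortran FEM implementation.

```python
# Sketch of the method of manufactured solutions (MMS) for the 2D steady
# heat equation -k * laplacian(T) = q. The manufactured field T_m is an
# assumed choice for illustration.
import sympy as sp

x, y, k = sp.symbols("x y k")

# 1. Pick a smooth manufactured temperature field T_m(x, y).
T_m = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)

# 2. Substitute T_m into the governing equation to derive the source term
#    q(x, y) that makes T_m an exact solution.
q = -k * (sp.diff(T_m, x, 2) + sp.diff(T_m, y, 2))
print(sp.simplify(q))   # 2*pi**2*k*sin(pi*x)*sin(pi*y)

# 3. Run the solver with source q and boundary data taken from T_m, then
#    compare the discrete solution against T_m on refined meshes.
#    Second-order accuracy is confirmed if the error norm drops by ~4x
#    for each halving of the mesh size.
```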


Author(s): Kevin W. Irick, Jeff Engerer, Blake Lance, Scott A. Roberts, Ben Schroeder

Abstract Empirically based correlations are commonly used in modeling and simulation but rarely have rigorous uncertainty quantification that captures the nature of the underlying data. In many applications, a mathematical description of a parameter's response to an input stimulus is unknown, unable to be measured, or both. Likewise, the data used to observe a parameter response are often noisy, and correlations are derived to approximate the bulk response. Practitioners frequently treat the chosen correlation, sometimes referred to as the "surrogate" or "reduced-order" model of the response, as a fixed mathematical description of the relationship between input and output. This assumption, as with any model, is incorrect to some degree, and the uncertainty in the correlation can have significant impacts on system responses. Thus, proper treatment of correlation uncertainty is necessary. In this paper, a method is proposed for high-level abstract sampling of uncertain data correlations. Whereas scalar values are routinely assigned uncertainty characterizations and sampled directly, sampling functional uncertainty is not as straightforward. A systematic approach for sampling univariable uncertain correlations was developed to enable more rigorous uncertainty analyses and more reliable sampling of the correlation space. The procedure implements pseudo-random sampling of a correlation over a bounded input range so as to maintain the correlation form, respect variable uncertainty across the range, and ensure continuity of the function with respect to the input variable.
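One minimal way to realize such a sampling scheme is sketched below: smooth random perturbations, scaled by a pointwise uncertainty band, are added to a nominal correlation so each sample stays continuous and respects the stated uncertainty across the bounded input range. The power-law correlation, band width, and Fourier-mode construction are assumptions for illustration, not the paper's specific procedure.

```python
# Minimal sketch of sampling a univariable uncertain correlation while
# preserving its form and continuity. Illustrative approach only.
import numpy as np

rng = np.random.default_rng(42)

def nominal(x):
    """Nominal correlation, e.g., a fitted power law (assumed form)."""
    return 0.023 * x**0.8

def band(x):
    """Assumed one-sigma uncertainty band around the correlation."""
    return 0.05 * nominal(x)

def sample_correlation(x, n_modes=4):
    """Return one continuous realization of the uncertain correlation.

    A low-order Fourier series with random coefficients gives a smooth
    perturbation; scaling by band(x) respects variable uncertainty
    across the bounded input range.
    """
    x01 = (x - x.min()) / (x.max() - x.min())          # map to [0, 1]
    coeffs = rng.standard_normal(n_modes) / np.sqrt(n_modes)
    perturb = sum(c * np.sin((i + 1) * np.pi * x01)
                  for i, c in enumerate(coeffs))
    return nominal(x) + band(x) * perturb

x = np.linspace(1e3, 1e5, 200)        # bounded input range (e.g., Re)
samples = np.array([sample_correlation(x) for _ in range(100)])
print(samples.shape)                   # (100, 200): 100 smooth realizations
```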


Author(s): Tyler J. Remedes, Scott D. Ramsey, Joseph H. Schmidt, James Baciak

Abstract In the past, when faced with an intractable problem, scientists made tremendous efforts to simplify the problem while preserving its fundamental physics; solutions to the simplified models provided insight into the original problem. Today, however, the affordability of high-performance computing has inverted the process for analyzing complex problems. In this paradigm, results from detailed computational scenarios can be better assessed by "building down" the complex model through simple models rooted in the fundamental or essential phenomenology. This work demonstrates how the analysis of the neutron flux spatial distribution within a simulated Holtec International HI-STORM 100 spent fuel cask is enhanced through reduced-complexity analytic and computational modeling. The process involves identifying features in the neutron flux spatial distribution and determining the cause of each using a reduced-complexity computational and/or analytic model. Ultimately, this analysis process builds confidence in the accuracy of the original simulation result.


Author(s): Shaun Eshraghi, Michael Carolan, Benjamin Perlman, Francisco González

Abstract The U.S. Department of Transportation's Federal Railroad Administration (FRA) has sponsored a series of full-scale dynamic shell impact tests on railroad tank cars. For each shell impact test, a pre-test finite element (FE) model is created to predict the overall force-time or force-displacement histories of the impactor, puncture/non-puncture outcomes of the impacted tank shell, global motions of the tank car, internal pressures within the tank, and the energy absorbed by the tank during the impact. While qualitative comparisons (e.g., the shape of the indentation) and quantitative comparisons (e.g., peak impact forces) have been made between tests and simulations, there are currently no standards or guidelines on how to compare simulation results with test results, or on what measurable level of agreement would constitute an acceptable demonstration of model validation. If FE analysis is to be used without companion full-scale shell impact testing for future tank car development, a framework for model validation, including well-defined criteria for comparison, should be developed or adopted. One challenge in developing validation criteria and procedures for tank car shell puncture is the number of complex behaviors encountered in this problem and the variety of approaches that could be used to simulate them. The FE models used to simulate tank car shell impacts include several complex behaviors that increase the uncertainty in simulation results: dynamic impacts, nonlinear steel material behavior, two-phase (water and air) fluid-structure interaction, and contact between rigid and deformable bodies. Approaches to model validation documented in other areas of transportation are applied here to railroad tank car dynamic shell impact FE simulation results. This work compares and contrasts two model validation programs: the Roadside Safety Verification and Validation Program (RSVVP) and Correlation and Analysis Plus (CORA). RSVVP and CORA are used to apply the validation metrics and ratings specified by National Cooperative Highway Research Program Project 22-24 (NCHRP 22-24) and ISO/TS 18571:2014, respectively. The validation methods are applied to recently completed, FRA-sponsored shell impact tests on two different types of railroad tank cars. Additionally, this paper discusses model validation difficulties unique to dynamic impacts involving puncture.
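For concreteness, the sketch below computes the Sprague & Geers magnitude/phase metrics, one of the time-history comparison metric families RSVVP applies under NCHRP 22-24. The force histories are notional placeholders, and both RSVVP and CORA layer additional metrics and acceptance ratings on top of comparisons like this.

```python
# Hedged sketch of a Sprague & Geers time-history comparison between a
# measured and a computed impact-force history.
import numpy as np

def sprague_geers(measured, computed):
    """Return (magnitude, phase, combined) Sprague & Geers metrics.

    For uniformly sampled histories the time-step factor cancels in the
    integral ratios, so plain dot products suffice.
    """
    mm = np.dot(measured, measured)
    cc = np.dot(computed, computed)
    mc = np.dot(measured, computed)
    magnitude = np.sqrt(cc / mm) - 1.0
    phase = np.arccos(np.clip(mc / np.sqrt(mm * cc), -1.0, 1.0)) / np.pi
    return magnitude, phase, np.hypot(magnitude, phase)

t = np.linspace(0.0, 0.1, 1000)                   # s
test = 1.00e6 * np.sin(2 * np.pi * 20 * t) ** 2   # notional measured force, N
sim = 0.95e6 * np.sin(2 * np.pi * 21 * t) ** 2    # notional computed force, N
print(sprague_geers(test, sim))                   # small M and P -> good match
```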


Author(s): Xin Gao, Weiyong Gu

Abstract Intervertebral disc (IVD) degeneration may cause low back pain, which has a tremendous societal and economic impact in the United States. Quantitative and qualitative evaluation of its pathophysiology is important for diagnosing and treating disc degeneration. Recently, we developed a multiphasic computational model for investigating cell-mediated disc degeneration and for exploring new strategies for disc therapies. The objective of this study was to verify this new computational model according to the guidelines of ASME V&V 40. The model was discretized with the finite element method and implemented in COMSOL Multiphysics. Several benchmark problems and the method of manufactured solutions (MMS) were used to verify the numerical implementation. For all benchmark problems tested, the numerical results were in excellent agreement with the analytical solutions or other numerical solutions. In addition, the observed convergence rates of the primary unknowns obtained with MMS were in excellent agreement with the theoretical rates. This study verified the numerical implementation and found no evidence of coding errors.
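A minimal sketch of the order-verification step follows, assuming refinement by a factor of two and illustrative error norms (not the paper's COMSOL results): the observed rate is recovered from error norms on successive meshes.

```python
# Observed convergence rate from MMS error norms on refined meshes:
# p_obs = log(e_coarse / e_fine) / log(r) for refinement ratio r.
import math

h = [0.1, 0.05, 0.025]            # element sizes (assumed halving)
err = [2.1e-3, 5.3e-4, 1.33e-4]   # illustrative L2 errors vs. manufactured solution

for i in range(len(h) - 1):
    r = h[i] / h[i + 1]
    p_obs = math.log(err[i] / err[i + 1]) / math.log(r)
    print(f"h={h[i+1]:.3f}: observed order = {p_obs:.2f}")
# Values near the theoretical order support the "verified, no evidence of
# coding errors" conclusion.
```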


Author(s): Jun Guo, Daniel Segalman

Abstract In the ordinary process of estimating uncertainty in model predictions, one usually looks to a set of calibration experiments from which the model can be parameterized; the resulting discrete set of model parameters is used to approximate the joint probability distribution of parameter vectors, and that parameter uncertainty is propagated through the model to obtain predictive uncertainty. A key observation is that the modeler will usually attempt to find a unique "best" vector of parameters to match each calibration experiment, and these "best" parameter vectors are used to estimate parameter uncertainty. The work presented here shows that for complex models, those having more than a few parameters, each experiment can be fit equally well by a multitude of parameter vectors. It also shows that when these large numbers of candidate parameter vectors are compiled, the resulting model predictions may exhibit substantially more variance than would be the case without consideration of the non-uniqueness issue. The contribution of non-uniqueness to prediction uncertainty is illustrated on two very different kinds of model. In the first case, Johnson-Cook models for a titanium alloy are parameterized to match calibration experiments on three different alloy samples at different temperatures and strain rates, and the resulting ensemble of parameter vectors is used to predict the peak stress in a different experiment. In the second case, an epidemiological model is calibrated to historical data, and the parameter vectors are used to calculate a quantity of interest and its uncertainty.
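The effect can be reproduced with a toy calibration, sketched below under assumed data and a two-parameter exponential model (not the paper's Johnson-Cook or epidemiological models): many parameter vectors fit sparse, noisy data essentially equally well, and the ensemble prediction spreads far more than the single "best" fit suggests.

```python
# Toy sketch of parameter non-uniqueness and its effect on prediction spread.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, x):
    a, b = theta
    return a * np.exp(b * x)

# Sparse, noisy calibration data generated from "true" parameters (1.0, 0.5).
sigma = 0.02
x_cal = np.array([0.0, 0.2, 0.4])
y_cal = model((1.0, 0.5), x_cal) + sigma * rng.standard_normal(x_cal.size)

# Brute-force scan: accept every parameter vector whose misfit is
# comparable to the measurement noise (an "equally good" fit).
A, B = np.meshgrid(np.linspace(0.8, 1.2, 201), np.linspace(0.2, 0.8, 201))
misfit = sum((model((A, B), x) - y) ** 2 for x, y in zip(x_cal, y_cal))
mask = misfit <= 3 * x_cal.size * sigma**2
ensemble = np.column_stack([A[mask], B[mask]])

# Propagate the whole ensemble to an extrapolated prediction at x = 2.0.
preds = model(ensemble.T, 2.0)
i, j = np.unravel_index(misfit.argmin(), misfit.shape)
best = model((A[i, j], B[i, j]), 2.0)
print(f"{ensemble.shape[0]} near-optimal parameter vectors accepted")
print(f"best-fit prediction: {best:.2f}; "
      f"ensemble spread: {preds.min():.2f} .. {preds.max():.2f}")
```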


Author(s): Luís Eça, Filipe S. Pereira, Guilherme Vaz, Rui Lopes, Serge Toxopeus

Abstract The independence of numerical and parameter uncertainties is investigated for the flow around the KVLCC2 tanker at Re = 4.6 × 10⁶ using the time-averaged RANS equations supplemented by the k–ω two-equation SST model. The uncertain input parameter is the inlet velocity, which is varied by ±0.25% and ±0.50% to determine sensitivity coefficients with finite-difference approximations. The quantities of interest are the friction and pressure coefficients of the ship and the Cartesian velocity components and turbulence kinetic energy at the propeller plane. A grid refinement study is performed at the nominal conditions to estimate the discretization error with power series expansions. However, for grids between 6 × 10⁶ and 47.6 × 10⁶ cells, not all selected quantities of interest exhibit monotonic convergence. Consequently, the sensitivity coefficients estimated with the local sensitivity method and finite differences on refinement levels of 0.764 × 10⁶, 6 × 10⁶, and 47.6 × 10⁶ cells differ significantly between grids. Nonetheless, for a given grid, the sensitivity coefficients obtained with the two different finite-difference intervals show negligible differences. Discrepancies between sensitivity coefficients are compared with the estimated numerical uncertainties. The results suggest that uncertainty quantification performed on coarse grids may be significantly affected by discretization errors.
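The finite-difference sensitivity estimate at the heart of the study can be sketched as follows; solve_flow() is a hypothetical stand-in for the RANS computation, and the response and uncertainty values are assumptions for illustration.

```python
# Central finite-difference sensitivity coefficient of a quantity of
# interest S with respect to the inlet velocity U, followed by a
# first-order propagation of the input uncertainty.
def solve_flow(u_inlet):
    # Hypothetical placeholder for the RANS solve; returns a notional
    # quantity of interest (e.g., a friction coefficient).
    return 3.2e-3 * (u_inlet / 1.0) ** (-0.2)

u0 = 1.0                  # nominal inlet velocity (normalized)
delta = 0.0025 * u0       # +/-0.25% perturbation, as in the study
s = (solve_flow(u0 + delta) - solve_flow(u0 - delta)) / (2 * delta)

u_input = 0.005 * u0      # assumed one-sigma inlet-velocity uncertainty
u_param = abs(s) * u_input
print(f"sensitivity dS/dU = {s:.3e}, parameter uncertainty = {u_param:.3e}")
# On coarse grids the discretization error contaminates s itself, which is
# the independence issue the abstract investigates.
```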


Author(s): John W. Grove, Adam C. Coleman, Carl E. Johnson, Ralph Menikoff

Abstract A computational verification and validation study of the Cyclops I experiment [1–7] was conducted using the Los Alamos Eulerian applications code xRage [8]. The purpose of the study was to validate the Scaled Unified Reactive Front plus (SURFplus) model for insensitive high explosives [9–12]. Diagnostics from the experiment included photon Doppler velocimetry measurements of the device's encasing shell and proton radiography of the explosions. These data were compared with the xRage results, and a convergence study of burn front evolution was conducted. We conclude that the SURFplus high explosive model does an excellent job of predicting the high explosive burn front velocity and shape, with results that converge to the experimental data at rates near or better than first order in most cases. Companion verification metrics for solution convergence are also described. These metrics show that the xRage computed solution for the high explosive burn front converges at first order or better, consistent with the treatment of shock fronts in the higher-order Godunov hydrodynamic solver used in xRage.


Author(s): Jason Thompson, Christopher Boyd

Abstract The US Nuclear Regulatory Commission (NRC) participated in an Organization for Economic Cooperation and Development / Nuclear Energy Agency (OECD/NEA) benchmark activity based on testing in the PANDA facility at the Paul Scherrer Institute in Switzerland. In this test, a stratified helium layer was eroded by a turbulent jet from below. NRC participated in the benchmark to develop expertise and modeling guidelines for computational fluid dynamics (CFD) in anticipation of using these methods for future safety and confirmatory analyses. CFD predictions using ANSYS FLUENT V19.0 are benchmarked against the PANDA test data, and sensitivity studies are used to evaluate the significance of key phenomena, such as boundary conditions and modeling options, that affect the helium erosion rates and jet velocity distribution. The realizable k-epsilon approach with second-order differencing gave the best prediction of the test data. The most significant phenomena are the inlet mass flowrate and the turbulent Schmidt number. CFD uncertainty in helium and velocity due to numerical error and input parameter uncertainty is estimated using a sensitivity coefficient approach. Numerical uncertainty resulting from the mesh design is estimated using a grid convergence index (GCI) approach with meshes of 0.5, 1.5 (base mesh), and 4.5 million cells. Approximately second-order grid convergence was observed, although p (order of convergence) values from 1 to 5 were common. The final helium predictions with a one-sigma uncertainty interval generally bounded the experimental data. The predicted jet centerline velocity was approximately 50% of the measured value at multiple measurement locations; this benchmark is likely affected by the difference in helium content between the experiment and the prediction. The predicted jet centerline velocity with the one-sigma uncertainty interval did not bound the experimental data.
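A minimal sketch of the three-mesh GCI estimate (Roache's formulation with a safety factor of 1.25) is shown below, using the abstract's mesh sizes but assumed, illustrative solution values rather than PANDA results.

```python
# Three-mesh grid convergence index (GCI): observed order p from three
# solutions on systematically refined meshes, then the fine-mesh GCI.
import math

n = [4.5e6, 1.5e6, 0.5e6]        # fine -> coarse cell counts (from abstract)
f = [0.372, 0.360, 0.329]        # illustrative QoI values (e.g., He fraction)

r = (n[0] / n[1]) ** (1.0 / 3.0)  # effective 3D refinement ratio (~1.44)
p = math.log((f[2] - f[1]) / (f[1] - f[0])) / math.log(r)
gci_fine = 1.25 * abs((f[1] - f[0]) / f[0]) / (r**p - 1.0)
print(f"observed order p = {p:.2f}, GCI (fine mesh) = {100 * gci_fine:.1f}%")
# As in the abstract, p can land well away from the formal order of the
# scheme; the GCI is then used as a one-sided numerical uncertainty band.
```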


Author(s): Prasad Vegendla, Rui Hu, Ling Zou

Abstract In high temperature gas-cooled reactors (HTGRs), gas flow patterns are very complex, and reduced-order (1D or 2D) models may be too simplified to predict reactor performance accurately. Three-dimensional computational fluid dynamics (CFD) models can provide the detailed information needed to optimize reactor thermal performance. The main objective of this work is to validate CFD models against data measured at Texas A&M University on a 1/16th-scale Very High Temperature Reactor (VHTR) upper plenum. In this paper, the flow characteristics of a single isothermal jet discharging into the upper plenum were investigated using the Nek5000 large-eddy simulation (LES) CFD tool. Numerical simulations were performed for jet Reynolds numbers ranging from 3,413 to 12,819, and a grid-independence study was performed. The numerical results for mean velocity, root-mean-square fluctuating velocity, and Reynolds stress were validated against the benchmark data. Good agreement was obtained between simulated and measured axial mean velocities, except near the upper plenum hemisphere. The maximum predicted errors for axial mean velocities at normalized heights (in coolant channel diameters) of 1, 5, and 10 are 1.56%, 1.88%, and 3.82%, respectively. The predicted root-mean-square fluctuating velocity and Reynolds stress also agree qualitatively with the experimental data.

