ASME 2019 Verification and Validation Symposium
Latest Publications


TOTAL DOCUMENTS: 17 (five years: 0)

H-INDEX: 1 (five years: 0)

Published by: American Society of Mechanical Engineers
ISBN: 9780791841174

Author(s): David Cheng

Abstract Data from the distributed control system (DCS) provide important information about the performance and transportation efficiency of a gas pipeline with compressor stations. The pipeline performance data provide correction factors for compressors as part of the operational optimization of natural gas transmission pipelines. This paper presents methods, a procedure, and a real-life example of model-validation-based performance analysis of a gas pipeline. Statistical methods are demonstrated with real gas pipeline measurement data. These methods offer practical ways to validate the pipeline hydraulics model using DCS data. The validated models are then used as performance analysis tools to evaluate the fundamental physical parameters and to assess the pipeline hydraulic conditions for potential issues influencing pressure drops in the pipeline, such as corrosion (ID change), roughness changes, or BSW deposition.
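
For illustration, a minimal sketch of the kind of DCS-based model validation the abstract describes, assuming a one-parameter quadratic pressure-drop model and synthetic measurement data (all names and values here are hypothetical, not taken from the paper):

```python
# Minimal sketch: fit and check a one-parameter pressure-drop model against
# (synthetic) DCS flow/pressure-drop data. This is a hypothetical stand-in
# for the paper's hydraulics model; dp = k * Q^2 is a deliberate simplification.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
dcs_flow = rng.uniform(20.0, 60.0, 200)              # flow rate, m^3/s
true_k = 3.2e-2
dcs_dp = true_k * dcs_flow**2 * (1 + 0.02 * rng.standard_normal(200))  # kPa

def residuals(params):
    """Model-minus-measurement residuals for the candidate coefficient k."""
    return params[0] * dcs_flow**2 - dcs_dp

fit = least_squares(residuals, x0=[1e-2])
resid = residuals(fit.x)
# Statistical checks: residual mean near zero and bounded scatter; a drift in
# the fitted k over time would hint at ID change, roughness growth, or deposition.
print(f"fitted k = {fit.x[0]:.4e}")
print(f"residual mean = {resid.mean():.3f} kPa, RMS = {np.sqrt((resid**2).mean()):.3f} kPa")
```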


Author(s): Stephen A. Andrews, Andrew M. Fraser, Scott I. Jackson, Eric K. Anderson

Abstract The extreme pressures and temperatures of the gas produced by detonating a high explosive (HE) make it difficult to use experimental measurements to estimate the equation of state (EOS), the physics model that relates the pressure, temperature, and density of the gas. Instead of measuring pressure directly, one measures effects such as the acceleration of metals driven by the HE. Typically, one fits a few free parameters in a fixed functional form to measurements from a single experiment. The present work uses the optimization tool F_UNCLE to incorporate data from multiple experiments into a single EOS model for the gas produced by detonating the explosive PBX 9501. The model is verified by comparison to an experiment outside the set of calibration data. The uncertainty in the EOS is also examined to determine how each calibration experiment constrains the model and how the uncertainty arising from all calibration experiments affects predictions. This work identifies an EOS for HE detonation products and the uncertainty about that EOS.
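
As a rough illustration of multi-experiment calibration with a held-out verification experiment (F_UNCLE's actual interface and the PBX 9501 data are not reproduced here; the gamma-law form and all values below are assumptions):

```python
# Sketch: fit a simple gamma-law products EOS p(rho) = p0 * (rho/rho0)**gamma
# to pooled pseudo-data from two "experiments", then check against a held-out
# third. Illustrative only; not F_UNCLE's API or real PBX 9501 measurements.
import numpy as np
from scipy.optimize import least_squares

rho0 = 1.84                                          # reference density (assumed)
p0_true, gamma_true = 35.0, 3.0                      # synthetic "truth"
rng = np.random.default_rng(1)

def eos(rho, p0, gamma):
    return p0 * (rho / rho0) ** gamma

# Two calibration experiments and one verification experiment (synthetic)
experiments = []
for n in (15, 20, 12):
    rho = rng.uniform(0.5, 1.8, n)
    p = eos(rho, p0_true, gamma_true) * (1 + 0.03 * rng.standard_normal(n))
    experiments.append((rho, p))
calib, holdout = experiments[:2], experiments[2]

def residuals(theta):
    """Pooled residuals across all calibration experiments."""
    p0, gamma = theta
    return np.concatenate([eos(r, p0, gamma) - p for r, p in calib])

fit = least_squares(residuals, x0=[30.0, 2.5])
rho_v, p_v = holdout
print(f"p0 = {fit.x[0]:.2f}, gamma = {fit.x[1]:.2f}")
print("holdout RMS error:", np.sqrt(np.mean((eos(rho_v, *fit.x) - p_v) ** 2)))
```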


Author(s): Kyle Haas

Abstract Astonishing increases in computational power have fueled the engineering community's drive to seek increasingly optimized solutions to structural design problems. Although structural optimization can be critical to achieving a practical and cost-effective design, optimization often comes at a cost to reliability. The competing goals of optimization and reliability amplify the importance of verification, validation, and uncertainty quantification efforts to achieve sufficiently reliable performance. Evaluating a structural system's reliability presents a practical challenge to designers, given the potentially large number of permutations of conditions that may exist over the full operational lifecycle. A direct prediction of performance and of the prediction's corresponding likelihood is often achieved via deterministic analysis techniques in conjunction with Monte Carlo analysis. Such methods can be overly cumbersome and often do not provide a complete picture of the system's global reliability, owing to the practical limits on the number of analyses that can be performed. At the point of incipient structural failure, the structural response becomes highly variable and sensitive to minor perturbations in conditions. This characteristic provides a powerful opportunity to determine the critical failure conditions and to assess the resulting structural reliability through an alternative but more expedient, variability-based method. Non-hierarchical clustering, proximity analysis, and stability indicators are combined to identify the loci of conditions that lead to a rapid evolution of the structural response toward a failure condition. The utility of the proposed method is demonstrated through its application to a simple nonlinear dynamic single-degree-of-freedom structural model. A feedforward artificial neural network is trained on numerically generated data to provide an expedient means of assessing the system's behavior under perturbed conditions. In addition to the L2-norm, a new stability indicator called the "Instability Index" is proposed, which is a function of both the L2-norm and the calculated proximity to adjacent loci of conditions with differing structural response. The Instability Index provides a rapidly computed quantitative measure of the relative stability of the system for all possible loci of conditions.
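
A toy sketch of the proximity component of such an indicator follows. The paper's actual Instability Index definition is not given in the abstract; the combination below, an L2-norm scaled by the distance to the nearest locus with a differing outcome, is only an assumed illustration:

```python
# Hypothetical instability-style indicator: combine an L2-norm of the response
# with proximity to conditions that produce a differing structural outcome.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
conditions = rng.uniform(-1, 1, (500, 2))            # loci of (load, damping) conditions
# Surrogate outcome label: 1 = failure, 0 = survival (synthetic boundary)
failed = (conditions[:, 0] + 0.5 * conditions[:, 1] ** 2 > 0.6).astype(int)
response = rng.standard_normal((500, 50))            # response histories (placeholder)

# L2-norm of the response at each locus
l2 = np.linalg.norm(response, axis=1)

# Proximity: distance from each locus to the nearest locus with the other outcome
tree_fail = cKDTree(conditions[failed == 1])
tree_ok = cKDTree(conditions[failed == 0])
d_other = np.where(failed == 1,
                   tree_ok.query(conditions)[0],
                   tree_fail.query(conditions)[0])

instability_index = l2 / (d_other + 1e-9)            # large near the failure boundary
print("most unstable locus:", conditions[np.argmax(instability_index)])
```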


Author(s): Kevin Irick, Nima Fathi

Abstract In the power plant industry, the turbine inlet temperature (TIT) plays a key role in the efficiency of the gas turbine and, therefore, in the efficiency of the overall (in most cases combined) thermal power cycle. Gas turbine efficiency increases with increasing TIT. However, an increase in TIT also raises turbine component temperatures, which can be critical (e.g., hot gas attack). Thermal barrier coatings (TBCs), which are porous media coatings, can prevent this and protect the surface of the turbine blade. The combination of TBC and film cooling produces better cooling performance than conventional cooling processes. The effective thermal conductivity of this composite is highly important in design and in other thermal/structural assessments. In this article, the effective thermal conductivity of a simplified model of a TBC is evaluated. This work details a numerical study on the steady-state thermal response of two-phase porous media in two dimensions using an in-house finite element analysis (FEA) code. Specifically, the system response quantity (SRQ) under investigation is the dimensionless effective thermal conductivity of the domain. A thermally conductive matrix domain is modeled with a thermally conductive circular pore arranged in a uniform packing configuration. Both the pore size and the pore thermal conductivity are varied over a range of values to investigate their relative effects on the SRQ. In this investigation, an emphasis is placed on using code and solution verification techniques to evaluate the obtained results. The method of manufactured solutions (MMS) was used to perform code verification for the study, showing the FEA code to be second-order accurate. Solution verification was performed using the grid convergence index (GCI) approach with the global deviation uncertainty estimator on a series of five systematically refined meshes for each porosity and thermal conductivity model configuration. A comparison of the SRQs across all domain configurations is made, including the uncertainty derived through the GCI analysis.
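
For readers unfamiliar with the GCI procedure mentioned here, a compact sketch of the standard three-grid form (the factor of safety, refinement ratio, and sample values below are illustrative assumptions; the paper itself uses five meshes with a global deviation uncertainty estimator):

```python
# Sketch of the grid convergence index (GCI) procedure, assuming a constant
# refinement ratio r between three systematically refined meshes.
import math

def gci_three_grids(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Observed order of accuracy and fine-grid GCI from three mesh solutions."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    e21 = abs((f_med - f_fine) / f_fine)             # relative error, fine vs. medium
    gci = fs * e21 / (r**p - 1.0)                    # uncertainty estimate, fine grid
    return p, gci

# Example: dimensionless effective conductivity from three meshes (made-up values)
p_obs, gci_fine = gci_three_grids(1.0420, 1.0435, 1.0495)
print(f"observed order = {p_obs:.2f}, GCI (fine grid) = {100 * gci_fine:.3f}%")
```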


Author(s): Zachary Hargett, Manuel Gutierrez, Melinda Harman

Abstract Cadaveric testing is a common approach for verifying mathematical models used in computational modeling work. In the case of a knee joint model for calculating ligament tension during total knee replacement (TKR) motion, model inputs include rigid body motions defined using the Grood-Suntay coordinate system as a spatial linkage between the tibial component orientation and the femoral component. Using this approach requires defining coordinate systems for each rigid TKR component (i.e., tibial and femoral) based on fiducial points, manually digitizing a point cloud within the experimental setup, and registering the orientation relative to the relevant bone marker array. The purpose of this study was to compare the variability of two manual point digitization methods (a hand-held stylus and a pivot tool, each calibrated in the optical tracking system), using a TKR femoral component in a simulated cadaver limb experimental setup as an example. This was accomplished by verifying the mathematical algorithm used to calculate the coordinate system from the digitized points, quantifying the variability of the manual digitization methods, and discussing how any error could affect the computational model. For the hand-held stylus method, the standard deviations of the origin and the x-, y-, and z-axis calculations were 0.50 mm, 1.31 degrees, 0.51 degrees, and 0.62 degrees, respectively. It is important to note that the hand-held stylus introduces an additional error from the required manual digitization of each rigid marker array. This average additional error was 0.54 mm for the origin and 1.70, 1.66, and 0.98 degrees for the x-, y-, and z-axes, respectively. For the pivot tool method, the standard deviations were 0.35 mm, 0.37 degrees, 1.27 degrees, and 1.24 degrees for the origin and the x-, y-, and z-axes, respectively. It is essential to minimize experimental error, as small errors in alignment can substantially alter model outputs. In this study of cadaver simulation of limb motion, the pivot tool is the better option for minimizing error. Careful definition of fiducial points and repeatable manual digitization of the point cloud are critical for meaningful computational models of TKR motion based on cadaver experimental work.
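
A schematic sketch of the underlying computation: constructing an orthonormal component frame from three digitized fiducial points and estimating angular repeatability across repeated digitizations. The axis conventions, noise level, and point locations below are assumptions, not the study's protocol:

```python
# Hypothetical sketch: build a right-handed frame from three digitized fiducial
# points, then quantify repeatability of the frame across noisy digitizations.
import numpy as np

def frame_from_points(origin_pt, x_pt, plane_pt):
    """Frame with x toward x_pt and z normal to the fiducial plane."""
    x = x_pt - origin_pt
    x /= np.linalg.norm(x)
    z = np.cross(x, plane_pt - origin_pt)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return origin_pt, np.column_stack([x, y, z])     # origin and rotation matrix

rng = np.random.default_rng(3)
angles = []
for _ in range(20):                                  # 20 repeated digitizations
    noise = lambda: 0.5 * rng.standard_normal(3)     # ~0.5 mm stylus noise (assumed)
    _, R = frame_from_points(np.array([0.0, 0, 0]) + noise(),
                             np.array([50.0, 0, 0]) + noise(),
                             np.array([0.0, 30, 0]) + noise())
    # Angle between the digitized x-axis and the nominal x-direction
    angles.append(np.degrees(np.arccos(np.clip(R[0, 0], -1, 1))))
print(f"x-axis repeatability SD: {np.std(angles):.2f} deg")
```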


Author(s): Aaron M. Krueger, Vincent A. Mousseau, Yassin A. Hassan

Abstract The method of manufactured solutions (MMS) has proven useful for completing code verification studies. MMS allows the code developer to verify that the observed order-of-accuracy matches the theoretical order-of-accuracy. Even though the manufactured solution to the partial differential equation is not intuitive, it provides an exact solution to a problem that most likely could not be solved analytically. The code developer can then use the exact solution as a debugging tool. While the order-of-accuracy test has historically been treated as the most rigorous of all code verification methods, it fails to indicate code "bugs" that are of the same order as the theoretical order-of-accuracy. The only way to test for these types of code bugs is to verify that the theoretical local truncation error for a particular grid matches the difference between the manufactured solution (MS) and the solution on that grid. The theoretical local truncation error can be computed by using modified equation analysis (MEA) with the MS and its analytic derivatives, an approach we call the modified equation analysis method of manufactured solutions (MEAMMS). In addition to describing the MEAMMS process, this study shows the results of a code verification study on a conservation-of-mass code. The code was able to compute the leading truncation error term as well as additional higher-order terms. When the code verification process was complete, not only did the observed order-of-accuracy match the theoretical order-of-accuracy for all numerical schemes implemented in the code, but the code was also able to cancel the discretization error to within roundoff error on a 64-bit system.
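
A minimal sketch of the baseline order-of-accuracy test the abstract builds on, applied to a one-dimensional advection stand-in for a conservation-of-mass code (the scheme, manufactured solution, and source term below are assumptions for illustration; MEAMMS itself goes further by comparing against the MEA truncation error):

```python
# MMS order-of-accuracy sketch for u_t + a*u_x = S on a periodic domain.
# Manufactured solution u = sin(2*pi*(x - 0.3*t)) gives S = (a - 0.3)*u_x-like term.
import numpy as np

a = 1.0                                              # advection speed
ms = lambda x, t: np.sin(2 * np.pi * (x - 0.3 * t))  # manufactured solution
src = lambda x, t: (a - 0.3) * 2 * np.pi * np.cos(2 * np.pi * (x - 0.3 * t))

def solve(nx, t_end=0.1):
    """First-order upwind + forward Euler with the MMS source; returns L2 error."""
    x = np.linspace(0, 1, nx, endpoint=False)
    dx = 1.0 / nx
    dt = 0.4 * dx / a                                # fixed CFL so dt scales with dx
    u, t = ms(x, 0.0), 0.0
    while t < t_end - 1e-12:
        dt_step = min(dt, t_end - t)
        u = u - a * dt_step / dx * (u - np.roll(u, 1)) + dt_step * src(x, t)
        t += dt_step
    return np.sqrt(np.mean((u - ms(x, t_end)) ** 2))

e1, e2 = solve(100), solve(200)
print(f"observed order = {np.log2(e1 / e2):.2f}")    # expect ~1 for upwind scheme
```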


Author(s): Emily L. Guzas, Stephen E. Turner, Matthew Babina, Brandon Casper, Thomas N. Fetherston, ...

Abstract Primary blast injury (PBI), which refers to gross blast-related trauma or traces of injury in air-filled tissues or tissues adjacent to air-filled regions (rupture/lesions, contusions, hemorrhaging), has been documented in a number of marine mammal species after blast exposure [1, 2, 3]. However, very little is known about marine mammal susceptibility to PBI except in rare cases of opportunistic studies. As a result, traditional techniques rely on analyses using small-scale terrestrial mammals as surrogates for large-scale marine mammals. For an In-house Laboratory Independent Research (ILIR) project sponsored by the Office of Naval Research (ONR), researchers at the Naval Undersea Warfare Center, Division Newport (NUWCDIVNPT) have undertaken a broad three-year effort to integrate computational fluid-structure interaction techniques with marine mammal anatomical structure. The intent is to numerically simulate the dynamic response of a marine mammal thoracic cavity and air-filled lungs to shock loading, to enhance understanding of marine mammal lung response to shock loading in the underwater environment. In the absence of appropriate test data from live marine mammals, a crucial part of this work involves code validation against test data for a suitable surrogate test problem. This research employs an air-filled spherical membrane structure subjected to shock loading as a first-order approximation to marine mammal lung response to underwater explosions (UNDEX). This approach incrementally improves upon the currently used one-dimensional spherical air bubble approximation of marine mammal lung response by providing an encapsulating boundary for the air. The encapsulating structure is membranous, a minimal, simplified representation that does not account for species-specific and individual differences in tissue composition, rib mechanics, and the mechanical properties of interior lung tissue. NUWCDIVNPT partnered with the Naval Submarine Medical Research Laboratory (NSMRL) to design and execute a set of experiments investigating the shock response of an air-filled rubber dodgeball in a shallow underwater environment. These tests took place in the 2.13 m (7 ft) diameter pressure tank at the University of Rhode Island, with test measurements including pressure data and digital image correlation (DIC) data captured with high-speed cameras in a stereo setup. The authors developed three-dimensional computational models of the dodgeball experiments using Dynamic System Mechanics Advanced Simulation (DYSMAS), a Navy fluid-structure interaction code. DYSMAS models of a variety of problems involving submerged pressure vessel structures responding to hydrostatic and/or UNDEX loading have been validated against test data [4]. Proper validation of fluid-structure interaction simulations is quite challenging, requiring measurements in both the fluid and structure domains. This paper details the development of metrics for comparison between test measurements and simulation results, with a discussion of potential sources of uncertainty.
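
One common family of comparison metrics for transient shock data is the Sprague-Geers magnitude/phase decomposition; a sketch follows as an example of the kind of metric development the paper describes (the paper's actual metrics are not specified in the abstract, and the signals below are synthetic):

```python
# Sketch: Sprague-Geers magnitude (M), phase (P), and combined (C) errors
# between a measured and a computed transient, as one example comparison metric.
import numpy as np

def sprague_geers(t, measured, computed):
    i_mm = np.trapz(measured * measured, t)
    i_cc = np.trapz(computed * computed, t)
    i_mc = np.trapz(measured * computed, t)
    mag = np.sqrt(i_cc / i_mm) - 1.0                 # magnitude error
    phase = np.arccos(np.clip(i_mc / np.sqrt(i_mm * i_cc), -1, 1)) / np.pi
    return mag, phase, np.hypot(mag, phase)          # combined error

t = np.linspace(0.0, 5e-3, 2000)                     # 5 ms pressure records (synthetic)
measured = np.exp(-t / 1e-3) * np.sin(2 * np.pi * 2000 * t)
computed = 0.9 * np.exp(-t / 1.1e-3) * np.sin(2 * np.pi * 2000 * (t - 2e-5))
print("M, P, C = %.3f, %.3f, %.3f" % sprague_geers(t, measured, computed))
```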


Author(s): Prasad Vegendla, Rui Hu

Abstract This paper discusses the modeling and simulation of deteriorated turbulent heat transfer (DTHT) for wall-heated fluid flows, which can be observed in gas-cooled nuclear power reactors during a pressurized conduction cooldown (PCC) event caused by the loss of forced-circulation flow. The DTHT regime is defined as the deterioration of normal turbulent heat transport due to increased acceleration and buoyancy forces. Computational fluid dynamics (CFD) tools such as Nek5000 and STAR-CCM+ can help to analyze the DTHT phenomena in reactors for efficient thermal-fluid designs. Three-dimensional non-isothermal CFD modeling and simulations were performed for a wall-heated circular tube. The simulation results were cross-verified between the two CFD tools, Nek5000 and STAR-CCM+, and validated against experimental data. The predicted bulk temperatures were identical in both CFD tools, as expected. Good agreement between simulated results and measured data was obtained for wall temperatures along the tube axis using Nek5000. In STAR-CCM+, the under-predicted wall temperatures were mainly due to higher turbulence in the wall region. In STAR-CCM+, the predicted heat transfer deterioration at the outlet was over 48% relative to the inlet heat transfer values.
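
As a rough illustration of how deterioration can be quantified, a sketch comparing a computed Nusselt number with the Dittus-Boelter forced-convection value (an assumed criterion for this example; the paper's exact deterioration measure is not stated in the abstract):

```python
# Sketch: flag deteriorated turbulent heat transfer (DTHT) by comparing a
# CFD-derived Nusselt number against the Dittus-Boelter correlation.
def dittus_boelter(re, pr):
    return 0.023 * re**0.8 * pr**0.4                 # heating form of the correlation

def dtht_ratio(h, d, k_fluid, re, pr):
    """Ratio < 1 indicates heat transfer below normal turbulent levels."""
    nu_computed = h * d / k_fluid
    return nu_computed / dittus_boelter(re, pr)

# Example: wall heat transfer coefficient at one axial station (made-up values)
ratio = dtht_ratio(h=250.0, d=0.016, k_fluid=0.30, re=6000.0, pr=0.66)
print(f"Nu/Nu_DB = {ratio:.2f}" + ("  -> deteriorated" if ratio < 0.7 else ""))
```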


Author(s): Kimbal Hall, Abdelghani Zigh, Jorge Solis

Abstract CFD is a valuable tool for demonstrating compliance with peak temperature limits in dry cask storage systems (DCSS). When demonstrating compliance, it is valuable to quantify the uncertainty in the simulation result as a function of the computational mesh and the simulation inputs. The USNRC was a participant in a CFD validation test using the TN-32B cask, with extensive temperature measurements throughout the DCSS, including measurements on the cask surface and in fuel bundles. This paper discusses the validation and uncertainty quantification of a CFD model using these experimental data. The uncertainty quantification follows the procedures outlined in ASME V&V 20-2009 [1]. Sources of uncertainty examined in the analysis include iterative uncertainty, spatial discretization, and uncertainty due to approximately twenty input parameters. The input parameters investigated include environmental conditions, material properties, decay heat, and the spacing of the many small gaps in the installation. The uncertainty in gap size was found to be a particularly large source of uncertainty in this installation. Results of a "base case" using the conservative estimates outlined in the updated final safety analysis report (UFSAR) [2] are presented, as well as a "best estimate case" that uses more realistic values. These results are compared to the experimentally measured values, which fall within the uncertainty band of the analysis. This work is also the subject of an upcoming NUREG/CR.
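
For context, the core comparison in ASME V&V 20-2009 can be sketched in a few lines: the comparison error E = S - D is judged against a validation uncertainty that combines numerical, input, and experimental contributions (the temperatures and uncertainty values below are hypothetical, not from the TN-32B study):

```python
# Sketch of the ASME V&V 20-2009 comparison: E = S - D and
# u_val = sqrt(u_num^2 + u_input^2 + u_D^2).
import math

def vv20_validation(S, D, u_num, u_input, u_D):
    E = S - D                                        # simulation minus data
    u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)
    return E, u_val

# Peak temperature example (hypothetical numbers, in kelvin)
E, u_val = vv20_validation(S=512.0, D=505.0, u_num=4.0, u_input=9.0, u_D=3.0)
print(f"E = {E:.1f} K, u_val = {u_val:.1f} K, |E| <= u_val: {abs(E) <= u_val}")
```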


Author(s): Charles F. Jekel, Vicente Romero

Abstract Tolerance Interval Equivalent Normal (TI-EN) and Superdistribution (SD) sparse-sample uncertainty quantification (UQ) methods are used for conservative estimation of small tail probabilities. These methods estimate the probability of a response lying beyond a specified threshold when only limited data are available. The study focused on sparse-sample regimes ranging from N = 2 to 20 samples, because this range is representative of most experimental and some expensive computational situations. A tail probability magnitude of 10^-4 was examined on four different distribution shapes, in order to be relevant for the quantification of margins and uncertainty (QMU) problems that arise in risk and reliability analyses. In most cases the UQ methods were found to have optimal performance with a small number of samples, beyond which the performance deteriorated as samples were added. Using this observation, a generalized jackknife resampling technique was developed to average over many smaller subsamples. This improved the performance of the SD and TI-EN methods, specifically when a larger-than-optimal number of samples was available. A complete jackknifing technique, which considers all possible sub-sample combinations, was shown to perform better in most cases than an alternative bootstrap resampling technique.
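
A toy sketch of the complete-jackknife averaging idea follows. The tail estimator below is a plain normal fit used only as a stand-in; the TI-EN and superdistribution estimators studied in the paper are deliberately more conservative:

```python
# Sketch: average a sparse-sample tail-probability estimator over all size-k
# subsamples (complete jackknife). Estimator here is a simple normal fit.
import numpy as np
from itertools import combinations
from scipy.stats import norm

def tail_prob_normal(x, threshold):
    """Estimate P(X > threshold) from a normal fit to the sample."""
    return norm.sf(threshold, loc=np.mean(x), scale=np.std(x, ddof=1))

def complete_jackknife(x, threshold, k):
    probs = [tail_prob_normal(np.asarray(sub), threshold)
             for sub in combinations(x, k)]
    return np.mean(probs)

rng = np.random.default_rng(4)
sample = rng.normal(0.0, 1.0, 10)                    # N = 10 sparse sample
threshold = 3.719                                    # true tail probability ~1e-4
print("direct fit:        %.2e" % tail_prob_normal(sample, threshold))
print("jackknife average: %.2e" % complete_jackknife(sample, threshold, k=6))
```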

