ASME 2018 Verification and Validation Symposium
Latest Publications


TOTAL DOCUMENTS: 13 (five years: 0)

H-INDEX: 2 (five years: 0)

Published by: American Society of Mechanical Engineers

ISBN: 9780791840795

Author(s): Francisco Gonzalez, Anand Prabhakaran, Graydon F. Booth, Florentina M. Gantoi, Arkaprabha Ghosh

Critical derailment incidents associated with crude oil and ethanol transport have led to a renewed focus on improving the performance of tank cars against the potential for puncture under derailment conditions. Proposed strategies for improving accident performance have included design changes to tank cars as well as operational considerations such as reduced speeds. In prior publications, the authors have described the development of a novel methodology for quantifying and characterizing the reductions in risk that result from changes to tank car designs or the tank car operating environment. The methodology considers key elements that are relevant to tank car derailment performance, including variations in derailment scenarios, chaotic derailment dynamics, nominal distributions of impact loads and impactor sizes, operating speed differences, and variations in tank car designs, and combines these elements into a consistent framework to estimate the relative merit of proposed mitigation strategies. The modeling approach involves detailed computer simulations of derailment events, for which typical validation techniques are difficult to apply. Freight train derailments are uncontrolled chains of events, which are prohibitively expensive to stage and instrument, and their chaotic nature makes the unique outcome of each event extremely sensitive to its particular set of initial and boundary conditions. Furthermore, the purpose of the modeling was to estimate the global risk reduction expected in the U.S. from tank car derailments, not to predict the outcome of a specific derailment event. These challenges call into question which validation techniques are most appropriate, considering both the modeling intent and the availability and fidelity of the data sets available for validation. This paper provides an overview of the verification and validation efforts that have been used to enhance confidence in this methodology.
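
As a rough illustration of the kind of framework described, the following minimal sketch compares the relative puncture risk of two tank car designs by Monte Carlo sampling over distributions of impact loads and puncture resistance. All distribution families, parameter values, and variable names are hypothetical illustrations, not the authors' calibrated inputs.

```python
# Hypothetical sketch: relative puncture risk of a baseline vs. a modified
# tank car design under sampled derailment impact loads.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# sampled impact energy per derailment impact, MJ (hypothetical lognormal)
impact_energy = rng.lognormal(mean=0.5, sigma=0.8, size=n)

# puncture resistance of each design, MJ (hypothetical normal distributions)
baseline_resistance = rng.normal(3.0, 0.5, size=n)
modified_resistance = rng.normal(4.0, 0.5, size=n)

# puncture occurs when the sampled impact energy exceeds the resistance
p_baseline = np.mean(impact_energy > baseline_resistance)
p_modified = np.mean(impact_energy > modified_resistance)
print(f"relative risk reduction ~ {1 - p_modified / p_baseline:.1%}")
```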


Author(s): Shantanu Shahane, Soham Mujumdar, Namjung Kim, Pikee Priya, Narayana Aluru, ...

Die casting is a type of metal casting in which liquid metal is solidified in a reusable die. In such a complex process, measuring and controlling the process parameters is difficult. Conventional deterministic simulations are insufficient to fully estimate the effect of stochastic variation in the process parameters on product quality. In this research, a framework is proposed to simulate the effect of stochastic variation together with verification, validation, and uncertainty quantification. The framework combines fast numerical simulations of solidification with microstructure and mechanical property prediction models, along with experimental inputs for calibration and validation. By coupling experimental data and stochastic variation in the process parameters with the numerical modeling, the framework enhances the utility of the traditional numerical simulations used in die casting and yields a better prediction of product quality. Although the framework is being developed and applied to die casting, it can be generalized to any manufacturing process or other engineering problems.
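
A minimal sketch of the propagation step in such a framework is shown below: stochastic process parameters are sampled and pushed through a fast model to obtain a distribution of a quality metric. The stand-in grain-size model, parameter names, and numerical values are all hypothetical, not the paper's models.

```python
# Hypothetical sketch: propagating stochastic die-casting process parameters
# through a fast surrogate of the solidification/microstructure chain.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

melt_temp = rng.normal(680.0, 5.0, n)    # melt temperature, deg C (hypothetical)
die_temp  = rng.normal(200.0, 10.0, n)   # die temperature, deg C
fill_time = rng.normal(0.05, 0.005, n)   # cavity fill time, s

def grain_size_model(Tm, Td, tf):
    # stand-in for the high-speed solidification + microstructure simulations
    cooling_rate = (Tm - Td) / (100.0 * tf)
    return 200.0 * cooling_rate ** -0.4  # grain size, micrometres

grain = grain_size_model(melt_temp, die_temp, fill_time)
print(f"grain size: mean {grain.mean():.1f} um, "
      f"95th pct {np.percentile(grain, 95):.1f} um")
```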


Author(s): George A. Hazelrigg, Georgia-Ann Klutke

In this paper, we argue that the Sandia V&V Challenge Problem is ill-posed in that the answers sought do not, mathematically, exist. This effectively discredits both the methodologies applied to the problem and the results, regardless of the approach taken. We apply our arguments to illustrate the types of mistakes present in the papers published alongside the Challenge Problem in the Journal of Verification, Validation and Uncertainty Quantification. Further, we show that, when the problem is properly posed, both the applicable methodology and the solution techniques are easily drawn from the well-developed mathematics of probability and decision theory. The unfortunate aspect of the Challenge Problem as currently stated is that it leads to incorrect and inappropriate mathematical approaches that should be avoided and corrected in the current literature.


Author(s): Francesco D’Auria, Marco Lanfredini

V&V constitutes a powerful framework for demonstrating the capability of computational tools in several technological areas, and passing V&V requirements is a needed step before applications. Let us focus hereafter on the area of (transient) nuclear thermal-hydraulics (NTH) and identify V1 and V2 as acronyms for Verification and Validation, respectively. V1 is performed within NTH according to the best available techniques and may not suffer from important deficiencies compared with other technological areas. This is not the case for V2. Three inherent limitations shall be mentioned in the case of validation in NTH:

1. Validation implies comparison with experimental data, and the available experimental data cover a (very) small fraction of the parameter-range space expected in applications of the codes; this can be easily seen if one considers data in large-diameter pipes at high velocity and high pressure, or at high power and power density. Notably, the scaling issue must be addressed in the framework of V2, which may result in controversial findings.

2. Water is at the center of attention: the physical properties of water are known to a reasonable extent, but large variations in quantities such as density and its derivatives are expected within the range of pressures inside the application fields. Although not needed for current validation purposes (e.g., validation ranges may not include a situation of critical pressure and large heat flux), physically inconsistent values predicted by empirical correlations outside validation ranges shall not be tolerated.

3. Complex situations occur, such as the transition from two-phase critical flow to ‘Bernoulli flow’ (e.g., towards the end of blow-down) and from film boiling to nucleate boiling, possibly crossing the minimum film boiling temperature (e.g., during reflood).

Therefore, what can be described as classical V2 is not, or cannot be, performed in NTH. The idea of the present paper is thus to add a component to the V&V. This component, or step in the process, is called ‘Consistency with Reality’, i.e., consistency with the expected phenomenological evidence. The new component may need to be characterized in some cases and is indicated by the letter ‘C’. The V&V then becomes V&V&C. The purpose of the paper is to clarify the motivations at the basis of the V&V&C.


Author(s): Jeffrey T. Fong, Pedro V. Marcal, Robert Rainsberger, Li Ma, N. Alan Heckert, ...

Errors and uncertainties in finite element method (FEM) computing can come from the following eight sources, the first four being FEM-method-specific, and the second four, model-specific: (1) computing platform such as ABAQUS, ANSYS, COMSOL, LS-DYNA, etc.; (2) choice of element types in designing a mesh; (3) choice of mean element density or degrees of freedom (d.o.f.) in the same mesh design; (4) choice of a percent relative error (PRE) or the rate of PRE per d.o.f. on a log-log plot to assure solution convergence; (5) uncertainty in geometric parameters of the model; (6) uncertainty in physical and material property parameters of the model; (7) uncertainty in loading parameters of the model; and (8) uncertainty in the choice of the model. By considering every FEM solution as the result of a numerical experiment for a fixed model, a purely mathematical problem, i.e., solution verification, can be addressed by first quantifying the errors and uncertainties due to the first four of the eight sources listed above, and then developing numerical algorithms and easy-to-use metrics to assess the solution accuracy of all candidate solutions. In this paper, we present a new approach to FEM verification by applying three mathematical methods and formulating three metrics for solution accuracy assessment. The three methods are: (1) a 4-parameter logistic function to find an asymptotic solution of FEM simulations; (2) the nonlinear least squares method in combination with the logistic function to find an estimate of the 95% confidence bounds of the asymptotic solution; and (3) the definition of the Jacobian of a single finite element in order to compute the Jacobians of all elements in a FEM mesh. Using those three methods, we develop numerical tools to estimate (a) the uncertainty of a FEM solution at one billion d.o.f., (b) the gain in the rate of PRE per d.o.f. as the asymptotic solution approaches very large d.o.f.’s, and (c) the estimated mean of the Jacobian distribution (mJ) of a given mesh design. Those three quantities are shown to be useful metrics to assess the accuracy of candidate solutions in order to arrive at a so-called “best” estimate with uncertainty quantification. Our results include calibration of those three metrics using problems of known analytical solutions and the application of the metrics to sample problems for which no theoretical solution is known to exist.
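
As a rough illustration of methods (1) and (2), the sketch below fits a 4-parameter logistic function to a mesh-convergence sequence and reads off 95% confidence bounds on the asymptote. The convergence data are invented for the example, and scipy's curve_fit stands in as a generic nonlinear least squares solver; neither is from the paper.

```python
# Hypothetical sketch: 4-parameter logistic fit to an FEM convergence study,
# with a 95% confidence interval on the asymptotic (mesh-independent) value.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, a, b, c, d):
    # 4-parameter logistic in x = log10(d.o.f.); "d" is the upper asymptote
    return d + (a - d) / (1.0 + (x / c) ** b)

dof = np.array([1e3, 1e4, 1e5, 1e6, 1e7])           # d.o.f. (hypothetical)
u = np.array([0.91, 0.96, 0.985, 0.995, 0.998])     # SRQ per mesh (hypothetical)

x = np.log10(dof)
popt, pcov = curve_fit(logistic4, x, u, p0=[u[0], 1.0, np.median(x), u[-1]])

asymptote = popt[3]
stderr = np.sqrt(pcov[3, 3])                         # std. error of the asymptote
print(f"asymptotic solution: {asymptote:.5f} +/- {1.96 * stderr:.5f} (95% CI)")
```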


Author(s): Kevin Irick, Nima Fathi

The evaluation of effective material properties in heterogeneous materials (e.g., composites or multicomponent structures) has direct relevance to a vast number of applications, including nuclear fuel assembly, electronic packaging, municipal solid waste, and others. The work described in this paper is devoted to the numerical verification assessment of the thermal behavior of porous materials obtained from thermal modeling and simulation. Two-dimensional, steady-state analyses were conducted on unit-cell nano-porous media models using the finite element method (FEM). The effective thermal conductivity of the structures was examined across a range of porosities. The geometries of the models were generated based on ordered cylindrical pores at six different porosities. The dimensionless effective thermal conductivity was compared in all simulated cases. In this investigation, the method of manufactured solutions (MMS) was used to perform code verification, and the grid convergence index (GCI) was employed to estimate discretization uncertainty (solution verification). The system response quantity (SRQ) under investigation is the dimensionless effective thermal conductivity across the unit cell. Code verification shows the solver to be approximately second-order accurate. It was found that the introduction of porosity to the material reduces the effective thermal conductivity, as anticipated. This approach can be readily generalized to study a wide variety of porous solids, from nano-structured materials to geological structures.
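
For reference, a minimal sketch of a standard three-grid GCI evaluation (Roache's formulation with a safety factor of 1.25) is given below. The grid solutions shown are hypothetical placeholders, not the paper's results.

```python
# Hypothetical sketch: grid convergence index from three systematically
# refined meshes; f1 is the finest-grid value of the SRQ.
import math

f1, f2, f3 = 0.8452, 0.8461, 0.8497  # SRQ on fine/medium/coarse grids (hypothetical)
r = 2.0                               # constant grid refinement ratio
Fs = 1.25                             # safety factor for a three-grid study

# observed order of accuracy from the three solutions
p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

# relative discretization uncertainty on the fine grid
gci_fine = Fs * abs((f2 - f1) / f1) / (r ** p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"GCI (fine grid)  = {100 * gci_fine:.3f} %")
```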


Author(s): Patricio Peralta, Rafael O. Ruiz, Viviana Meruane

The interest of this work is to describe a framework that allows the use of the well-known dynamic estimators for piezoelectric energy harvesters (PEHs), i.e., deterministic performance estimators, while taking into account the random error associated with the mathematical model and the uncertainties in the model parameters. The framework presented can be employed to perform posterior robust stochastic analysis, which applies when the harvester can be tested or is already installed and experimental data are available. In particular, a procedure is introduced to update the electromechanical properties of PEHs based on Bayesian updating techniques. The means of the updated electromechanical properties are identified by adopting a maximum a posteriori (MAP) estimate, while the associated probability density function is obtained by applying Laplace’s asymptotic approximation (the updated properties can be expressed as a mean value together with a confidence band). The procedure is exemplified using the experimental characterization of 20 PEHs, all with the same nominal characteristics. Results show the capability of the procedure to update not only the electromechanical properties of each PEH (mandatory information for the prediction of a particular PEH) but also the characteristics of the whole sample of harvesters (mandatory information for design purposes). The results reveal the importance of including the model parameter uncertainties in order to generate robust predictive tools in energy harvesting. In that sense, the present framework constitutes a powerful tool in the robust design and prediction of piezoelectric energy harvester performance.
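
The general pattern of a MAP estimate with a Laplace approximation can be sketched as below. The scalar stand-in model, Gaussian prior, data values, and parameter names are all hypothetical; in the paper's setting the model would be the electromechanical model of the harvester.

```python
# Hypothetical sketch: MAP estimate of a model parameter plus a Laplace
# (Gaussian) approximation of its posterior, using BFGS's inverse-Hessian
# estimate as the posterior covariance.
import numpy as np
from scipy.optimize import minimize

data = np.array([1.02, 0.97, 1.05, 1.01])  # measured responses (hypothetical)
prior_mu, prior_sd, noise_sd = 1.0, 0.2, 0.05

def neg_log_posterior(theta):
    # model(theta) = theta here for simplicity; a real application would
    # evaluate the harvester's electromechanical model instead
    log_lik = -0.5 * np.sum((data - theta) ** 2) / noise_sd ** 2
    log_prior = -0.5 * (theta - prior_mu) ** 2 / prior_sd ** 2
    return -(log_lik + log_prior)

res = minimize(lambda t: neg_log_posterior(t[0]), x0=[prior_mu])  # BFGS default
theta_map = res.x[0]
post_sd = np.sqrt(res.hess_inv[0, 0])  # Laplace: posterior ~ N(theta_map, H^-1)
print(f"theta_MAP = {theta_map:.4f}, 95% band = +/- {1.96 * post_sd:.4f}")
```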


Author(s): Gregory A. Banyay, Stephen D. Smith, Jason S. Young

The structures associated with the nuclear steam supply system (NSSS) of a pressurized water reactor (PWR) warrant evaluation under various non-stationary loading conditions that could occur over the life of a nuclear power plant, including those associated with a loss-of-coolant accident and a seismic event. The dynamic structural system is represented by a finite element model whose physical parameters carry significant epistemic and aleatory uncertainties. To provide an enhanced understanding of the influence of these uncertainties on model results, a sensitivity analysis is performed. This work demonstrates the construction of a computational design of experiments that runs the finite element model a sufficient number of times to train and verify a unique aggregate surrogate model. Adaptive sampling is employed in order to reduce the overall computational burden. The surrogate model is then used to perform both global and local sensitivity analyses.
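
The basic pattern (space-filling design of experiments, surrogate trained on the expensive model, sensitivity sweep on the surrogate) can be sketched as follows. The stand-in model, parameter names, and the crude one-at-a-time sensitivity measure are illustrative assumptions, not the authors' workflow or their adaptive sampling scheme.

```python
# Hypothetical sketch: Latin hypercube design + Gaussian process surrogate
# + a simple global sensitivity sweep over each input.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_fe_model(x):
    # stand-in for the finite element run: x = [stiffness, damping, mass]
    return np.sin(3 * x[:, 0]) + 0.3 * x[:, 1] + 0.05 * x[:, 2] ** 2

design = qmc.LatinHypercube(d=3, seed=0)
X = design.random(n=80)            # 80 training runs of the "FE model"
y = expensive_fe_model(X)

surrogate = GaussianProcessRegressor().fit(X, y)

# crude sensitivity check: variance of the surrogate response as each input
# sweeps its range with the others held at their midpoints
for j, name in enumerate(["stiffness", "damping", "mass"]):
    grid = np.full((50, 3), 0.5)
    grid[:, j] = np.linspace(0.0, 1.0, 50)
    print(name, np.var(surrogate.predict(grid)))
```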


Author(s): Michael Carolan, Benjamin Perlman, Francisco González

The U.S. Department of Transportation’s Federal Railroad Administration (FRA) has sponsored a series of full-scale dynamic shell impact tests on railroad tank cars. Currently, there are no required finite element (FE) model validation criteria or procedures in the field of railroad tank car puncture testing and simulation. Within the shell impact testing program sponsored by FRA, comparisons made between test measurements and simulation results have included the overall force-time or force-indentation histories, the puncture/non-puncture outcomes, the rigid body motions of the tank car, the internal pressures within the lading, and the energy absorbed by the tank during the impact. While qualitative comparisons (e.g., the shape of the indentation) and quantitative comparisons (e.g., peak impact forces) have been made between tests and simulations, there are currently no requirements or guidelines on which specific behaviors should be compared, or what measurable level of agreement would be an acceptable demonstration of model validation. It is desirable that a framework for model validation, including well-defined criteria for comparison, be developed or adopted if simulation is to be used without companion shell impact testing for future tank car development. One of the challenges in developing model validation criteria and procedures for tank car shell puncture is the number of complex behaviors encountered in this problem and the variety of approaches that could be used in simulating them. The FE models used to simulate tank car shell impacts include several complex behaviors, each of which can introduce uncertainty into the overall response of the model: dynamic impacts, nonlinear steel material behavior (including ductile tearing), two-phase (water and air) fluid-structure interaction, and contact between rigid and deformable bodies. Several candidate qualitative and quantitative comparisons of test measurements and simulation results are discussed in this paper. They are applied to two recently completed shell impact tests of railroad tank cars sponsored by FRA. For each test, companion FE simulation was performed by the Volpe National Transportation Systems Center. The process of FE model development, including material characterization, is discussed in detail for each FE model. For each test, the test objectives, procedures, and key instrumentation are summarized. For each set of tests and simulations, several corresponding results are compared between the test measurements and the simulation results. Additionally, this paper includes discussion of approaches to model validation employed in other industries or areas of transportation where similar modeling aspects have been encountered.
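
Two of the simplest candidate quantitative comparisons (peak-force error and a normalized RMS error over the force-time history) can be sketched as below. The signals and the specific metric choices are illustrative assumptions, not FRA's criteria or the test data.

```python
# Hypothetical sketch: two candidate quantitative comparisons between a
# measured and a simulated impact force-time history.
import numpy as np

t = np.linspace(0.0, 0.3, 300)                       # time, s (hypothetical)
f_test = 1e6 * np.sin(np.pi * t / 0.3)               # "measured" force, N
f_sim = 0.95e6 * np.sin(np.pi * (t - 0.01) / 0.3)    # "simulated" force, N

# relative error in peak impact force
peak_err = (f_sim.max() - f_test.max()) / f_test.max()

# RMS error over the full history, normalized by the measured peak
nrmse = np.sqrt(np.mean((f_sim - f_test) ** 2)) / f_test.max()

print(f"peak force error: {100 * peak_err:+.1f} %")
print(f"normalized RMSE : {100 * nrmse:.1f} %")
```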


Author(s): Paul Gardner, Charles Lord, Robert J. Barthorpe

Probabilistic modelling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution, or summary statistical moments, of output quantities. A challenge in applying probabilistic models is validating the output distributions. An ideal validation metric is one that intuitively provides information on key divergences between the output and validation distributions. Furthermore, it should be interpretable across different problems in order to informatively select the appropriate statistical method. In this paper, two families of measures for quantifying differences between distributions are compared: f-divergences and integral probability metrics (IPMs). These measures are discussed and evaluated as validation metrics, with comments on ease of computation, interpretability, and the quantity of information provided.
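
A minimal sketch contrasting one member of each family is given below: the KL divergence (an f-divergence, estimated here from histograms) and the one-dimensional Wasserstein distance (an IPM). The samples are synthetic and the estimators are generic choices, not the paper's specific measures or data.

```python
# Hypothetical sketch: comparing a model output distribution to a validation
# distribution with one f-divergence and one integral probability metric.
import numpy as np
from scipy.stats import entropy, wasserstein_distance

rng = np.random.default_rng(0)
model_out = rng.normal(0.0, 1.0, 5000)    # model output samples (synthetic)
validation = rng.normal(0.3, 1.2, 5000)   # validation data samples (synthetic)

bins = np.linspace(-5, 5, 60)
p, _ = np.histogram(model_out, bins=bins, density=True)
q, _ = np.histogram(validation, bins=bins, density=True)
eps = 1e-12                                # guard against empty bins
kl = entropy(p + eps, q + eps)             # KL(p || q), an f-divergence

w1 = wasserstein_distance(model_out, validation)  # Wasserstein-1, an IPM
print(f"KL divergence ~ {kl:.3f}, Wasserstein-1 ~ {w1:.3f}")
```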

