Our increased dependence on mathematical models for engineering design, coupled with our decreased dependence on experimental observation, leads to an obvious question: how do we know that our models are valid representations of physical processes? We test models by comparing model predictions with experimental observations. As our models become more complex (e.g., multiphysics models), testing them over the range of possible applications becomes more difficult. This difficulty is compounded by the uncertainty invariably present in the experimental data used to test the model, the uncertainties in the parameters incorporated into the model, and the uncertainties in the model structure itself. When significant uncertainties of these types are present, evaluating model validity through graphical comparisons of model predictions with experimental observations becomes highly subjective. Here we consider the impact of uncertainty and the role of uncertainty analysis in model validation. We focus on uncertainty in model predictions due to parameter uncertainty, and on experimental uncertainty due to measurement noise. We show that characterizing these uncertainties yields a meaningful metric for model testing that is less subjective than the traditional “view graph norm” or the evaluation of correlation coefficients. We demonstrate this methodology through its application to a model and experimental observations of thermally induced foam decomposition.
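The abstract does not specify the validation metric itself, but the idea it describes can be sketched in a minimal, hypothetical form: propagate parameter uncertainty through the model by Monte Carlo sampling, combine the resulting prediction spread with the measurement-noise variance, and report a standardized residual for each observation. The toy exponential-decay model, the Gaussian parameter distribution, and the 3-sigma consistency check below are all illustrative assumptions, not the paper's actual foam-decomposition model or metric.

```python
import math
import random
import statistics

def model(t, k):
    # Toy first-order decay model (an illustrative stand-in for a foam
    # decomposition model): remaining mass fraction at time t with rate k.
    return math.exp(-k * t)

def standardized_residuals(times, observations, k_mean, k_std, noise_std,
                           n_samples=5000, seed=0):
    """Propagate parameter uncertainty by Monte Carlo and return, for each
    observation, a residual standardized by the combined prediction and
    measurement uncertainty (assumed independent and Gaussian here)."""
    rng = random.Random(seed)
    ks = [rng.gauss(k_mean, k_std) for _ in range(n_samples)]
    residuals = []
    for t, y_obs in zip(times, observations):
        preds = [model(t, k) for k in ks]
        mu = statistics.fmean(preds)
        var_pred = statistics.pvariance(preds)
        # Combined uncertainty: prediction variance plus noise variance.
        sigma = math.sqrt(var_pred + noise_std ** 2)
        residuals.append((y_obs - mu) / sigma)
    return residuals

# Synthetic "experiment": data generated from the same model (k = 0.5)
# plus measurement noise, so the model should look consistent.
data_rng = random.Random(1)
times = [0.5, 1.0, 2.0, 4.0]
obs = [math.exp(-0.5 * t) + data_rng.gauss(0.0, 0.02) for t in times]
z = standardized_residuals(times, obs, k_mean=0.5, k_std=0.05, noise_std=0.02)
consistent = all(abs(v) < 3.0 for v in z)  # simple 3-sigma consistency check
```

A residual far outside a few sigma flags a prediction/observation disagreement that the stated uncertainties cannot explain, which is the kind of objective judgment the abstract contrasts with graphical comparison.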