Validating computational models with multiple, repeated, and correlated functional responses of a dynamic system requires uncertainty quantification and propagation, multivariate data correlation, and objective, robust metrics. This paper presents a new method of model validation under uncertainty that addresses these critical issues. The method rests on three key technologies: uncertainty quantification and propagation through statistical data analysis, probabilistic principal component analysis (PPCA), and interval-based Bayesian hypothesis testing. Statistical data analysis quantifies the variability of the repeated tests and of the computer-aided engineering (CAE) model results. The differences between the mean test and CAE responses are extracted as validation features, and PPCA handles the multivariate correlation and reduces the dimension of the multivariate difference curves. The variability of the repeated test and CAE data is propagated through the data transformation into the PPCA space. In addition, physics-based thresholds are defined and transformed into the PPCA space. Finally, interval-based Bayesian hypothesis testing is conducted on the reduced difference data to assess model validity under uncertainty. A real-world dynamic system example with one set of repeated test data and two stochastic CAE models demonstrates the new approach.
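The pipeline described above can be illustrated with a minimal numerical sketch. This is not the authors' exact formulation: the data are synthetic stand-ins, the threshold mapping and the flat-prior interval test are simplified assumptions, and the PPCA parameters follow the standard maximum-likelihood solution. It shows the sequence of steps only: form difference curves, reduce them with PPCA, map a physics-based threshold band into the latent space, and evaluate posterior odds that the mean reduced difference lies inside the interval.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical): 20 repeated test curves and 20 CAE
# realizations of one functional response, sampled at 50 time points.
t = np.linspace(0.0, 1.0, 50)
test = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((20, t.size))
cae = np.sin(2 * np.pi * t) + 0.02 + 0.05 * rng.standard_normal((20, t.size))

# Validation feature: difference curves between CAE and test realizations.
diff = cae - test

# PPCA maximum-likelihood solution: eigendecompose the centered differences,
# keep q principal directions, absorb the rest as isotropic noise variance.
Xc = diff - diff.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eig = s ** 2 / (Xc.shape[0] - 1)
q = 2
sigma2 = eig[q:].mean()                      # ML noise-variance estimate
W = Vt[:q].T * np.sqrt(np.maximum(eig[:q] - sigma2, 0.0))
M = W.T @ W + sigma2 * np.eye(q)

def to_latent(curves):
    """Posterior-mean projection of (n, 50) curves into the q-dim PPCA space."""
    return np.linalg.solve(M, W.T @ np.atleast_2d(curves).T).T

z = to_latent(diff)                          # variability propagated to PPCA space
z_bar = z.mean(axis=0)                       # reduced mean-difference feature
z_se = z.std(axis=0, ddof=1) / math.sqrt(z.shape[0])

# Physics-based threshold: a constant +/- eps band on the difference curve,
# mapped through the same transformation (an illustrative choice).
eps = 0.1
eps_z = np.abs(to_latent(np.full(t.size, eps))[0])

# Interval-based Bayesian test: with a flat prior, the posterior of the mean
# reduced difference is approximately Normal(z_bar, z_se); posterior odds of
# falling inside [-eps_z, eps_z] above 1 favor "model valid".
ncdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
p_in = float(np.prod([ncdf((eps_z[j] - z_bar[j]) / z_se[j])
                      - ncdf((-eps_z[j] - z_bar[j]) / z_se[j])
                      for j in range(q)]))
odds = p_in / max(1.0 - p_in, 1e-12)
print(f"latent mean diff={z_bar.round(3)}, P(inside)={p_in:.3f}, odds={odds:.2f}")
```

Projecting the threshold band with the same linear map as the data keeps the test interval and the reduced features in a common coordinate system, which is the point of transforming thresholds into the PPCA space.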