Towards improving the predictive capability of computer simulations by integrating inverse Uncertainty Quantification and quantitative validation with Bayesian hypothesis testing

2021 ◽ Vol 383 ◽ pp. 111423
Author(s): Ziyu Xie ◽ Farah Alsafadi ◽ Xu Wu

2012 ◽ Vol 134 (3)
Author(s): Zhenfei Zhan ◽ Yan Fu ◽ Ren-Jye Yang ◽ Yinghong Peng

Validation of computational models with multiple, repeated, and correlated functional responses for a dynamic system requires the consideration of uncertainty quantification and propagation, multivariate data correlation, and objective robust metrics. This paper presents a new method of model validation under uncertainty to address these critical issues. The three key technologies of this new method are uncertainty quantification and propagation using statistical data analysis, probabilistic principal component analysis (PPCA), and interval-based Bayesian hypothesis testing. Statistical data analysis is used to quantify the variabilities of the repeated tests and of the computer-aided engineering (CAE) model results. The differences between the mean values of the test and CAE data are extracted as validation features, and PPCA is employed to handle multivariate correlation and to reduce the dimension of the multivariate difference curves. The variabilities of the repeated test and CAE data are propagated through the data transformation to the PPCA space. In addition, physics-based thresholds are defined and transformed to the PPCA space. Finally, interval-based Bayesian hypothesis testing is conducted on the reduced difference data to assess the model validity under uncertainty. A real-world dynamic system example that has one set of repeated test data and two stochastic CAE models is used to demonstrate this new approach.
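The pipeline this abstract describes (difference features, PPCA-based dimension reduction, interval-based testing) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: ordinary SVD-based PCA stands in for PPCA, and the data, the number of retained components, and the acceptance threshold are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeated test and CAE responses: 20 repeats x 50 time points.
base = np.sin(np.linspace(0, np.pi, 50))
test = base + rng.normal(0.00, 0.1, size=(20, 50))
cae = base + rng.normal(0.05, 0.1, size=(20, 50))

# Validation feature: difference curves between paired test/CAE samples.
diff = test - cae  # (20, 50), multivariate and correlated

# SVD-based PCA as a stand-in for PPCA: project the correlated
# difference curves onto a small number of principal directions.
centered = diff - diff.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2  # retained components (assumed)
scores = centered @ Vt[:k].T  # reduced difference data, (20, k)

# Physics-based threshold, assumed already transformed to the reduced space.
threshold = 0.5

# Interval-based assessment: fraction of reduced samples whose norm lies
# inside the acceptance interval, used here as a simple posterior-odds proxy.
inside = np.linalg.norm(scores, axis=1) <= threshold
posterior_odds = inside.mean() / max(1e-12, 1.0 - inside.mean())
print(f"P(valid) = {inside.mean():.2f}, odds = {posterior_odds:.2f}")
```

In the paper's method the reduction is probabilistic (PPCA), so the variabilities of the repeated data propagate through the transformation; the plain projection above omits that step for brevity.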


Author(s):  
Alexander Ly ◽  
Eric-Jan Wagenmakers

The "Full Bayesian Significance Test e-value", henceforth FBST ev, has received increasing attention across a range of disciplines, including psychology. We show that the FBST ev leads to four problems: (1) the FBST ev cannot quantify evidence in favor of a null hypothesis and therefore also cannot discriminate "evidence of absence" from "absence of evidence"; (2) the FBST ev is susceptible to sampling to a foregone conclusion; (3) the FBST ev violates the principle of predictive irrelevance, such that it is affected by data that are equally likely to occur under the null hypothesis and the alternative hypothesis; (4) the FBST ev suffers from the Jeffreys-Lindley paradox in that it does not include a correction for selection. These problems also plague the frequentist p-value. We conclude that although the FBST ev may be an improvement over the p-value, it does not provide a reasonable measure of evidence against the null hypothesis.
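For readers unfamiliar with the quantity under discussion, the FBST e-value is the posterior mass outside the "tangential set" of points whose posterior density exceeds the density at the null. A minimal Monte Carlo sketch for a toy normal model with a flat prior follows; the data and the point null are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy setting: x_i ~ N(mu, 1), flat prior on mu, point null H0: mu = 0.
x = rng.normal(0.3, 1.0, size=50)
n, xbar = len(x), x.mean()

# Posterior under the flat prior: mu | x ~ N(xbar, 1/n).
post = norm(loc=xbar, scale=1.0 / np.sqrt(n))

# FBST tangential set: T = {mu : posterior density(mu) > density at the null}.
# ev(H0) = 1 - P(T), estimated here by Monte Carlo.
samples = rng.normal(xbar, 1.0 / np.sqrt(n), size=100_000)
ev = 1.0 - np.mean(post.pdf(samples) > post.pdf(0.0))
print(f"FBST ev(H0) = {ev:.3f}")
```

In this symmetric unimodal case the ev reduces to 2(1 - Phi(sqrt(n)|xbar|)), which is numerically the two-sided p-value for known unit variance. That coincidence illustrates the abstract's point that the ev inherits the p-value's problems.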


2002 ◽ Vol 70 (3) ◽ pp. 351
Author(s): Jose M. Bernardo ◽ Raul Rueda
