Preposterior Analysis to Select Experimental Responses for Improving Identifiability in Model Uncertainty Quantification

Author(s):  
Zhen Jiang ◽  
Wei Chen ◽  
Daniel W. Apley

In physics-based engineering modeling and uncertainty quantification, distinguishing the effects of two main sources of uncertainty — calibration parameter uncertainty and model discrepancy — is challenging. Previous research has shown that identifiability can sometimes be improved by experimentally measuring multiple responses of the system that share a mutual dependence on a common set of calibration parameters. In this paper, we address the issue of how to select the most appropriate subset of responses to measure experimentally, to best enhance identifiability. We propose a preposterior analysis approach that, prior to conducting the physical experiments but after conducting computer simulations, can predict the degree of identifiability that will result using different subsets of responses to measure experimentally. We quantify identifiability via the posterior covariance of the calibration parameters, and predict it via the preposterior covariance from a modular Bayesian Monte Carlo analysis of a multi-response Gaussian process model. The proposed method is applied to a simply supported beam example to select two out of six responses to best improve identifiability. The estimated preposterior covariance is compared to the actual posterior covariance to demonstrate the effectiveness of the method.
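As an illustration of the preposterior idea, the minimal Python sketch below ranks candidate two-response subsets by the Monte Carlo average of the posterior variance of a single calibration parameter. The toy response models, noise level, and grid-based posterior are illustrative assumptions, not the paper's multi-response Gaussian process machinery or its beam example.

```python
# Sketch of a preposterior Monte Carlo loop over candidate response subsets
# (hypothetical toy model): for each subset we (1) draw hypothetical
# experimental data from the prior predictive, (2) compute the posterior
# variance of theta on a grid, and (3) average over draws to obtain the
# preposterior variance, which predicts identifiability before testing.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
theta_grid = np.linspace(0.0, 1.0, 201)           # prior support of theta
prior = np.ones_like(theta_grid) / theta_grid.size

# hypothetical responses sharing a mutual dependence on a common theta
models = {
    "r1": lambda th: 2.0 * th,
    "r2": lambda th: np.sin(3.0 * th),
    "r3": lambda th: th ** 2,
}
sigma_e = 0.05                                     # measurement-noise std

def posterior_var(data, subset):
    """Grid-based posterior variance of theta for one simulated data set."""
    logp = np.log(prior)
    for name in subset:
        logp += -0.5 * ((data[name] - models[name](theta_grid)) / sigma_e) ** 2
    w = np.exp(logp - logp.max())
    w /= w.sum()
    mean = np.sum(w * theta_grid)
    return np.sum(w * (theta_grid - mean) ** 2)

def preposterior_var(subset, n_draws=500):
    """Average posterior variance over hypothetical experimental outcomes."""
    total = 0.0
    for _ in range(n_draws):
        th = rng.choice(theta_grid, p=prior)       # draw a 'true' theta
        data = {k: f(th) + sigma_e * rng.standard_normal()
                for k, f in models.items()}
        total += posterior_var(data, subset)
    return total / n_draws

# rank all two-response subsets: smaller preposterior variance = better
for s in combinations(models, 2):
    print(s, preposterior_var(s))
```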

2012 ◽  
Vol 134 (10) ◽  
Author(s):  
Paul D. Arendt ◽  
Daniel W. Apley ◽  
Wei Chen ◽  
David Lamb ◽  
David Gorsich

In physics-based engineering modeling, the two primary sources of model uncertainty, which account for the differences between computer models and physical experiments, are parameter uncertainty and model discrepancy. Distinguishing the effects of the two sources of uncertainty can be challenging. For situations in which identifiability cannot be achieved using only a single response, we propose to improve identifiability by using multiple responses that share a mutual dependence on a common set of calibration parameters. To that end, we extend the single response modular Bayesian approach for calculating posterior distributions of the calibration parameters and the discrepancy function to multiple responses. Using an engineering example, we demonstrate that including multiple responses can improve identifiability (as measured by posterior standard deviations) by an amount that ranges from minimal to substantial, depending on the characteristics of the specific responses that are combined.
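A linear-Gaussian toy in which responses share one calibration parameter shows, in closed form, why combining responses can shrink the posterior standard deviation. All coefficients and variances below are assumed for illustration and stand in for the abstract's Gaussian-process formulation.

```python
# Sketch: each response i is modeled as y_i = a_i * theta + delta_i + eps_i,
# with delta_i ~ N(0, tau_i^2) absorbing model discrepancy (treated here as
# independent noise) and eps_i ~ N(0, sigma^2) as measurement noise. With a
# Gaussian prior on theta, the posterior precision adds one term per response.
import numpy as np

s2 = 1.0                                  # prior variance of theta
sigma2 = 0.05 ** 2                        # measurement-noise variance
a = {"r1": 1.0, "r2": 0.7}                # sensitivity of each response to theta
tau2 = {"r1": 0.3 ** 2, "r2": 0.2 ** 2}   # discrepancy variances

def posterior_std(subset):
    """Closed-form posterior std of theta for a set of measured responses."""
    precision = 1.0 / s2
    for r in subset:
        precision += a[r] ** 2 / (tau2[r] + sigma2)
    return np.sqrt(1.0 / precision)

print("r1 only :", posterior_std(["r1"]))
print("r2 only :", posterior_std(["r2"]))
print("r1 + r2 :", posterior_std(["r1", "r2"]))   # smallest posterior std
```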


2019 ◽  
Vol 14 (5) ◽  
Author(s):  
Baoqiang Zhang ◽  
Qintao Guo ◽  
Yan Wang ◽  
Ming Zhan

Extensive research has been devoted to engineering analysis in the presence of parameter uncertainty alone. In the modeling process, however, model-form uncertainty inevitably arises from a lack of information and knowledge, as well as from the assumptions and simplifications made in the models, and it cannot be ignored. To better quantify model-form uncertainty in vibration systems with multiple degrees of freedom, this paper introduces fractional derivatives as model-form hyperparameters. A new general model calibration approach is proposed to separate and reduce model-form and parameter uncertainty based on multiple fractional frequency response functions (FFRFs). The new calibration method is verified on a simulated two-degree-of-freedom system. The studies demonstrate that the new model-form and parameter uncertainty quantification method is robust.
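To make the fractional-derivative idea concrete, the sketch below evaluates a single-degree-of-freedom frequency response function with a fractional-order damping term. The parameter values are illustrative assumptions, not the paper's two-degree-of-freedom system.

```python
# Sketch of a fractional FRF, H(w) = 1 / (k - m*w^2 + c*(i*w)^alpha), where
# alpha is the model-form hyperparameter; alpha = 1 recovers ordinary
# viscous damping. All numerical values are assumed for illustration.
import numpy as np

m, c, k = 1.0, 0.4, 100.0                  # mass, damping, stiffness
w = np.linspace(0.1, 30.0, 600)            # frequency grid (rad/s)

def ffrf_mag(alpha):
    """Magnitude of the fractional FRF for a given fractional order alpha."""
    return np.abs(1.0 / (k - m * w ** 2 + c * (1j * w) ** alpha))

for alpha in (0.6, 1.0, 1.4):
    print(f"alpha = {alpha}: resonance peak magnitude = {ffrf_mag(alpha).max():.4f}")
```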


Author(s):  
Manuel Arias Chao ◽  
Darrel S. Lilley ◽  
Peter Mathé ◽  
Volker Schloßhauer

Calibration and uncertainty quantification for gas turbine (GT) performance models is a key activity for GT manufacturers. The adjustment between the numerical model and measured GT data is obtained with a calibration technique. Since both the calibration parameters and the measurement data are uncertain, the calibration process is intrinsically stochastic. Traditional approaches to calibrating a numerical GT model are deterministic, so the uncertainty remaining in the calibrated GT model is not clearly derived. However, there is a business need to provide the probability of GT performance predictions at tested or untested conditions. Furthermore, a GT performance prediction might be required for a new GT model for which no test data are yet available. In this case, quantifying the uncertainty of the baseline GT on which the new development is based, and propagating the design uncertainty for the new GT, are required for risk assessment and decision making. Using a GT model as a benchmark, the calibration problem is discussed and several possible model calibration methodologies are presented. Uncertainty quantification based on both a conventional least squares method and a Bayesian approach is presented and discussed. For the general nonlinear model, a fully Bayesian approach is conducted, and the posterior of the calibration problem is computed with a Markov chain Monte Carlo simulation using a Metropolis-Hastings sampling scheme. When the calibration parameters are considered dependent on operating conditions, a novel formulation of the GT calibration problem is presented in terms of a Gaussian process regression problem.
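A minimal random-walk Metropolis-Hastings sampler of the kind the abstract describes might look as follows. The quadratic performance model, flat prior, and all constants are assumed placeholders, not the authors' gas-turbine model.

```python
# Sketch of random-walk Metropolis-Hastings sampling for the posterior of a
# calibration parameter theta given noisy data at several operating conditions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)                    # operating conditions
theta_true = 0.7
y = theta_true * x ** 2 + 0.02 * rng.standard_normal(x.size)  # synthetic data
sigma = 0.02                                     # known measurement-noise std

def log_post(theta):
    """Gaussian likelihood times a flat prior on [0, 2]."""
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    resid = y - theta * x ** 2
    return -0.5 * np.sum((resid / sigma) ** 2)

theta, lp = 1.0, log_post(1.0)                   # chain start
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

burned = np.array(samples[1000:])                # discard burn-in
print("posterior mean:", burned.mean(), " posterior std:", burned.std())
```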


2021 ◽  
pp. 251-262
Author(s):  
Timothy E. Essington

The chapter “Sensitivity Analysis” reviews why sensitivity analysis is a critical component of mathematical modeling and the different ways of approaching it. A sensitivity analysis is an attempt to identify the parts of a model (e.g., structure, parameter values) that are most important in governing its output. It is an important part of modeling because it quantifies the degree of uncertainty in the model prediction and, in many cases, is the main goal of the modeling exercise (i.e., the model was developed to identify the most important ecological processes). The chapter covers the idea of “local” versus “global” sensitivity analysis via individual parameter perturbation, and how interactive effects of parameters can be revealed via Monte Carlo analysis. Structural versus parameter uncertainty is also explained and explored.
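The local-versus-global distinction can be made concrete with a short sketch: one-at-a-time perturbation around a baseline, then a joint Monte Carlo sample that exposes how parameters act together. The toy model and parameter ranges are assumed for illustration.

```python
# Sketch of 'local' one-at-a-time (OAT) sensitivity versus a simple global
# Monte Carlo sensitivity screen for a toy three-parameter model.
import numpy as np

rng = np.random.default_rng(2)

def model(r, K, m):
    """Toy output computed from three parameters (illustrative only)."""
    return r * K / (m + r)

base = {"r": 0.5, "K": 100.0, "m": 0.1}

# local OAT: perturb each parameter by +10% and record the relative change
y0 = model(**base)
for p in base:
    pert = dict(base)
    pert[p] *= 1.10
    print(f"OAT {p}: {100 * (model(**pert) - y0) / y0:+.1f}% output change")

# global Monte Carlo: sample all parameters jointly, then correlate each
# parameter with the output to screen for influence and interactions
n = 5000
draws = {p: v * rng.uniform(0.5, 1.5, n) for p, v in base.items()}
y = model(**draws)
for p in base:
    rho = np.corrcoef(draws[p], y)[0, 1]
    print(f"MC  {p}: output correlation = {rho:+.2f}")
```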


2019 ◽  
Vol 29 (08) ◽  
pp. 2050132
Author(s):  
Muhammed Emin Başak

Active elements are fundamental building blocks for a wide range of scientific and industrial processes. Many researchers have examined active devices to implement filters, oscillators, rectifiers, and converters. This paper presents the current differencing operational amplifier (CDOA) as an active element, implemented for the first time with CMOS transistors. The input stage of the circuit is a current differencing unit, followed by a conventional operational amplifier (Op-Amp). A new realization of a notch filter based on the CDOA is proposed, and a voltage-mode band-pass filter and a current-mode notch filter are presented as further filter applications. Simulation results using the TSMC 0.18-μm CMOS process model verify the theoretical analyses. Sensitivity, noise, total harmonic distortion (THD), and Monte Carlo analyses were performed to demonstrate the effectiveness of the proposed active element and notch filter.
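For reference, the target behavior of a notch filter can be sketched with the ideal second-order transfer function evaluated along the imaginary axis. This illustrates the response shape only; it is not the CDOA circuit or its CMOS implementation, and the notch frequency and Q are assumed values.

```python
# Sketch of an ideal second-order notch response,
# H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)*s + w0^2), evaluated on s = j*w.
import numpy as np

w0, Q = 2 * np.pi * 1e3, 5.0             # assumed notch frequency (1 kHz) and Q
w = 2 * np.pi * np.logspace(1, 5, 400)   # 10 Hz .. 100 kHz
s = 1j * w
H = (s ** 2 + w0 ** 2) / (s ** 2 + (w0 / Q) * s + w0 ** 2)

mag_db = 20 * np.log10(np.abs(H))
print("deepest rejection: %.1f dB near %.0f Hz"
      % (mag_db.min(), w[np.argmin(mag_db)] / (2 * np.pi)))
```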


Author(s):  
Paul D. Arendt ◽  
Wei Chen ◽  
Daniel W. Apley

Model updating, which utilizes mathematical means to combine model simulations with physical observations for improving model predictions, has been viewed as an integral part of a model validation process. While calibration is often used to “tune” uncertain model parameters, bias correction has been used to capture model inadequacy due to a lack of knowledge of the physics of a problem. Although both sources of uncertainty coexist, the two techniques are often implemented separately in model updating. This paper examines existing approaches to model updating and presents a modular Bayesian approach as a comprehensive framework that accounts for many sources of uncertainty in a typical model updating process and provides stochastic predictions for the purpose of design. In addition to the uncertainty in the computer model parameters and the computer model itself, the framework uses Gaussian process models to account for experimental uncertainty and the uncertainty due to the lack of data in both computer simulations and physical experiments. Several challenges are apparent in the implementation of the modular Bayesian approach. We argue that distinguishing between uncertain model parameters (calibration) and systematic inadequacies (bias correction) is often quite challenging due to an identifiability issue. We present several explanations and examples of this issue and highlight the need for future research on distinguishing between the two sources of uncertainty.
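The identifiability issue can be illustrated in a few lines: without constraints on the discrepancy function, different calibration values paired with different discrepancies reproduce the experimental data equally well. The toy simulator and data below are assumed for illustration.

```python
# Sketch of the calibration/bias-correction trade-off: for any theta, the
# discrepancy delta(x) = y_exp(x) - eta(x, theta) makes the corrected model
# eta(x, theta) + delta(x) fit the data exactly, so the data alone cannot
# separate the two sources of uncertainty.
import numpy as np

x = np.linspace(0.0, 1.0, 50)

def eta(x, theta):
    """Toy computer model (illustrative only)."""
    return theta * x

y_exp = 0.8 * x + 0.1 * np.sin(4 * x)    # 'experimental' observations

for theta in (0.8, 1.0):
    delta = y_exp - eta(x, theta)        # discrepancy that absorbs the misfit
    resid = y_exp - (eta(x, theta) + delta)
    print(f"theta = {theta}: max |residual| = {np.abs(resid).max():.2e}, "
          f"max |delta| = {np.abs(delta).max():.2f}")
```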

