Parameter uncertainty quantification for a four-equation transition model using a data assimilation approach

2020, Vol 158, pp. 215-226
Author(s): Muchen Yang, Zhixiang Xiao
2019, Vol 131, pp. 89-101
Author(s): Rohitash Chandra, Danial Azam, R. Dietmar Müller, Tristan Salles, Sally Cripps

SPE Journal, pp. 1-29
Author(s): Nanzhe Wang, Haibin Chang, Dongxiao Zhang

Summary A deep learning framework, called the theory-guided convolutional neural network (TgCNN), is developed for efficient uncertainty quantification and data assimilation of reservoir flow with uncertain model parameters. The performance of the proposed framework in terms of accuracy and computational efficiency is assessed by comparing it to classical approaches in reservoir simulation. The essence of the TgCNN is to take into consideration both the available data and the underlying physical/engineering principles. The stochastic parameter fields and time matrix comprise the input of the convolutional neural network (CNN), whereas the output is the quantity of interest (e.g., pressure, saturation, etc.). The TgCNN is trained with available data while being simultaneously guided by theory (e.g., governing equations, other physical constraints, and engineering controls) of the underlying problem. The trained TgCNN serves as a surrogate that can predict the solutions of the reservoir flow problem with new stochastic parameter fields. Based on the TgCNN surrogate, the Monte Carlo (MC) method and the iterative ensemble smoother (IES) method can then be used to perform uncertainty quantification and data assimilation, respectively, with high efficiency. The proposed paradigm is evaluated with dynamic reservoir flow problems. The results demonstrate that the TgCNN surrogate can be built from a relatively small amount of training data, or even in a label-free manner, and approximates the relationship between model inputs and outputs with high accuracy. The TgCNN surrogate is then used for uncertainty quantification and data assimilation of reservoir flow problems, achieving satisfactory accuracy and higher efficiency compared with state-of-the-art approaches. The novelty of the work lies in the ability to incorporate physical laws and domain knowledge into the deep learning process and to achieve high accuracy with limited training data. 
The trained surrogate can significantly improve the efficiency of uncertainty quantification and data assimilation processes. NOTE: This paper is published as part of the 2021 Reservoir Simulation Conference Special Issue.
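The theory-guided training idea above can be illustrated with a minimal sketch: a composite loss combining data mismatch at sparse observed points with the finite-difference residual of a governing equation. Everything here is an assumption for illustration, not the paper's implementation; in particular, a toy steady 1-D diffusion equation stands in for the reservoir-flow equations, and `theory_guided_loss` is a hypothetical name.

```python
import numpy as np

def theory_guided_loss(u_pred, u_obs, obs_idx, dx, lam=1.0):
    """Hypothetical TgCNN-style loss: data term plus physics-residual term."""
    # Data mismatch on the (sparse) observed locations.
    data_loss = np.mean((u_pred[obs_idx] - u_obs) ** 2)
    # Residual of a governing equation -- here steady 1-D diffusion,
    # d2u/dx2 = 0, discretised with central finite differences.
    residual = (u_pred[2:] - 2.0 * u_pred[1:-1] + u_pred[:-2]) / dx ** 2
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss

# A linear profile satisfies d2u/dx2 = 0 exactly, so both terms vanish:
# the physics term alone would let such a field be learned label-free.
x = np.linspace(0.0, 1.0, 11)
u_lin = 2.0 * x + 1.0
obs_idx = [0, 5, 10]
loss = theory_guided_loss(u_lin, u_lin[obs_idx], obs_idx, dx=0.1)
```

In the label-free regime described in the summary, the data term is simply dropped and only the physics residual (plus boundary/engineering constraints) drives training.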


Author(s): Zhen Jiang, Wei Chen, Daniel W. Apley

In physics-based engineering modeling and uncertainty quantification, distinguishing the effects of two main sources of uncertainty — calibration parameter uncertainty and model discrepancy — is challenging. Previous research has shown that identifiability can sometimes be improved by experimentally measuring multiple responses of the system that share a mutual dependence on a common set of calibration parameters. In this paper, we address the issue of how to select the most appropriate subset of responses to measure experimentally, to best enhance identifiability. We propose a preposterior analysis approach that, prior to conducting the physical experiments but after conducting computer simulations, can predict the degree of identifiability that will result using different subsets of responses to measure experimentally. We quantify identifiability via the posterior covariance of the calibration parameters, and predict it via the preposterior covariance from a modular Bayesian Monte Carlo analysis of a multi-response Gaussian process model. The proposed method is applied to a simply supported beam example to select two out of six responses to best improve identifiability. The estimated preposterior covariance is compared to the actual posterior covariance to demonstrate the effectiveness of the method.
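The subset-selection idea can be sketched under a strong simplification: for a linear-Gaussian model (not the paper's multi-response Gaussian process), the posterior covariance of the calibration parameters does not depend on the observed data, so the "preposterior" and posterior covariances coincide and candidate response subsets can be ranked before any physical experiment. The sensitivity matrix below is synthetic and purely illustrative.

```python
import numpy as np
from itertools import combinations

def posterior_cov(G, prior_cov, noise_var):
    """Posterior covariance of parameters theta for y = G @ theta + noise,
    with Gaussian prior and i.i.d. Gaussian noise (linear-Gaussian sketch)."""
    prec = np.linalg.inv(prior_cov) + G.T @ G / noise_var
    return np.linalg.inv(prec)

# Hypothetical sensitivities of six responses to two calibration parameters.
rng = np.random.default_rng(0)
G_full = rng.normal(size=(6, 2))
prior = np.eye(2)

# Rank all two-response subsets by total posterior variance (trace),
# mirroring the paper's "select two of six responses" example.
scores = {pair: np.trace(posterior_cov(G_full[list(pair)], prior, 0.1))
          for pair in combinations(range(6), 2)}
best = min(scores, key=scores.get)
```

In the paper's nonlinear setting with model discrepancy, this data independence no longer holds, which is exactly why the preposterior covariance must be predicted by Monte Carlo over simulated experimental outcomes.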


2021, Vol 247, pp. 09020
Author(s): J.R. Dixon, B.A. Lindley, T. Taylor, G.T. Parks

Best estimate plus uncertainty is the leading methodology for validating existing safety margins. Developing and licensing these approaches remains a challenge, in part due to the high dimensionality of system codes. Uncertainty quantification is an active area of research aimed at developing appropriate methods for propagating uncertainties, offering stronger scientific justification, dimensionality reduction, and reduced reliance on expert judgement. Inverse uncertainty quantification is required to infer best-estimate values of the input parameters and to reduce their uncertainties, but capturing the full covariance and sensitivity matrices is challenging. Bayesian inverse strategies remain attractive for their predictive modelling and uncertainty-reduction capabilities, leading to substantial model improvements and validation against experiments. This paper uses state-of-the-art data assimilation techniques to obtain best estimates of parameters critical to plant safety. Data assimilation can combine computational, benchmark and experimental measurements; propagate sparse covariance and sensitivity matrices; treat non-linear applications; and accommodate discrepancies. The methodology is demonstrated through application to hot zero power tests in a pressurised water reactor (PWR), performed using the BEAVRS benchmark with Latin hypercube sampling of reactor parameters to determine responses. WIMS 11 (dv23) and PANTHER (V.5:6:4) are used as the coupled neutronics and thermal-hydraulics codes; both are used extensively to model PWRs. Results demonstrate updated best-estimate parameters and reduced uncertainties, with comparisons between posterior distributions generated using maximum-entropy and cost-functional-minimisation techniques presented at recent conferences. Future work will improve the Bayesian inverse framework through the introduction of higher-order sensitivities.
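A cost-functional-minimisation update of this kind has a classic generalised linear least-squares (GLLS) form. The sketch below assumes that form with synthetic matrices; it is not the WIMS/PANTHER application, and the function and variable names are illustrative only.

```python
import numpy as np

def glls_update(theta, C, S, V, y_meas, y_calc):
    """Generalised linear least-squares data-assimilation update.
    theta: prior best-estimate parameters; C: parameter covariance;
    S: sensitivity matrix of responses to parameters; V: measurement
    covariance; y_meas/y_calc: measured and calculated responses."""
    K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)  # Kalman-like gain
    theta_post = theta + K @ (y_meas - y_calc)    # updated best estimate
    C_post = C - K @ S @ C                        # reduced covariance
    return theta_post, C_post

# Toy two-parameter example: the update pulls the parameters toward the
# measurements and shrinks the diagonal of the covariance.
theta = np.zeros(2)
C = np.eye(2)
S = np.eye(2)
V = 0.5 * np.eye(2)
theta_post, C_post = glls_update(theta, C, S, V, np.ones(2), np.zeros(2))
```

The "reduced uncertainties" reported in the abstract correspond to the shrinkage of `C_post` relative to `C`; the sparse covariance and sensitivity matrices mentioned above enter through `C` and `S`.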


2021
Author(s): Juan Ruiz, Maximiliano Sacco, Yicun Zhen, Pierre Tandeo, Manuel Pulido

Quantifying forecast uncertainty is a key aspect of state-of-the-art data assimilation systems, with a large impact on the quality of the analysis and, in turn, of the subsequent forecast. In recent years, most operational data assimilation systems have incorporated state-dependent uncertainty quantification based on four-dimensional variational approaches, ensemble-based approaches, or their combination. However, these quantifications of state-dependent uncertainties carry a large computational cost. Machine learning techniques provide trainable statistical models that can represent complex functional dependencies among different groups of variables. In this work, we use a fully connected neural network with two hidden layers for the state-dependent quantification of forecast uncertainty in the context of data assimilation. The input to the network is a set of three consecutive forecasted states centered at the desired lead time, and the network's output is a corrected forecasted state together with an estimate of its uncertainty. We train the network using a loss function based on the observation likelihood and a large database of forecasts and their corresponding analyses. We perform observing system simulation experiments using the Lorenz 96 model as a proof of concept and to evaluate the technique against classic ensemble-based approaches. Results show that our approach can produce state-dependent estimates of the forecast uncertainty without the need for an ensemble of states (at a much lower computational cost), particularly in the presence of model errors. This opens opportunities for developing a new type of hybrid data assimilation system that combines the capabilities of machine learning and ensembles.
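An observation-likelihood loss of the kind described can be sketched as a Gaussian negative log-likelihood over a predicted mean and log-variance. This assumes independent per-variable Gaussian errors, which is a simplifying assumption for illustration, not necessarily the authors' exact loss.

```python
import numpy as np

def gaussian_nll(y_obs, mean, log_var):
    """Per-variable Gaussian negative log-likelihood (constants dropped).
    A network outputting (mean, log_var) and trained on this loss is
    rewarded for matching its predicted variance to its actual error."""
    var = np.exp(log_var)
    return np.mean(0.5 * (log_var + (y_obs - mean) ** 2 / var))

# For a fixed error of 1, the loss is minimised when the predicted
# variance equals the squared error; an inflated variance costs more.
y, m = np.array([1.0]), np.array([0.0])
loss_matched = gaussian_nll(y, m, np.log(np.array([1.0])))
loss_inflated = gaussian_nll(y, m, np.log(np.array([4.0])))
```

This self-calibration property is what lets a single deterministic network stand in for the spread of an ensemble: the variance head learns a state-dependent error estimate directly from the forecast/analysis database.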

