Uncertainty Quantification of Convolutional Neural Network Metamodel with Image and Numerical Data

2022
Author(s): Xiaoping Du
2020
Vol 149
pp. 103835
Author(s): Diogo Stuani Alves, Gregory Bregion Daniel, Helio Fiori de Castro, Tiago Henrique Machado, Katia Lucchesi Cavalca, et al.

2020
Author(s): Pushkar Khairnar, Ponkrshnan Thiagarajan, Susanta Ghosh

Convolutional neural network (CNN)-based classification models have been used successfully on histopathological images for the detection of diseases. Despite this success, CNNs may yield erroneous or overfitted results when the data are not sufficiently large or are biased. To overcome these limitations and to provide uncertainty quantification, the Bayesian-CNN has recently been proposed. However, we show that the Bayesian-CNN still suffers from inaccuracies, especially in negative predictions. In the present work, we extend the Bayesian-CNN to improve accuracy and the rate of convergence. The proposed model, called the modified Bayesian-CNN, introduces an adaptive activation function that contains a learnable parameter for each neuron. This adaptive activation function dynamically changes the loss function, thereby providing faster convergence and better accuracy. Because the model learns a probability distribution over the network parameters, the uncertainties associated with its predictions are obtained directly. Ensemble averaging over networks reduces overfitting, which in turn improves accuracy on unseen data. The proposed model demonstrates significant improvement by nearly eliminating overfitting and markedly reducing (by about 38%) the number of false-negative predictions. We found that the proposed model predicts higher uncertainty for images that exhibit features of both classes. The uncertainty in the prediction for an individual image can be used to decide when further human-expert intervention is needed. These findings have the potential to advance the state of the art in machine learning-based automatic classification of histopathological images.
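The adaptive activation idea above can be sketched minimally. The class name `AdaptiveTanh`, the choice of tanh, and the per-neuron slope parameter `a` are illustrative assumptions, not the authors' implementation; in the actual model, `a` would be a random variable learned jointly with the network weights by variational inference rather than set by hand:

```python
import numpy as np

class AdaptiveTanh:
    """Sketch of an adaptive activation: f(x) = tanh(a * x) with one
    learnable slope `a` per neuron (names are hypothetical)."""

    def __init__(self, n_neurons, init_a=1.0):
        # one learnable slope per neuron; training would update these
        # alongside the network weights
        self.a = np.full(n_neurons, init_a)

    def forward(self, x):
        # x: (batch, n_neurons); scaling the pre-activation by `a`
        # reshapes the loss surface, which is what the paper credits
        # for faster convergence
        return np.tanh(self.a * x)
```

With `a = 1` this reduces to a plain tanh; as training adjusts each slope, the effective nonlinearity of each neuron changes dynamically.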


SPE Journal
pp. 1-29
Author(s): Nanzhe Wang, Haibin Chang, Dongxiao Zhang

Summary A deep learning framework, called the theory-guided convolutional neural network (TgCNN), is developed for efficient uncertainty quantification and data assimilation of reservoir flow with uncertain model parameters. The performance of the proposed framework in terms of accuracy and computational efficiency is assessed by comparing it to classical approaches in reservoir simulation. The essence of the TgCNN is to take into consideration both the available data and the underlying physical/engineering principles. The stochastic parameter fields and the time matrix comprise the input of the convolutional neural network (CNN), whereas the output is the quantity of interest (e.g., pressure or saturation). The TgCNN is trained with available data while being simultaneously guided by the theory of the underlying problem (e.g., governing equations, other physical constraints, and engineering controls). The trained TgCNN serves as a surrogate that can predict the solutions of the reservoir flow problem for new stochastic parameter fields. Approaches such as the Monte Carlo (MC) method and the iterative ensemble smoother (IES) can then be used on the TgCNN surrogate to perform uncertainty quantification and data assimilation, respectively, with high efficiency. The proposed paradigm is evaluated with dynamic reservoir flow problems. The results demonstrate that the TgCNN surrogate can be built with relatively few training data, and even in a label-free manner, while still approximating the relationship between model inputs and outputs with high accuracy. The TgCNN surrogate is then used for uncertainty quantification and data assimilation of reservoir flow problems, achieving satisfactory accuracy and higher efficiency than state-of-the-art approaches. The novelty of the work lies in the ability to incorporate physical laws and domain knowledge into the deep learning process and to achieve high accuracy with limited training data.
The trained surrogate can significantly improve the efficiency of uncertainty quantification and data assimilation processes. NOTE: This paper is published as part of the 2021 Reservoir Simulation Conference Special Issue.
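The theory-guided training described above can be sketched as a composite objective: a data-mismatch term plus a penalty on the residual of the governing equations. The function name and the weight `lam` are illustrative assumptions; the paper's actual loss also incorporates boundary/initial conditions and engineering controls, and the PDE residual would be computed from the network's outputs via automatic differentiation:

```python
import numpy as np

def theory_guided_loss(pred, obs, pde_residual, lam=1.0):
    """Sketch of a theory-guided objective (names hypothetical):
    fit the data while penalizing violations of the governing PDE."""
    # data term: mismatch against available observations
    data_loss = np.mean((pred - obs) ** 2)
    # theory term: mean squared residual of the governing equations,
    # evaluated at collocation points; zero residual = physics satisfied
    physics_loss = np.mean(pde_residual ** 2)
    return data_loss + lam * physics_loss
```

Setting the data term aside (no labels) leaves only the physics term, which is what makes the label-free training mentioned in the abstract possible.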


2020
Author(s): S Kashin, D Zavyalov, A Rusakov, V Khryashchev, A Lebedev

2020
Vol 2020 (10)
pp. 181-1-181-7
Author(s): Takahiro Kudo, Takanori Fujisawa, Takuro Yamaguchi, Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image-deblurring problem that assumes the point spread function (PSF) is known and spatially invariant. Recently, the convolutional neural network (CNN) has been used for non-blind deconvolution. Although CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created so as to preserve the details. The third is that we extend the images to minimize the effects of ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
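The border-extension step (the third key point) can be sketched with a simple frequency-domain deconvolution. The Wiener-style inverse filter below stands in for the paper's CNN and is only meant to show the mechanism: edge-replicated padding lets the wrap-around ringing of the circular FFT land in the padding, which is then cropped away. All names and the regularizer `eps` are illustrative assumptions:

```python
import numpy as np

def pad_and_deconvolve(blurred, psf, pad, eps=1e-3):
    """Sketch: extend borders before deconvolving, then crop, so
    ringing from the circular FFT falls outside the final image."""
    # edge replication pushes boundary discontinuities into the padding
    ext = np.pad(blurred, pad, mode="edge")
    # PSF spectrum at the padded size (PSF placed at the top-left corner)
    H = np.fft.fft2(psf, s=ext.shape)
    # Wiener-style inverse filter; eps keeps small |H| from blowing up
    X = np.fft.fft2(ext) * np.conj(H) / (np.abs(H) ** 2 + eps)
    restored = np.real(np.fft.ifft2(X))
    # crop the padding (and the ringing it absorbed) back off
    return restored[pad:-pad, pad:-pad]
```

In the paper the deconvolution itself is done by a CNN rather than an inverse filter, but the extend-then-crop treatment of the borders is the same idea.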

