Deep Learning for Handling Kernel/model Uncertainty in Image Deconvolution

Author(s):  
Yuesong Nan ◽  
Hui Ji

2021 ◽
Author(s):  
Michael Merz ◽  
Mario V. Wuthrich

2020 ◽  
Vol 24 ◽  
pp. 185-205
Author(s):  
Cristián Serpell ◽  
Ignacio A. Araya ◽  
Carlos Valle ◽  
Héctor Allende

In recent years, deep learning models have been developed to address probabilistic forecasting tasks, assuming an implicit stochastic process that relates past observed values to uncertain future values. These models are capable of capturing the inherent uncertainty of the underlying process, but they ignore the model uncertainty that comes from not having infinite data. This work proposes addressing the model uncertainty problem using Monte Carlo dropout, a variational approach that assigns distributions to the weights of a neural network instead of fixed values. This makes it easy to adapt common deep learning models currently in use so that they produce better probabilistic forecasting estimates, in the sense of properly accounting for uncertainty. The proposal is validated for prediction interval estimation on seven energy time series, using Mean Variance Estimation (MVE), a popular probabilistic model, as the deep model adapted with the technique.
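As a concrete illustration of the adaptation, here is a minimal PyTorch sketch (not the authors' code; all layer sizes, names and the dropout rate are illustrative) of an MVE-style network whose dropout layers are kept active at prediction time, so that repeated stochastic forward passes combine model uncertainty (the spread of the sampled means) with the data uncertainty the MVE head already predicts:

```python
import torch
import torch.nn as nn

class MCDropoutMVE(nn.Module):
    """Hypothetical MVE network: predicts a Gaussian mean and variance,
    with dropout layers that stay active at inference (MC-Dropout)."""
    def __init__(self, n_in, n_hidden=64, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(n_hidden, 1)
        self.logvar_head = nn.Linear(n_hidden, 1)  # log-variance for stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Negative log-likelihood of y under N(mean, exp(logvar)) -- the MVE loss.
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

@torch.no_grad()
def mc_predict(model, x, n_samples=100):
    """Repeated stochastic forward passes with dropout kept ON."""
    model.train()  # keeps nn.Dropout sampling masks at prediction time
    means, variances = [], []
    for _ in range(n_samples):
        m, lv = model(x)
        means.append(m)
        variances.append(lv.exp())
    means = torch.stack(means)             # (n_samples, batch, 1)
    epistemic = means.var(dim=0)           # model uncertainty
    aleatoric = torch.stack(variances).mean(dim=0)  # data uncertainty
    return means.mean(dim=0), epistemic + aleatoric
```

Under the Gaussian assumption, a nominal 95% prediction interval then follows as the predictive mean plus or minus 1.96 times the square root of the total predictive variance.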


Author(s):  
Yang Zhao ◽  
Wei Tian ◽  
Hong Cheng

Abstract. With the fast development of deep learning models in the field of autonomous driving, research on the uncertainty estimation of deep learning models has also flourished. Herein, a pyramid Bayesian deep learning method is proposed for the model uncertainty evaluation of semantic segmentation. Semantic segmentation is one of the most important perception problems in visual scene understanding, which is critical for autonomous driving. This study aims to optimize Bayesian SegNet for uncertainty evaluation. This paper first simplifies the network structure of Bayesian SegNet by reducing the number of MC-Dropout layers and then introduces the pyramid pooling module to improve the performance of Bayesian SegNet. mIoU and mPAvPU are used as evaluation metrics to test the proposed method on the public Cityscapes dataset. The experimental results show that the proposed method improves the sampling effect of the Bayesian SegNet, shortens the sampling time, and improves the network performance.
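A hedged sketch of the sampling step this builds on (generic MC-Dropout evaluation for any segmentation network, not the authors' exact Bayesian SegNet; the sample count is illustrative): with the dropout layers left active, the network is run several times per image, and the per-pixel disagreement between samples yields an uncertainty map:

```python
import torch

@torch.no_grad()
def mc_segmentation_uncertainty(model, image, n_samples=25):
    """Illustrative MC-Dropout evaluation for a segmentation network
    whose dropout layers stay active; not the paper's implementation."""
    model.train()  # keep MC-Dropout layers stochastic at test time
    probs = torch.stack([
        torch.softmax(model(image), dim=1)   # (B, n_classes, H, W)
        for _ in range(n_samples)
    ]).mean(dim=0)                           # mean class probabilities
    prediction = probs.argmax(dim=1)         # (B, H, W) semantic labels
    # Per-pixel predictive entropy: high where sampled networks disagree.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return prediction, entropy
```

mIoU then scores the averaged predictions, while metrics such as mPAvPU additionally reward pixels that are accurate when certain and uncertain when inaccurate.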


2021 ◽  
Author(s):  
Lei Xu ◽  
Nengcheng Chen ◽  
Chao Yang

Abstract. Precipitation forecasting is an important mission in weather science. In recent years, data-driven precipitation forecasting techniques have emerged to complement numerical prediction in tasks such as precipitation nowcasting, monthly precipitation projection and extreme precipitation event identification. In data-driven precipitation forecasting, the predictive uncertainty arises mainly from data and model uncertainties. Current deep learning forecasting methods can model the parametric uncertainty by random sampling from the parameters. However, the data uncertainty is usually ignored in the forecasting process, leaving the derivation of predictive uncertainty incomplete. In this study, the input data uncertainty, target data uncertainty and model uncertainty are jointly modeled in a deep learning precipitation forecasting framework to estimate the predictive uncertainty. Specifically, the data uncertainty is estimated a priori and the input uncertainty is propagated forward through the model weights according to the law of error propagation. The model uncertainty is considered by sampling from the parameters and is coupled with the input and target data uncertainties in the objective function during the training process. Finally, the predictive uncertainty is produced by propagating the input uncertainty and sampling the weights in the testing process. The experimental results indicate that the proposed joint uncertainty modeling and precipitation forecasting framework exhibits forecasting accuracy comparable to existing methods, while reducing the predictive uncertainty to a large extent relative to two existing joint uncertainty modeling approaches. The developed joint uncertainty modeling method is a general uncertainty estimation approach for data-driven forecasting applications.
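To make the law-of-error-propagation step concrete, here is a minimal sketch (names are illustrative; a single linear layer stands in for the network, and the full framework also samples the weights) of how an input variance can be pushed forward and combined with model and target uncertainties in a Gaussian negative log-likelihood objective:

```python
import torch

def propagate_through_linear(weight, bias, x_mean, x_var):
    """First-order error propagation of N(x_mean, diag(x_var)) through
    y = W x + b, assuming independent input errors -- an illustrative
    analogue of propagating input uncertainty through model weights."""
    y_mean = x_mean @ weight.T + bias
    y_var = x_var @ (weight ** 2).T  # Var[sum w_i x_i] = sum w_i^2 Var[x_i]
    return y_mean, y_var

def joint_nll(y_mean, y_var_input, y_var_model, target, target_var):
    """Sketch of a joint objective: input, model and target data
    uncertainties summed into one predictive variance and scored with
    a Gaussian negative log-likelihood (names are illustrative)."""
    total_var = y_var_input + y_var_model + target_var
    return 0.5 * (total_var.log() + (target - y_mean) ** 2 / total_var).mean()
```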


2021 ◽  
Author(s):  
Sadegh Sadeghi Tabas ◽  
Vidya Samadi

Deep Learning (DL) is becoming an increasingly important tool to produce accurate streamflow predictions across a wide range of spatial and temporal scales. However, classical DL networks do not incorporate uncertainty information and only return a point prediction. The Monte-Carlo Dropout (MC-Dropout) approach offers a mathematically grounded framework for reasoning about DL uncertainty; it was used here in the form of random diagonal matrices that introduce randomness into the streamflow prediction process. This study employed Recurrent Neural Networks (RNNs) to simulate daily streamflow records across a coastal plain drainage system, i.e., the Northeast Cape Fear River Basin, North Carolina, USA. We combined the MC-Dropout approach with the DL algorithm to make streamflow simulation more robust to potential overfitting by introducing random perturbations during the training period. Daily streamflow was calibrated over the 2000-2010 period and validated over 2010-2014. Our results provide unique and strong evidence that variational sampling via MC-Dropout acts as a dissimilarity detector. The MC-Dropout method successfully captured the predictive error after tuning a hyperparameter on a representative training dataset. This approach mitigates the problem of representing model uncertainty in DL simulations without sacrificing computational complexity or accuracy metrics, and can be used for all kinds of DL-based streamflow (time-series) model training with dropout.
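For illustration, a minimal PyTorch sketch (the architecture, sizes and dropout rate are ours, not the study's) of an RNN streamflow model where dropout acts as a random diagonal mask and empirical prediction intervals are read off the MC samples:

```python
import torch
import torch.nn as nn

class DropoutLSTM(nn.Module):
    """Illustrative streamflow model; not the authors' architecture."""
    def __init__(self, n_features, n_hidden=32, p_drop=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)  # acts as a random diagonal matrix
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):               # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))  # next-step streamflow

@torch.no_grad()
def prediction_interval(model, x, n_samples=200, alpha=0.05):
    """Empirical (1 - alpha) interval from stochastic forward passes."""
    model.train()  # dropout stays active at prediction time
    draws = torch.stack([model(x) for _ in range(n_samples)])
    lo = draws.quantile(alpha / 2, dim=0)
    hi = draws.quantile(1 - alpha / 2, dim=0)
    return lo, hi
```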


2021 ◽  
Author(s):  
Yann Haddad ◽  
Michaël Defferrard ◽  
Gionata Ghiggi

Ensemble predictions are essential to characterize the forecast uncertainty and the likelihood of an event occurring. Stochasticity in predictions comes from data and model uncertainty. In deep learning (DL), data uncertainty can be approached by training an ensemble of DL models on data subsets or by performing data augmentations (e.g., random or singular value decomposition (SVD) perturbations). Model uncertainty is typically addressed by training a DL model multiple times from different weight initializations (DeepEnsemble) or by training sub-networks by dropping weights (Dropout). Dropout is cheap but less effective, while DeepEnsemble is computationally expensive.

We propose instead to tackle model uncertainty with SWAG (Maddox et al., 2019), a method that learns stochastic weights, the sampling of which allows drawing hundreds of forecast realizations at a fraction of the cost required by DeepEnsemble. In the context of data-driven weather forecasting, we demonstrate that the SWAG ensemble i) has better deterministic skill than a single DL model trained in the usual way, and ii) approaches the deterministic and probabilistic skill of DeepEnsemble at a fraction of the cost. Finally, multiSWAG (SWAG applied on top of DeepEnsemble models) provides a trade-off between computational cost, model diversity, and performance.

We believe that the method we present will become a common tool to generate large ensembles at a fraction of the current cost. Additionally, the possibility of sampling DL models allows the design of data-driven/emulated stochastic model components and sub-grid parameterizations.

Reference

Maddox, W. J., Garipov, T., Izmailov, P., Vetrov, D., and Wilson, A. G., 2019: A Simple Baseline for Bayesian Uncertainty in Deep Learning. arXiv:1902.02476
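A minimal sketch of the diagonal variant of SWAG (our simplification for illustration; Maddox et al. (2019) additionally maintain a low-rank covariance term): running first and second moments of the weights are collected along the tail of the SGD trajectory, and ensemble members are drawn by sampling from the implied Gaussian:

```python
import copy
import torch

class SWAGDiagonal:
    """SWAG-diagonal sketch: Gaussian over weights fitted from SGD
    iterates; the full method adds a low-rank covariance term."""
    def __init__(self, model):
        self.model = model
        self.n = 0
        self.mean = [p.detach().clone() for p in model.parameters()]
        self.sq_mean = [p.detach().clone() ** 2 for p in model.parameters()]

    def collect(self):
        """Call periodically during the tail of training."""
        self.n += 1
        for m, s, p in zip(self.mean, self.sq_mean, self.model.parameters()):
            m += (p.detach() - m) / self.n        # running mean of weights
            s += (p.detach() ** 2 - s) / self.n   # running mean of squares

    def sample_model(self, scale=0.5):
        """Draw one ensemble member: theta ~ N(mean, scale * diag_var)."""
        sampled = copy.deepcopy(self.model)
        for p, m, s in zip(sampled.parameters(), self.mean, self.sq_mean):
            var = (s - m ** 2).clamp_min(1e-30)
            p.data = m + (scale * var).sqrt() * torch.randn_like(m)
        return sampled
```

Sampling a few hundred such models and averaging their forecasts yields an ensemble at roughly the cost of one training run plus inference, which is the saving relative to DeepEnsemble claimed above.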


Micromachines ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1558
Author(s):  
Mikhail Makarkin ◽  
Daniil Bratashov

In modern digital microscopy, deconvolution methods are widely used to eliminate a number of image defects and increase resolution. In this review, we have divided these methods into classical, deep learning-based, and optimization-based methods. Special attention is paid to deep learning as the most powerful and flexible modern approach. The review describes the major neural network architectures used for the deconvolution problem, such as convolutional and generative adversarial networks, autoencoders, various forms of recurrent networks, and the attention mechanism. We describe the difficulties in their application, such as the discrepancy between the standard loss functions and the visual content, and the heterogeneity of the images. Next, we examine how to deal with these difficulties by introducing new loss functions, multiscale learning, and prior knowledge of visual content. In conclusion, a review of promising directions and further development of deconvolution methods in microscopy is given.
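As a reference point for the classical family the review starts from, here is a minimal NumPy sketch of Wiener deconvolution (the regularization constant `k`, which stands in for the noise-to-signal power ratio, is an illustrative default):

```python
import numpy as np

def pad_to(psf, shape):
    """Zero-pad a centered PSF to the target image shape."""
    padded = np.zeros(shape)
    r, c = psf.shape
    r0 = (shape[0] - r) // 2
    c0 = (shape[1] - c) // 2
    padded[r0:r0 + r, c0:c0 + c] = psf
    return padded

def wiener_deconvolve(image, psf, k=1e-2):
    """Classical Wiener deconvolution of a blurred image given its PSF."""
    # Pad the PSF to the image size and shift its center to the origin
    # so the FFT treats it as a circular convolution kernel.
    H = np.fft.fft2(np.fft.ifftshift(pad_to(psf, image.shape)))
    G = np.fft.fft2(image)
    # Wiener filter: conj(H) / (|H|^2 + k), a regularized inverse of H.
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

The deep learning-based and optimization-based methods surveyed in the review can be read as learned or iterative replacements for this fixed closed-form filter.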

