Predictive Distribution: Recently Published Documents

TOTAL DOCUMENTS: 206 (five years: 68)
H-INDEX: 24 (five years: 4)

2022, Vol 14 (1). Author(s): Youngchun Kwon, Dongseon Lee, Youn-Suk Choi, Seokho Kang

Abstract: In this paper, we present a data-driven method for the uncertainty-aware prediction of chemical reaction yields. The reactants and products in a chemical reaction are represented as a set of molecular graphs. The predictive distribution of the yield is modeled with a graph neural network that directly processes a set of graphs with permutation invariance. Uncertainty-aware learning and inference are applied to the model to make accurate predictions and to evaluate their uncertainty. We demonstrate the effectiveness of the proposed method on benchmark datasets with various settings. Compared to the existing methods, the proposed method improves the prediction and uncertainty quantification performance in most settings.
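
A minimal sketch, assuming pre-computed per-graph embeddings, of the kind of permutation-invariant readout and uncertainty-aware (mean/variance) training described above; the class and parameter names (SetYieldRegressor, emb_dim) are illustrative and not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a permutation-invariant
# readout over per-graph embeddings with a heteroscedastic output head,
# trained with the Gaussian negative log-likelihood so that the model
# outputs a predictive distribution (mean, variance) for the yield.
import torch
import torch.nn as nn

class SetYieldRegressor(nn.Module):
    def __init__(self, emb_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))  # mean and log-variance

    def forward(self, graph_embs: torch.Tensor):
        # graph_embs: (n_graphs_in_reaction, emb_dim); summing makes the
        # readout invariant to the ordering of reactant/product graphs.
        pooled = self.phi(graph_embs).sum(dim=0)
        mean, log_var = self.rho(pooled)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of the target under N(mean, exp(log_var)).
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp())

# Toy usage: one reaction represented by 3 molecular-graph embeddings.
model = SetYieldRegressor()
embs = torch.randn(3, 64)            # stand-in for GNN graph embeddings
mean, log_var = model(embs)
loss = gaussian_nll(mean, log_var, torch.tensor(0.73))
loss.backward()
```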


Mathematics, 2021, Vol 9 (24), pp. 3211. Author(s): Patrizia Berti, Luca Pratelli, Pietro Rigo

Let $S$ be a Borel subset of a Polish space and $F$ the set of bounded Borel functions $f: S \to \mathbb{R}$. Let $a_n(\cdot) = P(X_{n+1} \in \cdot \mid X_1, \ldots, X_n)$ be the $n$-th predictive distribution corresponding to a sequence $(X_n)$ of $S$-valued random variables. If $(X_n)$ is conditionally identically distributed, there is a random probability measure $\mu$ on $S$ such that $\int f \, da_n \xrightarrow{a.s.} \int f \, d\mu$ for all $f \in F$. Define $D_n(f) = d_n\left(\int f \, da_n - \int f \, d\mu\right)$ for all $f \in F$, where $d_n > 0$ is a constant. In this note, it is shown that, under some conditions on $(X_n)$ and with a suitable choice of $d_n$, the finite-dimensional distributions of the process $D_n = \{D_n(f) : f \in F\}$ stably converge to a Gaussian kernel with a known covariance structure. In addition, $E[\varphi(D_n(f)) \mid X_1, \ldots, X_n]$ converges in probability for all $f \in F$ and $\varphi \in C_b(\mathbb{R})$.


Water, 2021, Vol 13 (23), pp. 3420. Author(s): Hristos Tyralis, Georgia Papacharalampous

Predictive uncertainty in hydrological modelling is quantified by using post-processing or Bayesian methods. The former are not straightforward, and the latter are not distribution-free (i.e., they require assumptions on the probability distribution of the hydrological model's output). To alleviate these limitations, in this work we propose calibrating the hydrological model with the quantile loss function. With this approach, one can directly simulate pre-specified quantiles of the predictive distribution of streamflow. As a proof of concept, we apply our method with three hydrological models to 511 river basins in the contiguous US. We illustrate the predictive quantiles and show how an honest assessment of the predictive performance of the hydrological models can be made using proper scoring rules. We believe that our method can help advance the field of hydrological uncertainty quantification.
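
For illustration, a short sketch of the quantile (pinball) loss that such a calibration minimises; the function signature and the toy data are assumptions, not the authors' code.

```python
# Minimal sketch (assumed interface): the quantile (pinball) loss used to
# calibrate a model so that its output tracks a pre-specified quantile of
# the streamflow predictive distribution.
import numpy as np

def quantile_loss(obs: np.ndarray, sim: np.ndarray, tau: float) -> float:
    """Mean pinball loss for quantile level tau in (0, 1)."""
    err = obs - sim
    return float(np.mean(np.maximum(tau * err, (tau - 1.0) * err)))

# Calibrating for tau = 0.9 means choosing model parameters that minimise
# quantile_loss(observed_flow, simulated_flow, tau=0.9); repeating for
# several tau values yields the predictive quantiles.
obs = np.array([1.2, 3.4, 0.8, 2.1])
sim = np.array([1.0, 3.0, 1.1, 2.0])
print(quantile_loss(obs, sim, tau=0.9))
```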


Author(s): Jonas Busk, Peter Bjørn Jørgensen, Arghya Bhowmik, Mikkel N. Schmidt, Ole Winther, ...

Abstract: Data-driven methods based on machine learning have the potential to accelerate computational analysis of atomic structures. In this context, reliable uncertainty estimates are important for assessing confidence in predictions and enabling decision making. However, machine learning models can produce badly calibrated uncertainty estimates, and it is therefore crucial to detect and handle uncertainty carefully. In this work we extend a message passing neural network designed specifically for predicting properties of molecules and materials with a calibrated probabilistic predictive distribution. The method presented in this paper differs from previous work by considering both aleatoric and epistemic uncertainty in a unified framework, and by recalibrating the predictive distribution on unseen data. Through computer experiments, we show that our approach results in accurate models for predicting molecular formation energies with well-calibrated uncertainty in and out of the training data distribution on two public molecular benchmark datasets, QM9 and PC9. The proposed method provides a general framework for training and evaluating neural network ensemble models that are able to produce accurate predictions of properties of molecules with well-calibrated uncertainty estimates.
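
A hedged sketch of the standard way an ensemble's mean/variance outputs can be combined into one predictive distribution, separating aleatoric and epistemic variance; it is illustrative only and omits the recalibration step mentioned in the abstract.

```python
# Illustrative sketch (not the paper's implementation): combining an
# ensemble of per-member (mean, variance) predictions into a single
# predictive distribution, separating aleatoric and epistemic components.
import numpy as np

def combine_ensemble(means: np.ndarray, variances: np.ndarray):
    """means, variances: shape (n_members,) for one test molecule."""
    mu = means.mean()
    aleatoric = variances.mean()       # average predicted noise variance
    epistemic = means.var()            # disagreement between members
    return mu, aleatoric + epistemic   # total predictive variance

means = np.array([-0.52, -0.49, -0.55, -0.50, -0.53])
variances = np.array([0.010, 0.012, 0.009, 0.011, 0.010])
mu, var = combine_ensemble(means, variances)
print(f"predicted energy {mu:.3f} ± {np.sqrt(var):.3f}")
```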


Mathematics, 2021, Vol 9 (22), pp. 2921. Author(s): Stefano Cabras

This work proposes a semi-parametric approach to estimate the evolution of COVID-19 (SARS-CoV-2) in Spain. Considering the sequences of 14-day cumulative incidence of all Spanish regions, it combines modern Deep Learning (DL) techniques for analyzing sequences with the usual Bayesian Poisson-Gamma model for counts. The DL model provides a suitable description of the observed time series of counts, but it cannot give a reliable uncertainty quantification. In the proposed modelling approach, the DL predictions play the role of an expert elicitation of the expected number of counts and of its reliability. Finally, the posterior predictive distribution of counts is obtained in a standard Bayesian analysis using the well-known Poisson-Gamma model. The model allows one to predict the future evolution of the sequences in all regions or to estimate the consequences of hypothetical scenarios.
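
As a concrete illustration of the Bayesian layer mentioned above, a sketch of the standard Poisson-Gamma update and its Negative Binomial posterior predictive; how the hyperparameters a and b would be set from the DL output is an assumption here (expected count a/b with pseudo-sample size b).

```python
# Hedged sketch of the conjugate Poisson-Gamma step: a Gamma(a, b) prior on
# the Poisson rate, updated with observed counts, gives a Negative Binomial
# posterior predictive for the next count.
import numpy as np
from scipy import stats

def posterior_predictive(counts, a: float, b: float):
    """Gamma(a, b) prior (rate parameterisation); returns the posterior
    predictive distribution of the next count (Negative Binomial)."""
    a_post = a + np.sum(counts)
    b_post = b + len(counts)
    # scipy parameterisation: n = a_post, p = b_post / (b_post + 1)
    return stats.nbinom(a_post, b_post / (b_post + 1.0))

# a/b = 140 plays the role of an elicited expected count (assumed value).
pred = posterior_predictive(counts=[120, 135, 150], a=140.0, b=1.0)
print(pred.mean(), pred.interval(0.95))  # point forecast and 95% interval
```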


Mathematics, 2021, Vol 9 (22), pp. 2891. Author(s): Federico Camerlenghi, Stefano Favaro

In the 1920s, the English philosopher W.E. Johnson introduced a characterization of the symmetric Dirichlet prior distribution in terms of its predictive distribution. This is typically referred to as Johnson’s “sufficientness” postulate, and it has been the subject of many contributions in Bayesian statistics, leading to predictive characterizations of infinite-dimensional generalizations of the Dirichlet distribution, i.e., species-sampling models. In this paper, we review “sufficientness” postulates for species-sampling models, and then investigate analogous predictive characterizations for the more general feature-sampling models. In particular, we present a “sufficientness” postulate for a class of feature-sampling models referred to as Scaled Processes (SPs), and then discuss analogous characterizations in the general setup of feature-sampling models.
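
For context, the classical form of Johnson's postulate and the resulting symmetric-Dirichlet predictive rule, in the finite-dimensional case with k categories (a standard textbook statement, not taken from the paper):

```latex
% Johnson's ``sufficientness'' postulate: the predictive probability of
% category j depends on the sample only through the count n_j and the
% sample size n,
\[
  P(X_{n+1} = j \mid X_1, \dots, X_n) = f(n_j, n),
  \qquad j = 1, \dots, k .
\]
% For k >= 3 (and a non-degenerate predictive) this forces a symmetric
% Dirichlet prior with parameter alpha > 0, whose predictive rule is
\[
  P(X_{n+1} = j \mid X_1, \dots, X_n) = \frac{n_j + \alpha}{n + k\alpha}.
\]
```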


2021, Vol 8 (24), pp. 297-301. Author(s): Jonas Brehmer

Proper scoring rules enable decision-theoretically principled comparisons of probabilistic forecasts. New scoring rules can be constructed by identifying the predictive distribution with an element of a parametric family and then applying a known scoring rule. We introduce a condition which ensures propriety in this construction and thereby obtain novel proper scoring rules.
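
A sketch of the mechanics of this construction for one assumed choice of family and rule (a Gaussian fitted by mean and variance, scored with the logarithmic score); whether the resulting rule is proper is precisely what the paper's condition addresses, and this snippet does not establish it.

```python
# Illustrative sketch (assumed family and scoring rule, not the paper's
# examples): identify a predictive distribution with a member of a
# parametric family and then apply a known scoring rule to that member.
import numpy as np
from scipy import stats

def score_via_gaussian_family(forecast_sample: np.ndarray, y: float) -> float:
    """Map the forecast to N(mean, sd) and return its log score at y
    (negatively oriented: smaller is better)."""
    mu, sigma = forecast_sample.mean(), forecast_sample.std(ddof=1)
    return -stats.norm(mu, sigma).logpdf(y)

rng = np.random.default_rng(0)
forecast = rng.gamma(shape=2.0, scale=1.5, size=1000)  # a predictive sample
print(score_via_gaussian_family(forecast, y=2.7))
```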


Forecasting, 2021, Vol 3 (4), pp. 729-740. Author(s): Douglas E. Johnston

In this paper, we provide a novel Bayesian solution to forecasting extreme quantile thresholds that are dynamic in nature. This is an important problem in many fields of study including climatology, structural engineering, and finance. We utilize results from extreme value theory to provide the backdrop for developing a state-space model for the unknown parameters of the observed time series. To solve for the requisite probability densities, we derive a Rao-Blackwellized particle filter and, most importantly, a computationally efficient, recursive solution. Using the filter, the predictive distribution of future observations, conditioned on the past data, is forecast at each time step and used to compute extreme quantile levels. We illustrate the improvement in forecasting ability, versus traditional methods, using simulations and also apply our technique to financial market data.
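
For orientation, the standard peaks-over-threshold quantile formula from extreme value theory, the kind of quantity such a predictive distribution is used to evaluate; the fixed parameter values here are purely illustrative, whereas in the paper the parameters evolve through the state-space model.

```python
# Hedged illustration (not the paper's filter): extreme quantile implied by
# a generalized Pareto tail above a threshold u, with scale sigma, shape xi,
# and exceedance probability zeta_u (all assumed known in this sketch).
import numpy as np

def extreme_quantile(p: float, u: float, sigma: float, xi: float, zeta_u: float) -> float:
    """Quantile at exceedance probability p (with p < zeta_u)."""
    if abs(xi) < 1e-12:  # xi -> 0 limit: exponential tail
        return u + sigma * np.log(zeta_u / p)
    return u + (sigma / xi) * ((p / zeta_u) ** (-xi) - 1.0)

# e.g. daily losses, threshold u = 2%, 1-in-1000-day level:
print(extreme_quantile(p=0.001, u=0.02, sigma=0.01, xi=0.2, zeta_u=0.05))
```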

