Estimating Certain Integral Probability Metrics (IPMs) Is as Hard as Estimating under the IPMs

2020
Author(s):  
Tengyuan Liang
Author(s):  
Ievgen Redko,
Amaury Habrard,
Emilie Morvant,
Marc Sebban,
Younès Bennani

Author(s):  
Paul Gardner,
Charles Lord,
Robert J. Barthorpe

Abstract: Probabilistic modeling methods are increasingly employed in engineering applications. These approaches make inferences about the distribution of output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples of observational data. An ideal validation metric is one that intuitively conveys the key differences between the simulator output distribution and the observational distribution, such as a statistical distance/divergence. Within the literature, only a small set of statistical distances/divergences has been used for this task, often selected from user experience and without reference to the wider variety available. This paper therefore offers a unifying framework of statistical distances/divergences, categorizing those already implemented in the literature, clarifying their relative benefits, and proposing new candidate validation metrics. Two families of measures for quantifying differences between distributions, which together encompass the existing statistical distances/divergences in the literature, are analyzed: f-divergences and integral probability metrics (IPMs). Specific measures from these families are highlighted and assessed as current and new validation metrics, with a discussion of their merits in determining simulator adequacy, identifying validation metrics with greater sensitivity to differences across the full range of probability mass.
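
A concrete illustration of such sample-based validation metrics (a minimal sketch on assumed Gaussian toy data, not code from any of the papers listed here): the empirical Kolmogorov metric and the Gaussian-kernel maximum mean discrepancy (MMD) are both IPMs that can be computed directly from simulator and observational samples.

```python
# Two sample-based validation metrics, both instances of IPMs:
# the Kolmogorov metric (generating class: indicators of half-lines)
# and the Gaussian-kernel MMD (generating class: unit ball of an RKHS).
import numpy as np

def kolmogorov_metric(x, y):
    """Sup-distance between the two empirical CDFs (the KS statistic)."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def mmd_rbf(x, y, sigma=1.0):
    """Biased empirical MMD with a Gaussian kernel, for 1-D samples."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
    return np.sqrt(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())

rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, 500)  # simulator output samples (toy data)
obs = rng.normal(0.3, 1.2, 400)  # observational samples (toy data)
print(kolmogorov_metric(sim, obs), mmd_rbf(sim, obs))
```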


1997
Vol 29 (2)
pp. 429-443
Author(s):  
Alfred Müller

We consider probability metrics of the following type: for a class $\mathcal{F}$ of functions and probability measures $P$, $Q$ we define $d_{\mathcal{F}}(P,Q) := \sup_{f \in \mathcal{F}} \left| \int f \,\mathrm{d}P - \int f \,\mathrm{d}Q \right|$. A unified study of such integral probability metrics is given. We characterize the maximal class of functions that generates such a metric. Further, we show how some interesting properties of these probability metrics arise directly from conditions on the generating class of functions. The results are illustrated by several examples, including the Kolmogorov metric, the Dudley metric and the stop-loss metric.
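
For orientation, the named examples arise from standard choices of the generating class $\mathcal{F}$ (well-known identifications, stated here rather than quoted from the abstract):

```latex
d_{\mathcal{F}}(P,Q) = \sup_{f \in \mathcal{F}} \Big| \int f \,\mathrm{d}P - \int f \,\mathrm{d}Q \Big|, \qquad
\begin{aligned}
\mathcal{F} &= \{\mathbf{1}_{(-\infty,t]} : t \in \mathbb{R}\}  &&\Rightarrow \text{Kolmogorov metric},\\
\mathcal{F} &= \{f : \|f\|_\infty + \mathrm{Lip}(f) \le 1\}     &&\Rightarrow \text{Dudley metric},\\
\mathcal{F} &= \{x \mapsto (x - t)_+ : t \in \mathbb{R}\}       &&\Rightarrow \text{stop-loss metric}.
\end{aligned}
```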


Author(s):  
Paul Gardner,
Charles Lord,
Robert J. Barthorpe

Probabilistic modelling methods are increasingly employed in engineering applications. These approaches make inferences about the distribution, or summary statistical moments, of output quantities. A challenge in applying probabilistic models is validating the output distributions. An ideal validation metric is one that intuitively conveys the key divergences between the output and validation distributions. Furthermore, it should be interpretable across different problems so that the appropriate statistical method can be selected in an informed way. In this paper, two families of measures for quantifying differences between distributions are compared: f-divergences and integral probability metrics (IPMs). These measures are discussed and evaluated as validation metrics, with comments on ease of computation, interpretability, and the quantity of information provided.
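
To complement the IPM sketch above, a histogram-based estimate of the Kullback-Leibler divergence, the f-divergence generated by f(t) = t log t, illustrates the other family (a minimal sketch on assumed toy data, not code from the paper):

```python
# Histogram-based estimate of the f-divergence KL(P || Q), f(t) = t*log(t).
# Unlike an IPM, it is driven by density ratios, so it reacts strongly
# where the validation data has mass but the model assigns almost none.
import numpy as np

def kl_histogram(p_samples, q_samples, bins=30, eps=1e-12):
    """Estimate KL(P || Q) from samples via a shared histogram."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0  # 0 * log(0) = 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

rng = np.random.default_rng(1)
print(kl_histogram(rng.normal(0.0, 1.0, 1000), rng.normal(0.5, 1.0, 1000)))
```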


2012
Vol 6 (0)
pp. 1550-1599
Author(s):
Bharath K. Sriperumbudur,
Kenji Fukumizu,
Arthur Gretton,
Bernhard Schölkopf,
Gert R. G. Lanckriet

Author(s):  
M. Hoffhues,
W. Römisch,
T. M. Surowiec

Abstract: The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
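
The qualitative phenomenon can be illustrated with a toy sample-average approximation (a minimal sketch of the general setting, not the paper's PDE-constrained example; the scalar objective below is an assumption for illustration): as the sample size grows, the empirical measure approaches the true one, and the optimal value and minimizer stabilize.

```python
# Sample-average approximation of  min_x E[(x - xi)^2],  xi ~ N(1, 4).
# The true optimum is x* = E[xi] = 1 with value Var(xi) = 4; the SAA
# minimizer is the sample mean. Stabilization of value and solution in
# the sample size is the behavior such stability results quantify.
import numpy as np

def saa_solve(xi):
    """Minimize the empirical objective; closed form: the sample mean."""
    x_star = xi.mean()
    return x_star, np.mean((x_star - xi) ** 2)

rng = np.random.default_rng(2)
for n in [10, 100, 1_000, 10_000]:
    xi = rng.normal(loc=1.0, scale=2.0, size=n)
    x_star, value = saa_solve(xi)
    print(f"n={n:6d}  x*={x_star:+.4f}  optimal value={value:.4f}")
```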

