A Unifying Framework for Probabilistic Validation Metrics

Author(s):  
Paul Gardner ◽  
Charles Lord ◽  
Robert J. Barthorpe

Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution for output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples from observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/divergences. Within the literature, only a small set of statistical distances/divergences have been utilized for this task, often selected based on user experience and without reference to the wider variety available. As a result, this paper offers a unifying framework of statistical distances/divergences, categorizing those implemented within the literature, providing a greater understanding of their benefits, and offering new potential measures as validation metrics. In this paper, two families of measures for quantifying differences between distributions, which encompass the existing statistical distances/divergences within the literature, are analyzed: f-divergences and integral probability metrics (IPMs). Specific measures from these families are highlighted, providing an assessment of current and new validation metrics, with a discussion of their merits in determining simulator adequacy, offering validation metrics with greater sensitivity in quantifying differences across the range of probability mass.
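As a hedged illustration of the two families discussed in the abstract (not code from the paper), the sketch below contrasts a sample-based f-divergence estimate (KL divergence computed from binned histograms) with two IPMs (the one-dimensional Wasserstein distance and the Kolmogorov distance); the Gaussian samples, sample sizes and bin count are arbitrary choices for the example.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance, ks_2samp

rng = np.random.default_rng(0)

# Hypothetical samples: simulator output vs. observational data (illustrative only).
sim_samples = rng.normal(loc=0.0, scale=1.0, size=5000)
obs_samples = rng.normal(loc=0.5, scale=1.2, size=500)

# f-divergence example: KL divergence estimated from binned histograms.
# Binning is a crude density estimate; a small constant avoids division by zero.
bins = np.histogram_bin_edges(np.concatenate([sim_samples, obs_samples]), bins=50)
p_obs, _ = np.histogram(obs_samples, bins=bins, density=True)
q_sim, _ = np.histogram(sim_samples, bins=bins, density=True)
eps = 1e-12
kl = entropy(p_obs + eps, q_sim + eps)  # KL(observed || simulator)

# IPM examples: Wasserstein-1 and Kolmogorov (sup-CDF) distances,
# computed directly from the empirical samples without density estimation.
w1 = wasserstein_distance(obs_samples, sim_samples)
kolmogorov = ks_2samp(obs_samples, sim_samples).statistic

print(f"KL divergence (binned): {kl:.3f}")
print(f"Wasserstein-1 distance: {w1:.3f}")
print(f"Kolmogorov distance:    {kolmogorov:.3f}")
```

One practical contrast this makes visible: the f-divergence estimate requires an intermediate density estimate (here a histogram), whereas the IPMs operate directly on the empirical samples.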

Author(s):  
Paul Gardner ◽  
Charles Lord ◽  
Robert J. Barthorpe

Probabilistic modelling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution, or summary statistical moments, for output quantities. A challenge in applying probabilistic models is validating output distributions. An ideal validation metric is one that intuitively provides information on key divergences between the output and validation distributions. Furthermore, it should be interpretable across different problems in order to informatively select the appropriate statistical method. In this paper, two families of measures for quantifying differences between distributions are compared: f-divergence and integral probability metrics (IPMs). Discussions and evaluation of these measures as validation metrics are performed with comments on ease of computation, interpretability and quantity of information provided.
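As a further hedged sketch (not from the paper), the maximum mean discrepancy (MMD), an IPM generated by the unit ball of a reproducing kernel Hilbert space, has a simple unbiased estimator computable directly from samples; the Gaussian kernel and its bandwidth below are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between 1-D sample vectors x and y."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd_squared_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of the squared MMD between sample sets x and y."""
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    n, m = len(x), len(y)
    # Diagonal terms are excluded to make the estimator unbiased.
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()

rng = np.random.default_rng(1)
sim = rng.normal(0.0, 1.0, size=1000)  # hypothetical simulator output samples
obs = rng.normal(0.3, 1.0, size=200)   # hypothetical observational samples
print(f"Unbiased MMD^2 estimate: {mmd_squared_unbiased(sim, obs):.4f}")
```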


Author(s):  
Ievgen Redko ◽  
Amaury Habrard ◽  
Emilie Morvant ◽  
Marc Sebban ◽  
Younès Bennani

Author(s):  
Bharath K. Sriperumbudur ◽  
Kenji Fukumizu ◽  
Arthur Gretton ◽  
Bernhard Schölkopf ◽  
Gert R. G. Lanckriet

2021 ◽  
Author(s):  
S. Bidier ◽  
U. Khristenko ◽  
R. Tosi ◽  
R. Rossi ◽  
C. Soriano

This deliverable report focuses on the main Uncertainty Quantification (UQ) results obtained within the EXAscale Quantification of Uncertainties for Technology and Science Simulation (ExaQUte) project. Details on the turbulent wind inlet generator, which supplies random, yet steady, wind velocity boundary conditions at run-time, are given in section 2. This enables the developed UQ workflow, whose results are presented for the Commonwealth Advisory Aeronautical Research Council (CAARC) benchmark, as described in Deliverable 7.1. Finally, the completed UQ workflow and its results are evaluated from an application-driven wind-engineering point of view: the significance of the developed methods and of the obtained results is discussed, and their applicability in practical wind-engineering applications is tested through a complete test run of the UQ workflow.
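As a purely generic, hedged illustration of the forward-propagation idea behind such a UQ workflow (not the ExaQUte implementation; the toy response function and the Weibull inlet-speed distribution below are invented stand-ins for the CFD solver and the wind inlet generator), one might sample uncertain inlet wind speeds and summarize the induced distribution of an output quantity:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_structural_response(mean_wind_speed):
    """Placeholder for the expensive CFD/structural simulator (illustrative only)."""
    return 0.5 * 1.25 * mean_wind_speed ** 2  # a stagnation-pressure-like quantity

# Monte Carlo forward propagation: sample uncertain inlet wind speeds,
# evaluate the (toy) model, and summarize the output distribution.
n_samples = 10_000
wind_speeds = 20.0 * rng.weibull(2.0, size=n_samples)  # hypothetical inlet uncertainty
outputs = np.array([toy_structural_response(u) for u in wind_speeds])

mean = outputs.mean()
std_err = outputs.std(ddof=1) / np.sqrt(n_samples)
print(f"Estimated mean output: {mean:.1f} +/- {1.96 * std_err:.1f} (95% CI)")
print(f"Estimated 95th percentile: {np.percentile(outputs, 95):.1f}")
```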


2016 ◽  
Vol 117 (5/6) ◽  
pp. 321-328 ◽  
Author(s):  
Gricel Dominguez

Purpose: The purpose of this paper is to propose a method for the assessment of library space use and user experience by combining seating studies, surveys and observational data.
Design/methodology/approach: Seating usage studies (called seating sweeps), technology-assisted face-to-face surveys and observational data were used to assess library space usage and identify user behaviors.
Findings: Results from the study revealed higher library use than expected and provided insight into user behaviors and patterns.
Practical implications: The methods and study described aid in raising awareness of user experience within library spaces and provide valuable data for space redesign efforts.
Originality/value: The study builds upon methods described by Linn (2013) and combines traditional user experience methodologies to gain insight into library space use and user needs.


2019 ◽  
Vol 188 ◽  
pp. 106237 ◽  
Author(s):  
Zhaobin Li ◽  
Ganbo Deng ◽  
Patrick Queutey ◽  
Benjamin Bouscasse ◽  
Guillaume Ducrozet ◽  
...  

2014 ◽  
Vol 13 ◽  
pp. CIN.S20806 ◽  
Author(s):  
Kellie J. Archer ◽  
Jiayi Hou ◽  
Qing Zhou ◽  
Kyle Ferber ◽  
John G. Layne ◽  
...  

High-throughput genomic assays are performed using tissue samples with the goal of classifying the samples as normal < pre-malignant < malignant, or by stage of cancer, using a small set of molecular features. In such cases, molecular features monotonically associated with the ordinal response may be important to disease development; that is, an increase in the phenotypic level (stage of cancer) may be mechanistically linked through a monotonic association with gene expression or methylation levels. Though traditional ordinal response modeling methods exist, they assume independence among the predictor variables and require the number of samples (n) to exceed the number of covariates (P) included in the model. In this paper, we describe our ordinalgmifs R package, available from the Comprehensive R Archive Network, which can fit a variety of ordinal response models when the number of predictors (P) exceeds the sample size (n). R code illustrating usage is also provided.
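ordinalgmifs itself is an R package; as a loose Python analogue (a penalized ordinal logistic regression rather than the gmifs algorithm, relying on the third-party mord package), the hedged sketch below fits an ordinal response on simulated data with more predictors than samples:

```python
import numpy as np
import mord  # third-party ordinal regression package (pip install mord); assumed API

rng = np.random.default_rng(0)

# Simulated high-dimensional setting: P = 200 predictors, n = 60 samples,
# with only the first two features monotonically driving the ordinal stage.
n, p = 60, 200
X = rng.normal(size=(n, p))
latent = X[:, 0] + 0.8 * X[:, 1] + 0.5 * rng.normal(size=n)
y = np.digitize(latent, bins=np.quantile(latent, [0.33, 0.66]))  # stages 0 < 1 < 2

# Penalized all-threshold ordinal logistic model; the regularization (alpha)
# is what makes fitting feasible when P exceeds n.
model = mord.LogisticAT(alpha=5.0)
model.fit(X, y)

print("In-sample accuracy:", np.mean(model.predict(X) == y))
print("Indices of largest |coefficients|:", np.argsort(np.abs(model.coef_))[::-1][:5])
```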


1997 ◽  
Vol 29 (2) ◽  
pp. 429-443 ◽  
Author(s):  
Alfred Müller

We consider probability metrics of the following type: for a class $\mathcal{F}$ of functions and probability measures $P$, $Q$ we define $$d_{\mathcal{F}}(P, Q) := \sup_{f \in \mathcal{F}} \left| \int f \, \mathrm{d}P - \int f \, \mathrm{d}Q \right|.$$ A unified study of such integral probability metrics is given. We characterize the maximal class of functions that generates such a metric. Further, we show how some interesting properties of these probability metrics arise directly from conditions on the generating class of functions. The results are illustrated by several examples, including the Kolmogorov metric, the Dudley metric and the stop-loss metric.
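For concreteness, the three named examples correspond to the following standard generator classes (a summary from the general IPM literature, not text quoted from the paper):

$$\begin{aligned}
\text{Kolmogorov metric:} \quad & \mathcal{F} = \{\, \mathbf{1}_{(-\infty, t]} : t \in \mathbb{R} \,\}, \\
\text{Dudley (bounded Lipschitz) metric:} \quad & \mathcal{F} = \{\, f : \|f\|_\infty + \|f\|_{\mathrm{Lip}} \le 1 \,\}, \\
\text{Stop-loss metric:} \quad & \mathcal{F} = \{\, x \mapsto \max(x - t,\, 0) : t \in \mathbb{R} \,\}.
\end{aligned}$$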

