An Evaluation of Validation Metrics for Probabilistic Model Outputs

Author(s):  
Paul Gardner ◽  
Charles Lord ◽  
Robert J. Barthorpe

Probabilistic modelling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution, or summary statistical moments, of output quantities. A challenge in applying probabilistic models is validating the output distributions. An ideal validation metric is one that intuitively provides information on key divergences between the output and validation distributions. Furthermore, it should be interpretable across different problems in order to inform the selection of an appropriate statistical method. In this paper, two families of measures for quantifying differences between distributions are compared: f-divergences and integral probability metrics (IPMs). These measures are discussed and evaluated as validation metrics, with comments on ease of computation, interpretability and the quantity of information provided.
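As a concrete illustration of the f-divergence family, the sketch below estimates the Kullback-Leibler divergence between model output samples and validation samples. This is a minimal sketch, not the authors' implementation: the kernel density estimates, grid resolution and example distributions are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): estimating KL(p_data || p_model),
# a member of the f-divergence family, from samples via kernel density
# estimates evaluated on a shared grid.
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(validation_samples, model_samples, n_grid=512):
    p = gaussian_kde(validation_samples)      # density of observed data
    q = gaussian_kde(model_samples)           # density of model output
    lo = min(validation_samples.min(), model_samples.min())
    hi = max(validation_samples.max(), model_samples.max())
    x = np.linspace(lo, hi, n_grid)
    dx = x[1] - x[0]
    px = p(x) + 1e-12                         # avoid log(0) and division by 0
    qx = q(x) + 1e-12
    px /= px.sum() * dx                       # renormalise on the grid
    qx /= qx.sum() * dx
    return float(np.sum(px * np.log(px / qx)) * dx)

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, 1000)              # validation data (illustrative)
sim = rng.normal(0.2, 1.2, 1000)              # simulator output (illustrative)
print(kl_divergence(obs, sim))
```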

Author(s):  
Paul Gardner ◽  
Charles Lord ◽  
Robert J. Barthorpe

Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution of output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples from observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/divergences. Within the literature, only a small set of statistical distances/divergences has been utilized for this task, often selected based on user experience and without reference to the wider variety available. As a result, this paper offers a unifying framework of statistical distances/divergences, categorizing those implemented within the literature, providing a greater understanding of their benefits, and offering new potential measures as validation metrics. Two families of measures for quantifying differences between distributions, which encompass the existing statistical distances/divergences within the literature, are analyzed: f-divergences and integral probability metrics (IPMs). Specific measures from these families are highlighted, providing an assessment of current and new validation metrics, with a discussion of their merits in determining simulator adequacy and offering validation metrics with greater sensitivity in quantifying differences across the range of probability mass.
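For the second family, the sketch below computes two IPMs from samples: the 1-Wasserstein distance, via SciPy's empirical estimator, and the maximum mean discrepancy (MMD) with a Gaussian kernel. The kernel bandwidth and example distributions are assumptions, not values from the paper.

```python
# Hedged sketch: two integral probability metrics computed from samples.
import numpy as np
from scipy.stats import wasserstein_distance

def mmd_rbf(x, y, sigma=1.0):
    """Biased empirical MMD^2 between 1-D samples, Gaussian kernel."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(1)
sim = rng.normal(0.0, 1.0, 500)        # simulator output samples (illustrative)
obs = rng.normal(0.3, 1.2, 500)        # observational samples (illustrative)
print(wasserstein_distance(sim, obs))  # 1-Wasserstein distance (an IPM)
print(mmd_rbf(sim, obs))               # MMD^2 (kernel IPM)
```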


Author(s):  
Philippe Cambos ◽  
Guy Parmentier

During a ship’s life, operating conditions may change: a tanker may be converted into an FPSO, and flag requirements may be modified. Generally these modifications have little impact on the existing structure, since flag requirements are only rarely applied retroactively. Nevertheless, in some cases a change of operating conditions may have considerable consequences, in the worst cases making any re-engineering impossible. For example, converting a common tanker built with plain steel of grade A into an offshore floating unit able to operate in cold regions may require a change of material grade to grade B. It is obviously unreasonable to replace all of the material solely on the basis of the material certificates. Steels used by shipyards have to fulfill classification society requirements on mechanical strength, and shipbuilding generally represents only a small part of a steelmaker’s production. For this reason steelmakers are reluctant to produce steels with mechanical properties corresponding exactly to the required minima; they generally deliver steels already in stock, with higher mechanical characteristics than required. Advantage can be taken of this common practice. In order to demonstrate that the material fulfills the requirements of grade B, it was decided to adopt a statistical approach. At this stage there are two main issues: the first is that evidence must be provided that the actual Charpy V characteristics of the material fulfill the requirements of grade B; the second is that this evidence must be provided with a minimum of testing. To assess this assumption, a random check was carried out. Different probabilistic models were tested in order to compare common approaches with probabilistic models based on physical considerations. The paper recalls the main assumptions of the probabilistic models used to estimate the minimum Charpy value, examines the behavior of the empirical sample, and fits the parameters of probability laws to the empirical distribution; since parameter estimation from a finite number of specimens is not perfectly accurate, the uncertainty in the parameter determination is taken into account through confidence limits. Depending on the selected probabilistic model, the minimum value either corresponds to an acceptable probability of failure, taking into account the target confidence level, or is independent of any acceptable probability of failure and is defined at the same confidence level. It is concluded that a random check, with the data treated as Charpy V test results distributed according to a Weibull probability law of the minimum, provides evidence, at a sufficient confidence level, that the steel used in the considered structure fulfills the requirements of the new operating conditions.
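A sketch of the kind of data treatment the abstract describes is given below: Charpy V-notch energies are treated as following a Weibull law of the minimum, and a bootstrap supplies a one-sided lower confidence limit on the low quantile taken as the minimum value. The energies, the acceptable failure probability and the confidence level are illustrative assumptions, not the paper's values or its exact procedure.

```python
# Hedged sketch: Weibull-of-the-minimum fit to Charpy V-notch energies
# with a bootstrapped one-sided lower confidence limit on a low quantile.
import numpy as np
from scipy.stats import weibull_min

def min_charpy(energies, p_fail=0.05, confidence=0.95, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    quantiles = []
    for _ in range(n_boot):
        resample = rng.choice(energies, size=len(energies), replace=True)
        c, loc, scale = weibull_min.fit(resample, floc=0.0)  # 2-parameter fit
        quantiles.append(weibull_min.ppf(p_fail, c, loc, scale))
    # value the p_fail quantile exceeds with the target confidence
    return np.quantile(quantiles, 1.0 - confidence)

# illustrative Charpy energies in joules, not data from the paper
energies = np.array([34.0, 41.0, 38.0, 47.0, 52.0, 36.0, 44.0, 49.0, 40.0, 45.0])
print(min_charpy(energies))
```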


Author(s):  
Ievgen Redko ◽  
Amaury Habrard ◽  
Emilie Morvant ◽  
Marc Sebban ◽  
Younès Bennani

2007 ◽  
Vol 97 (3) ◽  
pp. 2083-2093 ◽  
Author(s):  
Paul W. German ◽  
Howard L. Fields

Animals return to rewarded locations. An example of this is conditioned place preference (CPP), which is widely used in studies of drug reward. Although CPP is expressed as increased time spent in a previously rewarded location, the behavioral strategy underlying this change is unknown. We continuously monitored rats (n = 22) in a three-room in-line configuration, before and after morphine conditioning in one end room. Although sequential room visit durations were variable, their probability distribution was exponential, indicating that the processes controlling visit durations can be modeled by instantaneous room exit probabilities. Further analysis of room transitions and computer simulations of probabilistic models revealed that the exploratory bias toward the morphine room is best explained by an increase in the probability of a subset of rapid, direct transitions from the saline-paired to the morphine-paired room via the central room. This finding sharply delineates and constrains possible neural mechanisms for a class of self-initiated, goal-directed behaviors toward previously rewarded locations.
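The mechanism the authors describe can be mimicked with a toy simulation: a constant per-tick exit probability for each room makes visit durations geometric (exponential in the continuous limit), and biasing the direction of center-room exits reproduces an increased share of direct saline-to-morphine transitions. All parameters below are assumed for illustration; they are not the fitted values from the study.

```python
# Toy three-room walk with instantaneous (per-tick) exit probabilities.
import numpy as np

rng = np.random.default_rng(2)
EXIT_P = {"saline": 0.05, "center": 0.20, "morphine": 0.03}  # per-tick exit prob.
P_TOWARD_MORPHINE = 0.7   # bias of center-room exits toward the morphine room

def simulate(ticks=100_000):
    room = "center"
    time_in = {"saline": 0, "center": 0, "morphine": 0}
    for _ in range(ticks):
        time_in[room] += 1
        if rng.random() < EXIT_P[room]:
            if room == "center":
                room = "morphine" if rng.random() < P_TOWARD_MORPHINE else "saline"
            else:
                room = "center"          # end rooms connect only to the center
    return time_in

print(simulate())  # ticks spent per room; the morphine room dominates the ends
```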


2020 ◽  
Vol 305 ◽  
pp. 00012 ◽  
Author(s):  
Vasyl Holinko ◽  
Yevhen Ustymenko ◽  
Volodymyr Bondarenko ◽  
Iryna Kovalevska

The purpose is to develop tools for assessing the emergency hazard of a facility for the disposal of explosive conversion products and materials. The results have been obtained by means of economic and mathematical modelling methods, using the fundamental provisions of probability theory. The main task in the disposal of conversion products and materials is the creation and organization of safe technological processes aimed at returning the material resources contained in the conversion products to the state’s economy after appropriate processing, rather than at destroying these resources. As a methodological basis for assessing the emergency hazard of a facility, the provision has been accepted that an integral measure of the hazard of the disposal process is the economic assessment of accidents. A scheme has been constructed of the sequence of events and states of the disposal facility that can lead to an accident, and on its basis a probabilistic model of accident occurrence has been developed. The proposed probabilistic model of accident initiation at a facility for the disposal of explosive conversion products and materials makes it possible to predict the behaviour of the facility based on the results of surveys of the state of the equipment and service personnel, as well as on an analysis of the environmental conditions.
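One simple way to realize such a scheme is an event-tree calculation: each accident sequence multiplies the probabilities of the events and states along its branch, and the integral hazard measure is the expected economic loss summed over all sequences. The sequences, probabilities and losses below are illustrative assumptions, not values from the study.

```python
# Hedged event-tree sketch: expected economic loss as an integral hazard measure.
import math

# (description, probabilities of events along the branch, loss if it completes)
SEQUENCES = [
    ("equipment fault -> ignition",         [1e-3, 0.2],       5e6),
    ("operator error -> spill -> ignition", [5e-3, 0.1, 0.05], 8e6),
    ("external impact -> detonation",       [1e-4, 0.3],       2e7),
]

def expected_loss(sequences):
    total = 0.0
    for _name, probs, loss in sequences:
        total += math.prod(probs) * loss   # events assumed independent
    return total

print(f"integral hazard (expected loss): {expected_loss(SEQUENCES):,.0f}")
```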


2018 ◽  
Vol 20 (5) ◽  
pp. 1100-1110 ◽  
Author(s):  
Szeląg Bartosz ◽  
Adam Kiczko ◽  
Jan Studziński ◽  
Lidia Dąbek

The study compares the annual number of weir overflows calculated by continuous simulation with a hydrodynamic model against a probabilistic model. The weir overflow for a single precipitation event was successfully modelled using logistic regression. Numerical experiments showed that the number of weir overflows calculated with the hydrodynamic model falls within the confidence intervals of the probabilistic model, which suggests that the logistic regression model can be used in practice. The probabilistic simulations also revealed that a model with a probabilistic description of the annual number of precipitation events and a model assuming an average number of such events are not consistent. The proposed methodology can be applied to the design of overflow weirs and other storm-water devices.
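The two-stage structure the study describes can be sketched as follows: a logistic regression gives the overflow probability of a single precipitation event, and a Monte Carlo loop over a random annual number of events yields the distribution of the annual overflow count. The regression coefficients, rainfall statistics and Poisson event rate below are illustrative assumptions, not the calibrated values of the paper.

```python
# Hedged sketch: per-event logistic overflow model + Monte Carlo annual count.
import numpy as np

rng = np.random.default_rng(3)
BETA = np.array([-4.0, 0.08, 0.03])   # intercept, rain depth [mm], duration [min]

def annual_overflows(mean_events_per_year=60, n_years=10_000):
    counts = np.empty(n_years)
    for i in range(n_years):
        n = rng.poisson(mean_events_per_year)     # precipitation events this year
        depth = rng.exponential(8.0, n)           # mm, illustrative
        duration = rng.exponential(120.0, n)      # min, illustrative
        z = BETA[0] + BETA[1] * depth + BETA[2] * duration
        p = 1.0 / (1.0 + np.exp(-z))              # logistic overflow probability
        counts[i] = (rng.random(n) < p).sum()
    return counts

c = annual_overflows()
print(c.mean(), np.percentile(c, [2.5, 97.5]))    # mean and 95% interval
```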


Author(s):  
Bharath K. Sriperumbudur ◽  
Kenji Fukumizu ◽  
Arthur Gretton ◽  
Bernhard Scholkopf ◽  
Gert R. G. Lanckriet

Author(s):  
Niraja Jain ◽  
B. Raghu ◽  
V. Khanaa

Dynamic cloud infrastructure provisioning is made possible by virtualization technology. Cost, agility and time to market are the key elements of cloud services. Virtualization is the software layer that interacts with multiple servers, bringing the entire pool of IT resources together and providing standardized virtual compute centers that drive the whole infrastructure. The increased pooling of shared resources helps to improve self-provisioning and the automation of service delivery. The probabilistic model proposed in this article is based on the hypothesis that accurate resource demand predictions can improve the efficiency of the virtualization layer. The probabilistic method uses the laws of combinatorics; the probability space captures both the partial certainty and the randomness of a variable, an approach that is popular in theoretical computer science. Probabilistic models provide predictions that account for the randomness of the variables. In a cloud environment, multiple factors dynamically affect resource demand: demand has a certain degree of predictability, but requirements remain random. Better predictions reduce the risk of leveraging cloud services and accelerate the development and implementation of cloud services, improving overall compliance with SLAs.
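As a loose sketch of the combinatorial idea, suppose each of n tenants independently requests a resource slot with probability p; demand then follows a binomial law, and the virtualization layer can provision the quantile that meets a target SLA. The article does not specify this exact model; the names and numbers below are assumptions for illustration.

```python
# Hedged sketch: binomial demand model for SLA-driven provisioning.
from scipy.stats import binom

n_tenants, p_request, sla = 500, 0.12, 0.999       # illustrative values
capacity = binom.ppf(sla, n_tenants, p_request)    # slots to provision
print(f"provision {capacity:.0f} slots to meet demand with probability {sla}")
```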


10.14311/1055 ◽  
2008 ◽  
Vol 48 (5) ◽  
Author(s):  
M. Svítek

This paper presents the theory of wave probabilistic models, together with their important features, such as the inclusion-exclusion rule, the product rule, the complementary principle and entanglement. These features are described mathematically, and an illustrative example on a binary time series demonstrates possible applications of the theory.
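The interference behaviour of wave probabilities can be illustrated with a two-path binary example: each outcome carries a complex amplitude whose squared modulus equals its probability, and combining paths adds amplitudes, so a phase-dependent interference term appears that classical mixing lacks. The phase value below is an assumption for illustration, not an example from the paper.

```python
# Hedged illustration of amplitude (wave) combination vs classical mixing.
import numpy as np

p0, p1 = 0.5, 0.5                            # path probabilities
phase = np.pi / 3                            # assumed relative phase
a0 = np.sqrt(p0)                             # amplitude of path 0
a1 = np.sqrt(p1) * np.exp(1j * phase)        # amplitude of path 1
p_wave = abs((a0 + a1) / np.sqrt(2)) ** 2    # equal-weight wave combination
p_classical = (p0 + p1) / 2                  # classical mixture
print(p_classical, p_wave)                   # interference shifts the wave result
```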

