A statistical model for deriving probability distributions of contamination for accidental releases

1986 ◽  
Vol 20 (6) ◽  
pp. 1249-1259 ◽  
Author(s):  
H.M. ApSimon ◽  
A.C. Davison


Entropy ◽
2020 ◽  
Vol 22 (4) ◽  
pp. 432 ◽
Author(s):  
Emmanuel Chevallier ◽  
Nicolas Guigui

This paper describes a statistical model of wrapped densities for bi-invariant statistics on the group of rigid motions of a Euclidean space. Probability distributions on the group are constructed from distributions on tangent spaces and pushed to the group by the exponential map. We provide an expression for the Jacobian determinant of the exponential map of SE(n), which makes it possible to obtain explicit expressions for the densities on the group. Beyond having explicit expressions, the strengths of this statistical model are that the densities are parametrized by their moments and are easy to sample from. Unfortunately, we are not able to provide convergence rates for density estimation. We provide instead a numerical comparison between the moment-matching estimators on SE(2) and R^3, which shows similar behavior.
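The construction the abstract describes (sample in the tangent space, push to the group through the exponential map) can be sketched for SE(2). This is an illustration of the wrapping idea only, not the authors' implementation; the closed-form exponential of se(2) used here is the standard one for planar rigid motions.

```python
import math
import random

def exp_se2(v1, v2, theta):
    """Exponential map from se(2), with tangent vector (v1, v2, theta),
    to SE(2).  Returns (R, t): a 2x2 rotation matrix and a translation."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]
    if abs(theta) < 1e-8:
        # As theta -> 0 the V(theta) matrix tends to the identity.
        t = [v1, v2]
    else:
        a = s / theta
        b = (1.0 - c) / theta
        # t = V(theta) @ (v1, v2) with V = (1/theta)[[sin, -(1-cos)], [1-cos, sin]]
        t = [a * v1 - b * v2, b * v1 + a * v2]
    return R, t

def sample_wrapped_gaussian(sigma, n, seed=0):
    """Sample a wrapped Gaussian on SE(2): draw an isotropic normal
    tangent vector and push it to the group with exp_se2."""
    rng = random.Random(seed)
    return [exp_se2(rng.gauss(0, sigma), rng.gauss(0, sigma), rng.gauss(0, sigma))
            for _ in range(n)]
```

Evaluating the wrapped density itself additionally requires the Jacobian determinant of the exponential map, which is the paper's main contribution.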


2005 ◽  
Vol 18 (10) ◽  
pp. 1524-1540 ◽  
Author(s):  
Claudia Tebaldi ◽  
Richard L. Smith ◽  
Doug Nychka ◽  
Linda O. Mearns

Abstract A Bayesian statistical model is proposed that combines information from a multimodel ensemble of atmosphere–ocean general circulation models (AOGCMs) and observations to determine probability distributions of future temperature change on a regional scale. The posterior distributions derived from the statistical assumptions incorporate the criteria of bias and convergence in the relative weights implicitly assigned to the ensemble members. This approach can be considered an extension and elaboration of the reliability ensemble averaging method. For illustration, the authors consider the output of mean surface temperature from nine AOGCMs, run under the A2 emission scenario from the Special Report on Emissions Scenarios (SRES), for boreal winter and summer, aggregated over 22 land regions and into two 30-yr averages representative of current and future climate conditions. The shapes of the final probability density functions of temperature change vary widely, from unimodal curves for regions where model results agree (or outlying projections are discounted) to multimodal curves where models that cannot be discounted on the basis of bias give diverging projections. Besides the basic statistical model, the authors consider including correlation between present and future temperature responses, and test alternative forms of probability distributions for the model error terms. It is suggested that a probabilistic approach, particularly in the form of a Bayesian model, is a useful platform from which to synthesize the information from an ensemble of simulations. The probability distributions of temperature change reveal features such as multimodality and long tails that could not otherwise be easily discerned. Furthermore, the Bayesian model can serve as an interdisciplinary tool through which climate modelers, climatologists, and statisticians can work more closely. For example, climate modelers, through their expert judgment, could contribute to the formulations of prior distributions in the statistical model.
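The weighting idea behind such ensemble combination can be illustrated with a much simpler conjugate-normal update, where each member's projection is weighted by a reliability precision. This is a deliberately reduced sketch, not the hierarchical model of the paper (which infers the weights from bias and convergence rather than taking them as given):

```python
def posterior_temperature_change(projections, precisions,
                                 prior_mean=0.0, prior_prec=1e-6):
    """Conjugate normal update: combine ensemble projections of regional
    temperature change, each weighted by a (here hypothetical, given)
    reliability precision.  Returns posterior mean and variance."""
    prec = prior_prec + sum(precisions)
    mean = (prior_prec * prior_mean
            + sum(w * x for w, x in zip(precisions, projections))) / prec
    return mean, 1.0 / prec
```

With a near-flat prior and equal precisions this reduces to the ensemble average; downweighting a biased member pulls the posterior toward the better-validated models.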


2019 ◽  
Author(s):  
Tom Griffiths ◽  
Kevin Canini ◽  
Adam N Sanborn ◽  
Danielle Navarro

Models of categorization make different representational assumptions, with categories being represented by prototypes, sets of exemplars, and everything in between. Rational models of categorization justify these representational assumptions in terms of different schemes for estimating probability distributions. However, they do not answer the question of which scheme should be used in representing a given category. We show that existing rational models of categorization are special cases of a statistical model called the hierarchical Dirichlet process, which can be used to automatically infer a representation of the appropriate complexity for a given category.
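The Dirichlet process underlying this unification can be illustrated by its Chinese restaurant process representation: with a small concentration parameter, all exemplars collapse into one cluster (a prototype-like representation); with a large one, every exemplar gets its own cluster. The sketch below samples such a partition; it illustrates the nonparametric prior only, not the paper's full hierarchical model.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process
    with concentration alpha.  Returns a list of cluster labels."""
    rng = random.Random(seed)
    counts = []   # number of items at each existing table
    labels = []
    for i in range(n):
        # Item i joins table k with probability counts[k] / (i + alpha),
        # or opens a new table with probability alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                labels.append(k)
                break
        else:
            counts.append(1)
            labels.append(len(counts) - 1)
    return labels
```

Intermediate values of alpha yield intermediate representations, which is the sense in which the Dirichlet process interpolates between prototype and exemplar models.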


2016 ◽  
Vol 14 (4) ◽  
pp. e0209 ◽  
Author(s):  
Mirko Guerrieri ◽  
Marco Fedrizzi ◽  
Francesca Antonucci ◽  
Federico Pallottino ◽  
Giulio Sperandio ◽  
...  

The estimation of the operating costs of agricultural and forestry machinery is a key factor in both planning agricultural policies and farm management. Few works have tried to estimate operating costs, and the resulting models are normally based on deterministic approaches. Conversely, a statistical model incorporates randomness: variable states are described not by unique values but by probability distributions. In this study, for the first time, a multivariate statistical model based on Partial Least Squares (PLS) was adopted to predict the fuel consumption and costs of six agricultural operations: ploughing, harrowing, fertilization, sowing, weed control and shredding. The prediction was conducted in two steps: first, a few initially selected parameters (time per surface-area unit, maximum engine power, purchase price of the tractor and purchase price of the operating machinery) were used to estimate fuel consumption; then the predicted fuel consumption, together with the initial parameters, was used to estimate the operational costs. Since the models were fitted on a very heterogeneous input dataset, they proved efficient, generalizable and robust. In detail, the results show prediction values in the test set with r always ≥ 0.91. The approach may thus prove extremely useful both for farmers (in terms of economic advantages) and at the institutional level (as an innovative and efficient tool for planning future Rural Development Programmes and the Common Agricultural Policy). In light of these advantages, the proposed approach could also be implemented on a web platform and made available to all stakeholders.
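The two-step chained structure described above (parameters → fuel, then parameters plus predicted fuel → cost) can be sketched as follows. For brevity this sketch uses ordinary least squares in place of PLS, and the training data, coefficients and units are entirely hypothetical:

```python
import numpy as np

def fit_lstsq(X, y):
    """Least-squares fit with intercept (a stand-in for the paper's PLS)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef

# Hypothetical data: [time per ha, engine power, tractor price, implement price]
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 5.0, size=(200, 4))
fuel = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.1, 200)  # L/ha
cost = 5.0 + 3.0 * fuel + 0.2 * X[:, 2] + rng.normal(0, 0.1, 200)     # EUR/ha

# Step 1: parameters -> fuel.  Step 2: parameters + predicted fuel -> cost.
coef_fuel = fit_lstsq(X, fuel)
fuel_hat = predict(coef_fuel, X)
coef_cost = fit_lstsq(np.column_stack([X, fuel_hat]), cost)
cost_hat = predict(coef_cost, np.column_stack([X, fuel_hat]))
```

Feeding the step-1 prediction into step 2, rather than the measured fuel consumption, matters in deployment, where only the machine parameters are known in advance.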


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 404 ◽  
Author(s):  
Julianna Pinele ◽  
João E. Strapasson ◽  
Sueli I. R. Costa

The Fisher–Rao distance is a measure of dissimilarity between probability distributions which, under certain regularity conditions of the statistical model, is, up to a scaling factor, the unique Riemannian metric invariant under Markov morphisms. It is related to the Shannon entropy and has been used to enlarge the perspective of analysis in a wide variety of domains, such as image processing, radar systems, and morphological classification. Here, we approach this metric in the statistical model of multivariate normal probability distributions, for which there is no explicit expression in general, by gathering known results (closed forms for submanifolds and bounds) and deriving expressions for the distance between distributions with the same covariance matrix and between distributions with mirrored covariance matrices. An application of the Fisher–Rao distance to the simplification of Gaussian mixtures using a hierarchical clustering algorithm is also presented.
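While no general closed form exists in the multivariate case, the univariate normal family is one of the known special cases: its Fisher metric is, up to scaling, that of the hyperbolic half-plane, which gives an explicit distance. The sketch below implements that standard formula as an illustration (it is not taken from the paper's multivariate derivations):

```python
import math

def fisher_rao_normal_1d(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao distance between univariate normals N(mu1, sigma1^2)
    and N(mu2, sigma2^2).  The Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2
    is sqrt(2) times the hyperbolic half-plane metric in (mu / sqrt(2), sigma),
    which yields this closed form."""
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (2.0 * sigma1 * sigma2))
```

For equal means the formula reduces to sqrt(2) |ln(sigma2 / sigma1)|, so scaling the standard deviation by e moves a distribution a distance of exactly sqrt(2).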


Author(s):  
Marie Davidian

A statistical model is a class of probability distributions assumed to contain the true distribution generating the data. In parametric models, the distributions are indexed by a finite-dimensional parameter characterizing the scientific question of interest. Semiparametric models describe the distributions in terms of a finite-dimensional parameter and an infinite-dimensional component, offering more flexibility. Ordinarily, the statistical model represents distributions for the full data intended to be collected. When elements of these full data are missing, the goal is to make valid inference on the full-data-model parameter using the observed data. In a series of fundamental works, Robins, Rotnitzky, and colleagues derived the class of observed-data estimators under a semiparametric model, under the assumption that data are missing at random, which leads to practical, robust methodology for many familiar data-analytic challenges. This article reviews semiparametric theory and the key steps in this derivation. Expected final online publication date for the Annual Review of Statistics, Volume 9 is March 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
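The simplest member of the estimator class the review discusses is the inverse-probability-weighted (Horvitz–Thompson) mean. The simulation below is a hypothetical illustration of why weighting matters under missingness at random; it is not the full augmented semiparametric estimator of Robins and Rotnitzky:

```python
import random

def ipw_mean(ys, observed, props):
    """Inverse-probability-weighted estimate of E[Y] when some Y are
    missing at random, with known observation probabilities props."""
    n = len(ys)
    return sum(y / p for y, r, p in zip(ys, observed, props) if r) / n

# Hypothetical simulation: missingness depends only on an observed
# binary covariate X, so the data are missing at random given X.
rng = random.Random(42)
n = 20000
xs = [rng.random() < 0.5 for _ in range(n)]
ys = [2.0 + (1.0 if x else 0.0) + rng.gauss(0, 1) for x in xs]   # E[Y] = 2.5
props = [0.9 if x else 0.3 for x in xs]                          # P(observed | X)
observed = [rng.random() < p for p in props]

naive = sum(y for y, r in zip(ys, observed) if r) / sum(observed)
ipw = ipw_mean(ys, observed, props)
# The complete-case mean is biased upward: units with X = 1, which have
# larger Y on average, are observed three times as often.
```

Augmenting this estimator with an outcome-regression term yields the doubly robust estimators at the heart of the theory reviewed here.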


2019 ◽  
Vol 4 ◽  
Author(s):  
Nikolai W.F. Bode ◽  
Enrico Ronchi

Pedestrian dynamics is concerned with understanding the movement patterns that arise in places where more than one person walks. Relating theoretical models to data is a crucial goal of research in this field. Statistical model fitting and model selection are a suitable approach to this problem and here we review the concepts and literature related to this methodology in the context of pedestrian dynamics. The central tenet of statistical modelling is to describe the relationship between different variables by using probability distributions. Rather than providing a critique of existing methodology or a "how to" guide for such an established research technique, our review aims to highlight broad concepts, different uses, best practices, challenges and opportunities with a focussed view on theoretical models for pedestrian behaviour. This contribution is aimed at researchers in pedestrian dynamics who want to carefully analyse data, relate a theoretical model to data, or compare the relative quality of several theoretical models. The survey of the literature we present provides many methodological starting points and we suggest that the particular challenges to statistical modelling in pedestrian dynamics make this an inherently interesting field of research.
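The fitting-and-selection workflow the review surveys can be sketched with a textbook example: fit two candidate distributions to a sample by maximum likelihood and compare them with AIC. The data here are a hypothetical walking-speed sample, not from any study in the review:

```python
import math
import random

def normal_loglik(xs):
    """Maximized log-likelihood of a normal model (MLE mean and variance)."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def exponential_loglik(xs):
    """Maximized log-likelihood of an exponential model (MLE rate)."""
    n = len(xs)
    rate = n / sum(xs)
    return n * math.log(rate) - rate * sum(xs)

def aic(loglik, k):
    """Akaike information criterion: lower is better."""
    return 2 * k - 2 * loglik

# Hypothetical walking speeds (m/s), generated here from a normal model.
rng = random.Random(7)
speeds = [max(0.1, rng.gauss(1.3, 0.2)) for _ in range(500)]
aic_normal = aic(normal_loglik(speeds), k=2)
aic_exp = aic(exponential_loglik(speeds), k=1)
```

AIC trades goodness of fit against parameter count; the same comparison applies unchanged when the candidates are competing theoretical models of pedestrian behaviour rather than simple distributions.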


1997 ◽  
Vol 161 ◽  
pp. 197-201 ◽  
Author(s):  
Duncan Steel

Abstract
Whilst lithopanspermia depends upon massive impacts occurring at a speed above some limit, the intact delivery of organic chemicals or other volatiles to a planet requires the impact speed to be below some other limit such that a significant fraction of that material escapes destruction. Thus the two opposite ends of the impact speed distributions are the regions of interest in the bioastronomical context, whereas much modelling work on impacts delivers, or makes use of, only the mean speed. Here the probability distributions of impact speeds upon Mars are calculated for (i) the orbital distribution of known asteroids; and (ii) the expected distribution of near-parabolic cometary orbits. It is found that cometary impacts are far more likely to eject rocks from Mars (over 99 percent of the cometary impacts are at speeds above 20 km/sec, but at most 5 percent of the asteroidal impacts); paradoxically, the objects impacting at speeds low enough to make organic/volatile survival possible (the asteroids) are those which are depleted in such species.


1978 ◽  
Vol 23 (11) ◽  
pp. 937-938 ◽
Author(s):  
JAMES R. KLUEGEL
