Statistical theory of probabilistic hazard maps: a probability distribution for the hazard boundary location

2018 ◽  
Author(s):  
David M. Hyman ◽  
Andrea Bevilacqua ◽  
Marcus I. Bursik

Abstract. The study of volcanic mass flow hazards in a probabilistic framework centers around systematic experimental numerical modelling of the hazardous phenomenon and the subsequent generation and interpretation of a probabilistic hazard map (PHM). For a given volcanic flow (e.g., lava flow, lahar, pyroclastic flow, etc.), the PHM is typically interpreted as the point-wise probability of flow material inundation. In the current work, we present new methods for calculating spatial representations of the mean, standard deviation, median, and modal locations of the hazard's boundary from ensembles of many deterministic runs of a physical model. By formalizing its generation and properties, we show that a PHM may be used to construct these statistical measures of the hazard boundary, which have gone unrecognized in previous probabilistic hazard analyses. Our formalism shows that a typical PHM for a volcanic mass flow not only gives the point-wise inundation probability, but also represents a set of cumulative distribution functions for the location of the inundation boundary, with a corresponding set of probability density functions. These distributions run over curves of steepest ascent on the PHM. Consequently, 2D space curves can be constructed on the map which represent the mean, median and modal locations of the likely inundation boundary. These curves give well-defined answers to the question of the likely boundary location of the area impacted by the hazard. Additionally, methods of calculation for higher moments, including the standard deviation, are presented; these take the form of map regions surrounding the mean boundary location. These measures of central tendency and variance add significant value to spatial probabilistic hazard analyses, giving a new statistical description of the probability distributions underlying PHMs. The theory presented here may be used to construct improved hazard maps, which could prove useful for planning and emergency management purposes. This formalism also allows for application to simplified processes describable by analytic solutions. In that context, the connection between the PHM, its moments, and the underlying parameter variation is explicit, allowing for better source parameter estimation from natural data and yielding insights about natural controls on those parameters.
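As a minimal sketch of the first step described above (not the authors' code; the array names, the random stand-in for model output, and the depth threshold are hypothetical), the point-wise inundation probability in a PHM is simply the fraction of ensemble members that inundate each grid cell:

```python
import numpy as np

# Hypothetical ensemble: each deterministic run yields flow depths on a common grid.
# flow_depths has shape (n_runs, ny, nx); a cell is "inundated" if depth exceeds a threshold.
rng = np.random.default_rng(0)
n_runs, ny, nx = 200, 50, 50
flow_depths = rng.gamma(shape=2.0, scale=0.5, size=(n_runs, ny, nx))  # stand-in for model output

depth_threshold = 1.0                         # m, illustrative cutoff
inundated = flow_depths > depth_threshold     # boolean inundation masks, one per run

# Point-wise inundation probability: fraction of runs that inundate each cell.
phm = inundated.mean(axis=0)                  # values in [0, 1]
print(phm.shape, phm.min(), phm.max())
```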

2019 ◽  
Vol 19 (7) ◽  
pp. 1347-1363 ◽  
Author(s):  
David M. Hyman ◽  
Andrea Bevilacqua ◽  
Marcus I. Bursik

Abstract. The study of volcanic flow hazards in a probabilistic framework centers around systematic experimental numerical modeling of the hazardous phenomenon and the subsequent generation and interpretation of a probabilistic hazard map (PHM). For a given volcanic flow (e.g., lava flow, lahar, pyroclastic flow, ash cloud), the PHM is typically interpreted as the point-wise probability of inundation by flow material. In the current work, we present new methods for calculating spatial representations of the mean, standard deviation, median, and modal locations of the hazard's boundary from ensembles of many deterministic runs of a physical model. By formalizing its generation and properties, we show that a PHM may be used to construct these statistical measures of the hazard boundary, which have gone unrecognized in previous probabilistic hazard analyses. Our formalism shows that a typical PHM for a volcanic flow not only gives the point-wise inundation probability, but also represents a set of cumulative distribution functions for the location of the inundation boundary, with a corresponding set of probability density functions. These distributions run over curves of steepest probability gradient ascent on the PHM. Consequently, 2-D space curves can be constructed on the map which represent the mean, median, and modal locations of the likely inundation boundary. These curves give well-defined answers to the question of the likely boundary location of the area impacted by the hazard. Additionally, methods of calculation for higher moments, including the standard deviation, are presented; these take the form of map regions surrounding the mean boundary location. These measures of central tendency and variance add significant value to spatial probabilistic hazard analyses, giving a new statistical description of the probability distributions underlying PHMs. The theory presented here may be used to aid construction of improved hazard maps, which could prove useful for planning and emergency management purposes. This formalism also allows for application to simplified processes describable by analytic solutions. In that context, the connection between the PHM, its moments, and the underlying parameter variation is explicit, allowing for better source parameter estimation from natural data and yielding insights about natural controls on those parameters.
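Reading boundary-location statistics off a PHM can be sketched along a single transect. Assuming, as the abstract describes, that the PHM value p(s) decreases monotonically with arc length s as the transect moves away from the source, 1 − p(s) behaves as a cumulative distribution function for the boundary location; the sketch below (synthetic transect values, not output of the authors' model) then computes the mean, median, modal, and standard-deviation summaries numerically:

```python
import numpy as np

# Synthetic transect: arc length s (km) away from the source and PHM values p(s)
# decreasing monotonically along it (illustrative numbers only).
s = np.linspace(0.0, 10.0, 501)
p = 1.0 / (1.0 + np.exp((s - 5.0) / 0.8))      # smooth drop from ~1 to ~0

F = 1.0 - p                                    # CDF of the boundary location along s
f = np.gradient(F, s)                          # PDF by numerical differentiation

mean_s   = np.trapz(s * f, s)                  # mean boundary location
median_s = np.interp(0.5, F, s)                # valid because F is increasing in s
mode_s   = s[np.argmax(f)]                     # modal boundary location
std_s    = np.sqrt(np.trapz((s - mean_s) ** 2 * f, s))

print(f"mean={mean_s:.2f} km, median={median_s:.2f} km, mode={mode_s:.2f} km, sd={std_s:.2f} km")
```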


2018 ◽  
Vol 146 (12) ◽  
pp. 4079-4098 ◽  
Author(s):  
Thomas M. Hamill ◽  
Michael Scheuerer

Abstract Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single or multimodel prediction systems. For multimodel systems, final probabilities are produced through the linear combination of PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Single prediction system postprocessed forecasts are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the Gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
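The quantile-mapping step can be illustrated with a hedged, empirical-CDF sketch. The operational scheme described above uses parametric CDFs built from the previous 60 days of forecasts and analyses plus supplemental locations; the function name and the synthetic climatologies below are purely illustrative:

```python
import numpy as np

def quantile_map(fcst_value, fcst_climo, anal_climo):
    """Map a forecast value into the analyzed climatology via empirical CDFs (sketch only)."""
    fcst_sorted = np.sort(fcst_climo)
    anal_sorted = np.sort(anal_climo)
    # Non-exceedance probability of the forecast value in its own climatology
    prob = np.searchsorted(fcst_sorted, fcst_value, side="right") / len(fcst_sorted)
    prob = np.clip(prob, 0.0, 1.0)
    # Invert the analyzed climatological CDF at that probability
    return np.quantile(anal_sorted, prob)

# Illustrative use with synthetic climatologies (forecasts biased wet)
rng = np.random.default_rng(1)
fcst_climo = rng.gamma(2.0, 3.0, size=600)   # mm
anal_climo = rng.gamma(2.0, 2.0, size=600)   # mm
print(quantile_map(10.0, fcst_climo, anal_climo))
```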


Author(s):  
Q. Liu ◽  
L. S Chiu ◽  
X. Hao

The abundance or lack of rainfall affects people's lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representation of rainfall at various spatial and temporal scales is crucial for many decision-making processes. Climate models project a warmer and wetter climate due to increases in greenhouse gases (GHG). However, the models' resolutions are often too coarse to be directly applicable at the local scales useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse-scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. TRMM Multi-satellite Precipitation Analysis (TMPA) 0.25-degree, 3-hourly gridded rainfall data for one summer are aggregated to 0.5-, 1.0-, 2.0-, and 2.5-degree and to 6-, 12-, and 24-hourly, pentad (five-day), and monthly resolutions. The probability density functions (PDFs) and cumulative distribution functions (CDFs) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov-Smirnov (KS) test, both for the mixed and the marginal distributions, and these distributions are shown to be distinct. The marginal distributions are fitted with Lognormal and Gamma distributions, and the Gamma distributions are found to fit much better than the Lognormal.
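A hedged sketch of the wet-part fitting and KS comparison described above, using synthetic rain amounts in place of TMPA data and SciPy's maximum-likelihood fits rather than whatever estimator the authors used:

```python
import numpy as np
from scipy import stats

# Synthetic "rain amounts" standing in for gridded TMPA values:
# a mixed distribution with a point mass at zero and a continuous wet part.
rng = np.random.default_rng(2)
n = 5000
wet = rng.random(n) < 0.4                        # ~40 % wet fraction
rain = np.where(wet, rng.gamma(0.8, 5.0, n), 0.0)

positive = rain[rain > 0]                        # marginal (wet-only) distribution

# Fit Gamma and Lognormal to the positive amounts (location fixed at zero)
g_shape, _, g_scale = stats.gamma.fit(positive, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(positive, floc=0)

# One-sample KS statistics against each fitted marginal distribution
ks_gamma = stats.kstest(positive, "gamma", args=(g_shape, 0, g_scale))
ks_lognorm = stats.kstest(positive, "lognorm", args=(ln_shape, 0, ln_scale))

print("Gamma KS statistic:    ", ks_gamma.statistic)
print("Lognormal KS statistic:", ks_lognorm.statistic)
```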


2018 ◽  
Vol 70 (2) ◽  
pp. 136-154
Author(s):  
Suchandan Kayal ◽  
S. M. Sunoj ◽  
B. Vineshkumar

There are several statistical models which have explicit quantile functions but do not have manageable cumulative distribution functions; examples include the Govindarajulu, various forms of the lambda, and the power-Pareto distributions. Thus, to study reliability measures for such distributions, a quantile-based tool is essential. In this article, we consider quantile versions of some well-known reliability measures in the reversed time scale. We study stochastic orders based on the reversed hazard quantile function and the mean inactivity quantile time function. Further, we discuss the relative reversed hazard quantile function order, the likelihood quantile ratio order, and the elasticity quantile order. Connections between the newly proposed orders and existing stochastic orders are established. AMS 2010 Subject Classification: 60E15, 62E10
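As an illustration of the quantile-based approach: for a distribution with quantile function Q(u) and quantile density q(u) = Q'(u), the reversed hazard quantile function is Λ(u) = [u q(u)]^(-1), a standard quantile-based identity. The sketch below (illustrative parameter values, not the authors' code) evaluates it for the Govindarajulu distribution, whose quantile function Q(u) = θ + σ((β+1)u^β − βu^(β+1)) has no closed-form CDF:

```python
import numpy as np

def govindarajulu_q(u, sigma=1.0, beta=2.0, theta=0.0):
    """Quantile function Q(u) of the Govindarajulu distribution (no closed-form CDF)."""
    return theta + sigma * ((beta + 1.0) * u**beta - beta * u**(beta + 1.0))

def govindarajulu_qdf(u, sigma=1.0, beta=2.0):
    """Quantile density function q(u) = dQ/du."""
    return sigma * beta * (beta + 1.0) * u**(beta - 1.0) * (1.0 - u)

def reversed_hazard_quantile(u, sigma=1.0, beta=2.0):
    """Reversed hazard quantile function Lambda(u) = 1 / (u * q(u))."""
    return 1.0 / (u * govindarajulu_qdf(u, sigma, beta))

u = np.linspace(0.05, 0.95, 10)
print(reversed_hazard_quantile(u, sigma=2.0, beta=3.0))
```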


2014 ◽  
Vol 29 (2) ◽  
Author(s):  
Amrutha Buddana ◽  
Tomasz J. Kozubowski

Abstract. We review several common discretization schemes and study a particular class of power-tail probability distributions on integers, obtained by discretizing the continuous Pareto II (Lomax) distribution through one of them. Our results include expressions for the density and cumulative distribution functions, probability generating function, moments and related parameters, stability and divisibility properties, stochastic representations, and limiting distributions of random sums with a discrete-Pareto number of terms. We also briefly discuss issues of simulation and estimation and extensions to a multivariate setting.
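One common discretization scheme (not necessarily the one the authors adopt) takes the integer part of a continuous Pareto II (Lomax) variable with survival function S(x) = (1 + x/σ)^(-α), giving P(X = k) = S(k) − S(k+1). A small sketch with illustrative parameter values:

```python
import numpy as np

def lomax_sf(x, alpha, sigma):
    """Survival function of the continuous Pareto II (Lomax) distribution."""
    return (1.0 + x / sigma) ** (-alpha)

def discrete_pareto_pmf(k, alpha, sigma):
    """PMF from the integer-part discretization: P(X = k) = S(k) - S(k + 1), k = 0, 1, 2, ..."""
    k = np.asarray(k, dtype=float)
    return lomax_sf(k, alpha, sigma) - lomax_sf(k + 1.0, alpha, sigma)

def discrete_pareto_cdf(k, alpha, sigma):
    """CDF: P(X <= k) = 1 - S(k + 1)."""
    return 1.0 - lomax_sf(np.asarray(k, dtype=float) + 1.0, alpha, sigma)

k = np.arange(10)
print(discrete_pareto_pmf(k, alpha=2.5, sigma=3.0))
print(discrete_pareto_cdf(k, alpha=2.5, sigma=3.0))
```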


1988 ◽  
Vol 64 (1) ◽  
pp. 299-307 ◽  
Author(s):  
E. H. Oldmixon ◽  
J. P. Butler ◽  
F. G. Hoppin

To determine the dihedral angle, alpha, at the characteristic three-way septal junctions of lung parenchyma, we examined photomicrographs of sections. The three angles, A, formed where three septal traces meet on section, were measured and found to range between approximately 50 and 170 degrees. Theoretical considerations predicted that the dispersion of alpha is much narrower than that of A. The mean of A and alpha is identically 120 degrees. The standard deviation of alpha was inferred from the cumulative distribution function of A. In lungs inflated to 30 cmH2O (VL30), the standard deviation of alpha was very small (approximately 2 degrees) and increased to approximately 6 degrees in lungs inflated to 0.4 VL30. These findings imply that at VL30 tensions exerted by septa are locally homogeneous (2% variation) and at lower lung volumes become less so (6% variation). At high distending pressures, tissue forces are thought to dominate interfacial forces, and therefore the local uniformity of tensions suggests a stress-responsive mechanism for forming or remodeling the connective tissues. The source of the local nonuniformity at lower volumes is unclear but could relate to differences in mechanical properties of alveolar duct and alveoli. Finally, local uniformity does not imply global uniformity.


1964 ◽  
Vol 3 (2) ◽  
pp. 144-152 ◽  
Author(s):  
H. Bühlmann

In practical applications of the collective theory of risk one is very often confronted with the problem of making some kind of assumption about the form of the distribution functions underlying the frequency as well as the severity of claims. Lundberg's [6] and Cramér's [3] approaches are essentially based upon the hypothesis that the number of claims occurring in a certain period obeys the Poisson distribution, whereas for the conditional distribution of the amount claimed upon occurrence of such a claim the exponential distribution is very often used. Of course, by weighting the Poisson distributions (as e.g. done by Ammeter [1]) one enlarges the class of "frequency of claims" distributions considerably, but nevertheless there remains an uneasy feeling about artificial assumptions, which are made merely for mathematical convenience and are not necessarily related to the practical problems to which the theory of risk is applied. It seems to me that, before applying the general model of the theory of risk, one should always ask the question: "How much information do we want from the mathematical model which describes the risk process?" The answer will be that in many practical cases it is sufficient to determine the mean and the variance of this process. Let me only mention rate making, experience control, refund problems and the detection of secular trends in a certain risk category. In all these cases the practical solutions seem to be sufficiently determined by mean and variance. Let us therefore attack the problem of determining the mean and variance of the risk process while trying to make as few assumptions as possible about the type of the underlying probability distributions. This approach is not original. De Finetti [5] has already proposed an approach to risk theory based only upon knowledge of the mean and variance. It is along his lines of thought, although in different mathematical form, that I wish to proceed.
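In modern notation, the mean-and-variance programme sketched here rests on the standard compound-sum identities for aggregate claims S = X_1 + ... + X_N with i.i.d. claim sizes X independent of the claim count N: E[S] = E[N]E[X] and Var[S] = E[N]Var[X] + Var[N](E[X])^2, which require no further distributional assumptions beyond finite second moments. A small numerical check, with Poisson/exponential choices used purely for illustration:

```python
import numpy as np

# Compound-sum identities:
#   E[S]   = E[N] E[X]
#   Var[S] = E[N] Var[X] + Var[N] (E[X])^2
rng = np.random.default_rng(3)
n_sim = 50_000

N = rng.poisson(lam=4.0, size=n_sim)                            # claim counts (any count model would do)
totals = np.array([rng.exponential(scale=2.0, size=n).sum() for n in N])

EN, VN = 4.0, 4.0                                               # Poisson: mean = variance = lambda
EX, VX = 2.0, 4.0                                               # exponential(scale=2): mean 2, variance 4

print("simulated mean, formula mean:", totals.mean(), EN * EX)
print("simulated var,  formula var: ", totals.var(), EN * VX + VN * EX**2)
```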


2021 ◽  
Author(s):  
Wenting Wang ◽  
Shuiqing Yin ◽  
Bofu Yu ◽  
Shaodong Wang

Abstract. The stochastic weather generator CLIGEN can simulate long-term weather sequences as input to WEPP for erosion prediction. Its use, however, has been somewhat restricted by limited observations at high spatial-temporal resolution. Long-term daily temperature, daily and hourly precipitation data from 2405 stations and daily solar radiation from 130 stations distributed across mainland China were collected to develop the most critical set of site-specific parameter values for CLIGEN. Universal Kriging (UK) with auxiliary covariables (longitude, latitude, elevation, and mean annual rainfall) was used to interpolate parameter values onto a 10 km × 10 km grid, and parameter accuracy was evaluated based on leave-one-out cross-validation. The results demonstrated that Nash-Sutcliffe efficiency coefficients (NSEs) between UK-interpolated and observed parameters were greater than 0.85 for all parameters apart from the standard deviation of solar radiation, the skewness coefficient of daily precipitation, and the cumulative distribution of relative time to peak intensity, which had relatively lower interpolation accuracy (NSE > 0.66). In addition, CLIGEN-simulated daily weather sequences using UK-interpolated and observed inputs showed consistent statistics and frequency distributions. The mean absolute discrepancy between the two sequences in the average and standard deviation of temperature was less than 0.51 °C. The mean absolute relative discrepancies for the same statistics of solar radiation, precipitation amount, duration and maximum 30-min intensity were less than 5 %. CLIGEN parameters at the 10 km resolution would meet the minimum WEPP climate requirements throughout mainland China. The dataset is available at http://clicia.bnu.edu.cn/data/cligen.html and http://doi.org/10.12275/bnu.clicia.CLIGEN.CN.gridinput.001 (Wang et al., 2020).
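For each parameter, the leave-one-out evaluation reduces to a Nash-Sutcliffe efficiency between observed station values and their held-out kriging predictions; a minimal sketch (the function name and the synthetic numbers are ours, not part of the published dataset):

```python
import numpy as np

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / (sum of squares about the observed mean)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative use: "pred" stands in for leave-one-out kriging estimates
# of a CLIGEN parameter at the held-out stations (synthetic numbers).
rng = np.random.default_rng(4)
obs = rng.normal(10.0, 2.0, size=300)
pred = obs + rng.normal(0.0, 0.7, size=300)
print(nash_sutcliffe(obs, pred))
```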


2019 ◽  
Vol 3 ◽  
pp. 1-10
Author(s):  
Jyotirmoy Sarkar ◽  
Mamunur Rashid

Background: Sarkar and Rashid (2016a) introduced a geometric way to visualize the mean based on either the empirical cumulative distribution function of raw data or the cumulative histogram of tabular data. Objective: Here, we extend the geometric method to visualize measures of spread such as the mean deviation, the root mean squared deviation and the standard deviation of similar data. Materials and Methods: We utilized elementary high-school geometry and the graph of a quadratic transformation. Results: We obtain concrete depictions of various measures of spread. Conclusion: We anticipate such visualizations will help readers understand, distinguish and remember these concepts.
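For readers who want the algebraic counterparts of the geometric constructions, the three measures of spread can be computed directly; a short sketch with an arbitrary sample (not data from the paper):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0, 8.0, 11.0])
mean = x.mean()

mean_deviation = np.abs(x - mean).mean()            # mean (absolute) deviation about the mean
rmsd = np.sqrt(np.mean((x - mean) ** 2))            # root mean squared deviation (= population SD)
sd = x.std(ddof=1)                                  # sample standard deviation (n - 1 denominator)

print(mean_deviation, rmsd, sd)
```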


2012 ◽  
Vol 9 (8) ◽  
pp. 2889-2904 ◽  
Author(s):  
I. G. Enting ◽  
P. J. Rayner ◽  
P. Ciais

Abstract. Characterisation of estimates of regional carbon budgets and processes is inherently a statistical task. In full form this means that almost all quantities used or produced are realizations or instances of probability distributions. We usually compress the description of these distributions by using some kind of location parameter (e.g. the mean) and some measure of spread or uncertainty (e.g. the standard deviation). Characterising and calculating these uncertainties, and their structure in space and time, is as important as the location parameter, but uncertainties are both hard to calculate and hard to interpret. In this paper we describe the various classes of uncertainty that arise in a process like RECCAP and describe how they interact in formal estimation procedures. We also point out the impact these uncertainties will have on the various RECCAP synthesis activities.

