MULTIVARIATE MODELS AND VARIABILITY INTERVALS: A LOCAL RANDOM SET APPROACH

Author(s):  
THOMAS FETZ

This article is devoted to the propagation of families of variability intervals, carrying the semantics of confidence limits, through multivariate functions. At a fixed confidence level, local random sets are defined whose aggregation permits the calculation of upper probabilities of events. In the multivariate case, several ways of combining the marginal random sets are highlighted to encompass both independence and unknown interaction, using random set independence and Fréchet bounds. For all cases we derive formulas for the corresponding upper probabilities and elaborate on how they relate to one another. An example from structural mechanics illustrates the method.
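As a rough illustration of how such a combination works, the sketch below computes an upper probability under random set independence by multiplying marginal focal weights; under unknown interaction, Fréchet bounds would replace the product weights. All intervals, weights, and the function g are invented for illustration, not taken from the article.

```python
# Hypothetical marginal random sets: focal intervals with weights.
# All intervals, weights, and g are invented for illustration.
focal_x = [((0.0, 2.0), 0.7), ((1.0, 3.0), 0.3)]
focal_y = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.5)]

def g(x, y):
    """Toy monotone function of the two variables."""
    return x + y

def upper_probability(event, focal_x, focal_y):
    """Upper probability of `event` under random set independence:
    sum the product weights of all focal boxes whose image interval
    can hit the event (g is monotone, so the image of a box is
    [g(a_x, a_y), g(b_x, b_y)])."""
    total = 0.0
    for (ax, bx), wx in focal_x:
        for (ay, by), wy in focal_y:
            lo, hi = g(ax, ay), g(bx, by)
            if event(lo, hi):
                total += wx * wy
    return total

# Event "g(x, y) >= 4": a box contributes if its image touches [4, inf)
p_upper = upper_probability(lambda lo, hi: hi >= 4.0, focal_x, focal_y)
```

Only the box ((0, 2), (0, 1)), with image [0, 3], misses the event, so the upper probability is 1 − 0.7·0.5 = 0.65.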

Age determinations on a portion of the total crushed rock, and on the felspar fraction of each of four widely separated samples of the red granite from the Bushveld complex are reported. A single determination from the separated biotite of one sample was made. These nine determinations lead to a mean age of 2.41 × 10^9 years [t1/2 = 6.3 × 10^10 years] or 1.92 × 10^9 years [t1/2 = 5.0 × 10^10 years]. There are no variations between individual determinations that are significant at the 99% confidence level. For the unweighted mean age the 99% confidence limits are ±0.13 × 10^9 years. Despite the low enrichment of 87Sr, the 'total rock' method shows 99% confidence limits of ±0.22 × 10^9 years for the mean of four determinations.
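Confidence limits of this kind on an unweighted mean are Student-t half-intervals. A minimal sketch of the computation, using nine invented determinations (not the paper's data):

```python
import statistics

# Nine hypothetical age determinations in units of 10^9 years
# (illustrative values, not the paper's data)
ages = [2.35, 2.48, 2.39, 2.44, 2.37, 2.46, 2.41, 2.33, 2.50]

n = len(ages)
mean_age = statistics.mean(ages)
s = statistics.stdev(ages)            # sample standard deviation
t_crit = 3.355                        # two-sided 99% Student-t quantile, df = 8
half_width = t_crit * s / n ** 0.5    # 99% confidence half-interval for the mean

print(f"mean = {mean_age:.3f} x 10^9 yr, 99% CI = +/- {half_width:.3f} x 10^9 yr")
```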


1996
Vol 28 (02)
pp. 335-336
Author(s):
Kiên Kiêu
Marianne Mora

Random measures are commonly used to describe geometrical properties of random sets. Examples are given by the counting measure associated with a point process, and the curvature measures associated with a random set with a smooth boundary. We consider a random measure with an invariant distribution under the action of a standard transformation group (translations, rigid motions, translations along a given direction, and so on). In the framework of the theory of invariant measure decomposition, the reduced moments of the random measure are obtained by decomposing the related moment measures.


2000
Vol 32 (01)
pp. 86-100
Author(s):  
Wilfrid S. Kendall

We study the probability theory of countable dense random subsets of (uncountably infinite) Polish spaces. It is shown that if such a set is stationary with respect to a transitive (locally compact) group of symmetries then any event which concerns the random set itself (rather than accidental details of its construction) must have probability zero or one. Indeed the result requires only quasi-stationarity (null-events stay null under the group action). In passing, it is noted that the property of being countable does not correspond to a measurable subset of the space of subsets of an uncountably infinite Polish space.


Author(s):  
Vladimir I. Norkin
Roger J-B Wets

In this paper we study the concentration of sample averages (Minkowski sums) of independent bounded random sets and set-valued mappings around their expectations. Sets and mappings are considered in a Hilbert space. Concentration is formulated in the form of exponential bounds on the probabilities of normalized large deviations. In a sense, the concentration phenomenon reflects the law of small numbers, describing the non-asymptotic behavior of sample averages. We sequentially consider concentration inequalities for bounded random variables, functions, vectors, sets and mappings, deriving each inequality from the preceding cases. In this way we obtain concentration inequalities with explicit constants for random sets and mappings from the sharpest available (Talagrand-type) inequalities for random functions and vectors. The most explicit inequalities are obtained in the case of discrete distributions. The results contribute to the substantiation of the Monte Carlo method in infinite-dimensional spaces.
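For intervals on the line, the Minkowski average is simply the endpoint-wise average, so the concentration of sample averages around the (selection) expectation can be observed in a few lines. This is a toy one-dimensional sketch, not the paper's Hilbert-space setting:

```python
import random

random.seed(0)

def random_interval():
    # Bounded random set: the interval [u, u + 1] with u uniform on [0, 1]
    u = random.random()
    return (u, u + 1.0)

def minkowski_average(intervals):
    # The Minkowski sum of intervals averages endpoint-wise
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return (lo, hi)

def hausdorff(i1, i2):
    # Hausdorff distance between two intervals
    return max(abs(i1[0] - i2[0]), abs(i1[1] - i2[1]))

expectation = (0.5, 1.5)  # selection expectation of [U, U+1], U ~ Uniform(0, 1)
for n in (10, 100, 10000):
    avg = minkowski_average([random_interval() for _ in range(n)])
    print(n, hausdorff(avg, expectation))  # deviation shrinks as n grows
```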


2020
Author(s):
Cristina Lopes
João Velez

For years, diatom-based biostratigraphy has set bio-events on a qualitative basis: an age is assigned based on whether or not a certain species is found. However, how many specimens are needed to consider a certain datum as certain? One, ten, 100? Moreover, each biostratigrapher sets his or her own limits: one might consider a single specimen enough, another ten. Therefore, the scale most often used is absent, rare, frequent, common, dominant or abundant, together with an explanation of what these categories mean. This is very common in, for example, IODP expeditions.

However, what would happen to these biostratigraphic levels if one applied, for example, a 95% confidence level? Moreover, what would happen to an age model if this concept were applied to all the biostratigraphic microfossils?

Here we show the differences in the Expedition 346 age model with and without confidence levels applied to diatoms. The differences can be significant: even the existence of a hiatus can be reconsidered once confidence limits are applied, turning a possible hiatus into a very slow sedimentation rate, with serious implications for the initial paleoceanographic interpretations.
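One way to make "how many specimens?" quantitative is a simple binomial detection argument: if a species makes up a fraction p of the assemblage, the chance of missing it entirely in n counted specimens is (1 − p)^n. The sketch below is an illustrative back-of-the-envelope calculation, not the authors' procedure:

```python
import math

def specimens_for_confidence(p, confidence=0.95):
    """Minimum number of specimens to count so that a species present at
    relative abundance p is detected at least once with the given confidence.
    Solves (1 - p)^n <= 1 - confidence for n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# e.g. a species making up 1% of the assemblage
n_needed = specimens_for_confidence(0.01)
```

For a species at 1% abundance this gives 299 specimens, which shows why a handful of valves is weak evidence for an absence datum.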


Author(s):  
D. Hobbs ◽  
A. P.-D. Ku

This paper outlines a method for calculating the number of inspection locations for process piping inspections. The method determines the number of piping inspection locations required for an inspection to detect a particular damage state within the confidence limits of the premised inspection's reliability. It is intended to be used for piping inspections per API-570, "Piping Inspection Code", and in the application of risk-based inspection concepts presented in API-581, "Risk Based Inspection, Base Resource Document". The method combines recognized inspection and piping engineering practices with random-field statistical tools to calculate the number of inspection locations in piping systems to a stated probabilistic confidence level. It has provisions for future applications when inspection data are available, when there is greater uncertainty in the distribution of the degradation, or when the reliability of the inspection data differs from that premised in this paper.
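A common first-pass calculation of this kind treats each inspected location as an independent Bernoulli trial, discounting the per-location hit probability by the inspection's probability of detection (POD). The sketch below is illustrative only; API-570 and API-581 tabulate their own factors and assumptions:

```python
import math

def inspection_locations(damage_fraction, pod, confidence=0.90):
    """Number of randomly chosen inspection locations so that damage present
    at `damage_fraction` of locations is found at least once with the stated
    confidence, given a probability of detection `pod` per examined location.
    Illustrative binomial sketch, not the paper's method."""
    per_location_hit = damage_fraction * pod
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - per_location_hit))

# e.g. damage at 5% of locations, 80% POD, 90% confidence target
n = inspection_locations(damage_fraction=0.05, pod=0.8, confidence=0.90)
```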


2010
Vol 29-32
pp. 1252-1257
Author(s):
Hui Xin Guo
Lei Chen
Hang Min He
Dai Yong Lin

A method to handle hybrid uncertainties, including aleatory and epistemic uncertainty, is proposed for the computation of reliability. The aleatory uncertainty is modeled as a random variable and the epistemic uncertainty is modeled with evidence theory. The two types of uncertainty are first transformed into random sets, and the limit-state function of a product is mapped into a random set by the extension principle of random sets. Then the belief function and the plausibility function of the safety event are determined. These two functions are viewed as the lower and upper cumulative distribution functions of reliability, respectively. The reliability of a product is thus bounded by two cumulative distribution functions, from which an interval estimate of reliability can be obtained. The proposed method is demonstrated with an example.
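A minimal Monte Carlo sketch of the Bel/Pl bracketing (all distributions, focal intervals, and the limit state are invented): for each focal interval of the epistemic variable, an aleatory sample contributes to belief only if the whole image interval of the limit state is safe, and to plausibility if any point of it is.

```python
import random

random.seed(1)

# Epistemic variable: evidence-theory focal intervals with BPA weights (illustrative)
focal = [((1.0, 2.0), 0.6), ((1.5, 3.0), 0.4)]

def limit_state(r, e):
    # Safe when g > 0; toy limit state: aleatory resistance r minus epistemic load e
    return r - e

N = 20000
bel = pl = 0.0
for (lo, hi), w in focal:
    safe_all = safe_any = 0
    for _ in range(N):
        r = random.gauss(2.5, 0.3)  # aleatory resistance, illustrative
        g_lo = limit_state(r, hi)   # worst case over the focal interval
        g_hi = limit_state(r, lo)   # best case over the focal interval
        safe_all += g_lo > 0        # entire image interval is safe
        safe_any += g_hi > 0        # image interval touches the safe region
    bel += w * safe_all / N
    pl += w * safe_any / N

print(f"Bel(safe) = {bel:.3f} <= Pl(safe) = {pl:.3f}")
```

The gap between Bel and Pl is exactly the interval estimate of reliability described in the abstract.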


2014
Vol 20 (1)
pp. 80-90
Author(s):
LAURENT BIENVENU
ADAM R. DAY
NOAM GREENBERG
ANTONÍN KUČERA
JOSEPH S. MILLER
...  

Abstract: Every K-trivial set is computable from an incomplete Martin-Löf random set, i.e., a Martin-Löf random set that does not compute the halting problem.


2010
Vol 138 (11)
pp. 1674-1678
Author(s):
J. REICZIGEL
J. FÖLDI
L. ÓZSVÁRI

SUMMARY: Estimation of the prevalence of disease, including the construction of confidence intervals, is essential in screening surveys as well as in monitoring disease status. In most analyses of survey data it is implicitly assumed that the diagnostic test has a sensitivity and specificity of 100%. However, this assumption is invalid in most cases. Furthermore, asymptotic methods using the normal distribution as an approximation of the true sampling distribution may not preserve the desired nominal confidence level. Here we propose exact two-sided confidence intervals for the prevalence of disease, taking into account the sensitivity and specificity of the diagnostic test. We illustrate the advantage of the methods with the results of an extensive simulation study and real-life examples.
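The core adjustment behind such intervals is the Rogan-Gladen correction of the apparent prevalence for test sensitivity and specificity; the paper's contribution is exact intervals around it, but the point estimate itself takes two lines (the numbers below are invented):

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Adjust an apparent prevalence for test sensitivity and specificity
    (Rogan-Gladen estimator), truncated to [0, 1]. Shows only the point
    adjustment; the paper constructs exact intervals around it."""
    adjusted = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(1.0, max(0.0, adjusted))

# e.g. 120 positives out of 1000 with an imperfect test (illustrative numbers)
p_hat = rogan_gladen(120 / 1000, sensitivity=0.95, specificity=0.98)
```

Note that with an imperfect test the naive estimate 0.12 overstates the true prevalence, and that a low apparent prevalence can even adjust to zero, which is one reason exact rather than normal-approximation intervals matter here.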

