Quasi-Bayesian Analysis Using Imprecise Probability Assessments And The Generalized Bayes’ Rule

2005 ◽ Vol 58 (2) ◽ pp. 209-238 ◽ Author(s): Kathleen M. Whitcomb
1999 ◽ Vol 26 (2) ◽ pp. 265-279 ◽ Author(s): Jose Manuel Corcuera ◽ Federica Giummole

Author(s): Yan Wang

Variability is the inherent randomness in a system, whereas uncertainty is due to lack of knowledge. In this paper, a generalized multiscale Markov (GMM) model is proposed to quantify variability and uncertainty simultaneously in multiscale system analysis. The GMM model is based on a new imprecise probability theory that takes the form of generalized intervals, a Kaucher or modal extension of classical set-based intervals, to represent uncertainties. The properties of the new definitions of independence and Bayesian inference are studied. Based on a new Bayes' rule with generalized intervals, three cross-scale validation approaches that incorporate variability and uncertainty propagation are also developed.
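To illustrate the flavor of interval-valued Bayesian updating referred to above, the sketch below bounds a two-hypothesis posterior over all point-valued priors and likelihoods consistent with given intervals. This is a simplified, robust-Bayes-style illustration, not the paper's Kaucher (modal) generalized-interval formulation; the function name and all numerical values are hypothetical.

```python
# Minimal sketch: lower/upper posterior bounds from interval-valued
# priors and likelihoods (illustrative only; the paper itself uses
# Kaucher/modal generalized-interval arithmetic).
from itertools import product

def interval_posterior(prior1, lik1, lik2):
    """Lower/upper posterior probability of hypothesis H1.

    prior1 : (lo, hi) interval for P(H1); P(H2) = 1 - P(H1)
    lik1   : (lo, hi) interval for P(data | H1)
    lik2   : (lo, hi) interval for P(data | H2)
    """
    values = []
    # The posterior is monotone in each argument, so its extremes occur
    # at the corners of the (prior, lik1, lik2) box.
    for p, l1, l2 in product(prior1, lik1, lik2):
        evidence = l1 * p + l2 * (1.0 - p)
        if evidence > 0:
            values.append(l1 * p / evidence)
    return min(values), max(values)

if __name__ == "__main__":
    # Hypothetical interval-valued assessments:
    lo, hi = interval_posterior(prior1=(0.3, 0.5),
                                lik1=(0.6, 0.8),
                                lik2=(0.1, 0.3))
    print(f"posterior of H1 lies in [{lo:.3f}, {hi:.3f}]")
```

The width of the resulting posterior interval reflects epistemic uncertainty in the assessments, which is the role generalized intervals play in the GMM model.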


2015 ◽ Vol 15 (6) ◽ pp. 274-283 ◽ Author(s): Ignacio Lira ◽ Dieter Grientschnig

Abstract: Let a quantity of interest, Y, be modeled in terms of a quantity X and a set of other quantities Z. Suppose that for Z there is type B information, by which we mean that it leads directly to a joint state-of-knowledge probability density function (PDF) for that set, without reference to likelihoods. Suppose also that for X there is type A information, which signifies that a likelihood is available. The posterior for X is then obtained by updating its prior with said likelihood by means of Bayes' rule, where the prior encodes whatever type B information there may be available for X. If there is no such information, an appropriate non-informative prior should be used. Once the PDFs for X and Z have been constructed, they can be propagated through the measurement model to obtain the PDF for Y, either analytically or numerically. But suppose that, at the same time, there is also information of type A, type B, or both types together for the quantity Y. By processing such information in the manner described above we obtain another PDF for Y. Which one is right? Should both PDFs be merged somehow? Is there another way of applying Bayes' rule such that a single PDF for Y is obtained that encodes all existing information? In this paper we examine what we believe should be the proper ways of dealing with such a (not uncommon) situation.
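As a rough illustration of the baseline procedure the abstract describes (before the complication of additional information for Y), the sketch below evaluates a type A posterior for X from repeated indications, assigns a type B PDF to Z directly, and propagates both numerically through a measurement model. The model Y = X + Z and all data and distributions are invented for illustration.

```python
# Minimal sketch, assuming a hypothetical measurement model Y = X + Z.
# Type A: repeated indications of X with a Gaussian likelihood and a
# non-informative prior, giving the standard scaled-and-shifted
# Student-t posterior. Type B: a rectangular PDF assigned to Z.
import numpy as np

rng = np.random.default_rng(0)

# --- Type A evaluation for X (made-up indications) ---
x_obs = np.array([10.02, 9.98, 10.05, 10.01])
n = x_obs.size
xbar, s = x_obs.mean(), x_obs.std(ddof=1)
x_samples = xbar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1, size=100_000)

# --- Type B evaluation for Z: state-of-knowledge PDF given directly ---
z_samples = rng.uniform(-0.1, 0.1, size=100_000)

# --- Numerical propagation through the measurement model ---
y_samples = x_samples + z_samples

print(f"estimate of Y:          {y_samples.mean():.4f}")
print(f"standard uncertainty:   {y_samples.std(ddof=1):.4f}")
print("95% coverage interval: ",
      np.percentile(y_samples, [2.5, 97.5]).round(4))
```

The question the paper raises arises when, in addition to this propagated PDF, direct type A or type B information for Y yields a second, possibly conflicting PDF for the same quantity.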

