A BAYESIAN METANETWORK

2005 ◽  
Vol 14 (03) ◽  
pp. 371-384 ◽  
Author(s):  
VAGAN TERZIYAN

The Bayesian network (BN) is known to be one of the most solid probabilistic modelling tools. The theory of BNs already provides several useful modifications of the classical network. Among these are context-enabled networks, such as multilevel networks and recursive multinets, which allow separate BN modelling for different combinations of contextual feature values. The main contribution of this paper is a multilevel probabilistic meta-model (the Bayesian Metanetwork), which is an extension of the traditional BN and a modification of recursive multinets. It assumes that the interoperability between component networks can itself be modelled by another BN. A Bayesian Metanetwork is a set of BNs layered on each other in such a way that the conditional or unconditional probability distributions associated with the nodes of each network depend on the probability distributions associated with the nodes of the next network. We treat the parameters (probability distributions) of a BN as random variables and allow conditional dependencies between these probabilities. Several cases of two-level Bayesian Metanetworks are presented, which consist of interrelated predictive and contextual BN models.
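
To make the layered structure concrete, here is a minimal sketch in Python of the two-level idea. All variable names and distributions are invented for illustration (they are not taken from the paper): a contextual network supplies a distribution over contexts, each context selects a different conditional probability table (CPT) in the predictive network, and prediction marginalises over contexts.

```python
# Minimal sketch of a two-level Bayesian Metanetwork with binary
# variables and hand-made distributions (illustrative, not Terziyan's
# exact formalism).

# Contextual level: P(context)
p_context = {"urban": 0.7, "rural": 0.3}

# Predictive level: one CPT P(y | x) per context value.
cpt_by_context = {
    "urban": {"x0": {"y0": 0.9, "y1": 0.1},
              "x1": {"y0": 0.4, "y1": 0.6}},
    "rural": {"x0": {"y0": 0.6, "y1": 0.4},
              "x1": {"y0": 0.2, "y1": 0.8}},
}

def predict(x):
    """P(y | x), marginalised over the contextual network."""
    result = {"y0": 0.0, "y1": 0.0}
    for ctx, p_ctx in p_context.items():
        for y, p in cpt_by_context[ctx][x].items():
            result[y] += p_ctx * p
    return result

print(predict("x0"))  # {'y0': 0.81, 'y1': 0.19}
```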

2011 ◽  
Vol 2011 ◽  
pp. 1-13 ◽  
Author(s):  
Linda Smail

Bayesian networks are graphical probabilistic models through which we can acquire, capitalize on, and exploit knowledge. Over the last decade they have become an important tool for research and applications in artificial intelligence and many other fields. This paper presents Bayesian networks and discusses the inference problem in such models. It states the problem and proposes a method to compute probability distributions, using D-separation to simplify the computation of probabilities in Bayesian networks. Given a Bayesian network over a family of random variables, the paper presents a result on the computation of the probability distribution of a subset of these variables, using a computation algorithm and D-separation properties separately, and shows the uniqueness of the obtained result.
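
As a concrete illustration of how D-separation can simplify such computations, the sketch below tests D-separation with the classic moralisation criterion: X and Y are d-separated given Z iff they are disconnected in the moralised ancestral graph of X ∪ Y ∪ Z. The three-node chain network is an invented example, not one from the paper.

```python
def ancestors(nodes, parents):
    """All nodes in `nodes` plus their ancestors in the DAG."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(x, y, z, parents):
    keep = ancestors({x, y} | set(z), parents)
    # Moralise: undirected edges between each node and its parents,
    # plus edges between co-parents ("marrying" the parents).
    adj = {n: set() for n in keep}
    for n in keep:
        ps = [p for p in parents.get(n, ()) if p in keep]
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    # X and Y are d-separated given Z iff removing Z disconnects them.
    blocked = set(z)
    stack, seen = [x], {x}
    while stack:
        n = stack.pop()
        if n == y:
            return False
        for m in adj[n] - seen - blocked:
            seen.add(m)
            stack.append(m)
    return True

# Classic chain A -> B -> C: A and C are d-separated given B.
parents = {"B": ["A"], "C": ["B"]}
print(d_separated("A", "C", {"B"}, parents))  # True
print(d_separated("A", "C", set(), parents))  # False
```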


Author(s):  
RONALD R. YAGER

We look at the issue of obtaining a variance-like measure associated with probability distributions over ordinal sets. We call these dissonance measures. We specify some general properties desired of these dissonance measures. The centrality of the cumulative distribution function in formulating the concept of dissonance is pointed out. We introduce some specific examples of measures of dissonance.
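
By way of illustration, the sketch below computes one well-known CDF-based dispersion index for ordinal distributions, the sum of F(1 − F) over the cumulative distribution F. It captures the variance-like flavour described above, though it is not necessarily one of the specific measures proposed in the paper.

```python
from itertools import accumulate

def ordinal_dispersion(probs):
    """Sum of F_i * (1 - F_i) over the cumulative distribution F.

    Zero when all mass sits on one category; maximal when the mass is
    split between the two extremes of the ordinal scale.
    """
    F = list(accumulate(probs))          # cumulative distribution
    return sum(f * (1 - f) for f in F[:-1])

print(ordinal_dispersion([1.0, 0.0, 0.0]))    # 0.0 : all mass on one category
print(ordinal_dispersion([0.5, 0.0, 0.5]))    # 0.5 : mass on the extremes
print(ordinal_dispersion([1/3, 1/3, 1/3]))    # ~0.44: spread uniformly
```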


2021 ◽  
Author(s):  
Sophie Mentzel ◽  
Merete Grung ◽  
Knut Erik Tollefsen ◽  
Marianne Stenrod ◽  
Karina Petersen ◽  
...  

Conventional environmental risk assessment of chemicals is based on a calculated risk quotient, representing the ratio of exposure to effects of the chemical, in combination with assessment factors to account for uncertainty. Probabilistic risk assessment approaches can offer more transparency by using probability distributions for exposure and/or effects to account for variability and uncertainty. In this study, a probabilistic approach using Bayesian network (BN) modelling is explored as an alternative to traditional risk calculation. BNs can serve as meta-models that link information from several sources and offer a transparent way of incorporating the required characterisation of uncertainty for environmental risk assessment. To this end, a BN has been developed and parameterised for the pesticides azoxystrobin, metribuzin, and imidacloprid. We illustrate the development from deterministic (traditional) risk calculation, via intermediate versions, to fully probabilistic risk characterisation using azoxystrobin as an example. We also demonstrate seasonal risk calculation for the three pesticides.
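
The contrast between the two approaches can be sketched as follows. The numbers and distribution parameters are invented for illustration; they are not the azoxystrobin values from the study.

```python
import random

# Hypothetical point estimates (e.g. ug/L), purely illustrative.
exposure_fixed, effect_fixed = 0.8, 2.0

# Deterministic: risk quotient = exposure / effect, concern if >= 1.
rq = exposure_fixed / effect_fixed
print(f"risk quotient: {rq:.2f}")        # 0.40 -> below the threshold

# Probabilistic: lognormal exposure and effect distributions; the risk
# metric becomes the probability that exposure exceeds the effect level.
random.seed(1)
n = 100_000
exceed = sum(
    random.lognormvariate(-0.2, 0.8) > random.lognormvariate(0.7, 0.5)
    for _ in range(n)
)
print(f"P(exposure > effect) = {exceed / n:.3f}")
```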


Author(s):  
M. JULIA FLORES ◽  
JOSE A. GÁMEZ ◽  
KRISTIAN G. OLESEN

When a Bayesian network (BN) is modified, for example by adding or deleting a node or by changing the probability distributions, we will usually need a total recompilation of the model, despite the feeling that a partial (re)compilation could have been enough. Especially in dynamic models, in which variables are added and removed very frequently, these recompilations are quite resource-consuming. Moreover, for the task of building a model, which is on many occasions an iterative process, there is a clear lack of flexibility. By the term Incremental Compilation (IC) we refer to the possibility of modifying a network while avoiding a complete recompilation to obtain the new (and different) join tree (JT). The main point we study in this work is JT-based inference in Bayesian networks. Apart from tackling the triangulation problem itself, we achieve a great improvement in compilation for BNs. We do not develop a new architecture for BN inference; rather, taking an existing JT-based framework for probability propagation, such as Hugin or Shenoy-Shafer, we design a method that can be successfully applied to obtain better performance, as the experimental evaluation shows.
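
For context, the sketch below shows the standard (non-incremental) compilation steps that IC seeks to avoid repeating after every change: moralisation of the DAG, then triangulation with a greedy min-fill elimination order, yielding the cliques from which a join tree is assembled. The example network is illustrative, not from the paper.

```python
def moralise(parents):
    """Undirected moral graph: parent-child edges plus married parents."""
    adj = {n: set() for n in parents}
    for n, ps in parents.items():
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    return adj

def triangulate(adj):
    """Greedy min-fill elimination; returns the induced cliques."""
    adj = {n: set(nb) for n, nb in adj.items()}
    cliques = []
    while adj:
        # Pick the node whose elimination adds the fewest fill-in edges.
        def fill(n):
            nb = list(adj[n])
            return sum(1 for i, p in enumerate(nb)
                       for q in nb[i + 1:] if q not in adj[p])
        n = min(adj, key=fill)
        clique = {n} | adj[n]
        if not any(clique <= c for c in cliques):
            cliques.append(clique)
        for p in adj[n]:              # add fill-in edges, drop n
            adj[p] |= adj[n] - {p, n}
            adj[p].discard(n)
        del adj[n]
    return cliques

# Example: the classic "sprinkler" network.
parents = {"cloudy": [], "sprinkler": ["cloudy"],
           "rain": ["cloudy"], "wet": ["sprinkler", "rain"]}
for c in triangulate(moralise(parents)):
    print(sorted(c))   # the cliques a join tree would be built from
```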


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or, more precisely, with the probability distribution of that random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the "disparity" between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, the uniqueness of the entropy function, and the Kullback–Leibler divergence.
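
A minimal sketch of the three quantities for finite distributions given as lists of probabilities (log base 2, so the results are in bits):

```python
from math import log2

def entropy(p):
    """H(P) = -sum p_i log2 p_i, with the convention 0 log 0 = 0."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D(P || Q) = sum p_i log2(p_i / q_i); needs q_i > 0 wherever p_i > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def conditional_entropy(joint):
    """H(Y | X) = H(X, Y) - H(X), with joint[x][y] = P(X=x, Y=y)."""
    flat = [p for row in joint for p in row]
    px = [sum(row) for row in joint]
    return entropy(flat) - entropy(px)

p, q = [0.5, 0.5], [0.9, 0.1]
print(entropy(p))            # 1.0 bit: a fair coin
print(kl_divergence(p, q))   # > 0: the disparity between the two coins
joint = [[0.25, 0.25], [0.25, 0.25]]   # X and Y independent, uniform
print(conditional_entropy(joint))      # 1.0: knowing X tells nothing about Y
```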

