Uniqueness of the Level Two Bayesian Network Representing a Probability Distribution

2011 ◽  
Vol 2011 ◽  
pp. 1-13 ◽  
Author(s):  
Linda Smail

Bayesian networks are graphical probabilistic models through which we can acquire, capitalize on, and exploit knowledge. Over the last decade they have become an important tool for research and applications in artificial intelligence and many other fields. This paper presents Bayesian networks and discusses the inference problem in such models. It states the problem and proposes a method to compute probability distributions, and it uses D-separation to simplify the computation of probabilities in Bayesian networks. Given a Bayesian network over a family of random variables, the paper presents a result on the computation of the probability distribution of a subset of those variables, obtained separately by a computation algorithm and by D-separation properties, and shows the uniqueness of the result obtained.
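
As a point of reference for the D-separation machinery this paper relies on (a standard tool, not the paper's own algorithm), here is a minimal Python sketch of the classical reachability-based d-separation test; the graph encoding and example are our own illustration.

```python
from collections import deque

def d_separated(children, X, Y, Z):
    """Standard reachability test: are X and Y d-separated given Z?

    children: dict mapping each node of a DAG to the list of its children.
    Returns True iff every path between X and Y is blocked by Z.
    """
    parents = {n: set() for n in children}
    for n, chs in children.items():
        for c in chs:
            parents[c].add(n)

    # Phase 1: ancestors of Z (needed to detect active colliders).
    ancestors, frontier = set(), list(Z)
    while frontier:
        n = frontier.pop()
        if n not in ancestors:
            ancestors.add(n)
            frontier.extend(parents[n])

    # Phase 2: breadth-first search over (node, direction) states.
    visited, queue = set(), deque([(X, "up")])
    while queue:
        node, direction = queue.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == Y and node not in Z:
            return False            # Y is reachable: not d-separated
        if direction == "up" and node not in Z:
            queue.extend((p, "up") for p in parents[node])
            queue.extend((c, "down") for c in children[node])
        elif direction == "down":
            if node not in Z:       # keep moving along the arrows
                queue.extend((c, "down") for c in children[node])
            if node in ancestors:   # collider with observed descendant
                queue.extend((p, "up") for p in parents[node])
    return True

# Collider A -> C <- B: A and B are d-separated a priori,
# but conditioning on C activates the path between them.
g = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
print(d_separated(g, "A", "B", set()))   # True
print(d_separated(g, "A", "B", {"C"}))   # False
```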

Author(s):  
Richard Neapolitan ◽  
Xia Jiang

Bayesian networks are now among the leading architectures for reasoning with uncertainty in artificial intelligence. This chapter tells their story: what they are, how and why they came into being, how we obtain them, and what they actually represent. First, it is shown that a standard application of Bayes' Theorem constitutes inference in a two-node Bayesian network. Then more complex Bayesian networks are presented. Next, the genesis of Bayesian networks and their relationship to causality are presented. A technique for learning Bayesian networks from data follows. Finally, a discussion of the philosophy of the probability distribution represented by a Bayesian network is provided.
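
For the two-node case the chapter opens with, inference really is just Bayes' Theorem. A minimal sketch with illustrative numbers (the disease/test setting is a common textbook example, not necessarily the chapter's):

```python
# Two-node network Disease -> Test: inference is just Bayes' Theorem.
p_d = 0.01                    # prior P(Disease)           (illustrative)
p_t_given_d = 0.95            # sensitivity P(Test+ | Disease)
p_t_given_not_d = 0.05        # false-positive rate P(Test+ | no Disease)

p_t = p_t_given_d * p_d + p_t_given_not_d * (1 - p_d)  # marginal P(Test+)
p_d_given_t = p_t_given_d * p_d / p_t                  # posterior
print(f"P(Disease | Test+) = {p_d_given_t:.3f}")       # ~0.161
```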


Author(s):  
M. JULIA FLORES ◽  
JOSE A. GÁMEZ ◽  
KRISTIAN G. OLESEN

When a Bayesian network (BN) is modified, for example by adding or deleting a node or by changing the probability distributions, we usually need to recompile the model completely, even when a partial (re)compilation would seem sufficient. Especially for dynamic models, in which variables are added and removed very frequently, these recompilations are quite resource-consuming. Moreover, building a model is often an iterative process, so there is a clear lack of flexibility. By the term Incremental Compilation, or IC, we refer to the possibility of modifying a network while avoiding a complete recompilation to obtain the new (and different) join tree (JT). The main point we study in this work is JT-based inference in Bayesian networks. Apart from addressing the triangulation problem itself, we achieve a great improvement in the compilation of BNs. We do not develop a new architecture for BN inference; rather, taking an existing JT-based framework for probability propagation such as Hugin or Shenoy-Shafer, we design a method that can be successfully applied to obtain better performance, as the experimental evaluation shows.
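
The paper's incremental method itself is not reproduced here, but as background, compilation of a BN into a join tree begins by moralizing the DAG (marrying co-parents and dropping edge directions) before triangulation. A minimal sketch, with our own graph encoding:

```python
def moralize(dag):
    """First step of BN compilation: 'marry' the parents of each node
    and drop edge directions, yielding the moral (undirected) graph.

    dag: dict mapping each node to the list of its parents.
    Returns the moral graph's edges as a set of frozensets.
    """
    edges = set()
    for node, parents in dag.items():
        for p in parents:                     # original edges, undirected
            edges.add(frozenset((p, node)))
        for i, p in enumerate(parents):       # connect co-parents pairwise
            for q in parents[i + 1:]:
                edges.add(frozenset((p, q)))
    return edges

# A -> C <- B, C -> D: moralization adds the 'marriage' edge A - B.
print(moralize({"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}))
```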


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
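
A minimal numerical sketch of the three quantities the chapter discusses, using base-2 logarithms (bits); the function names are our own:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x), with 0 log 0 := 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) = sum_x p(x) log2(p(x)/q(x))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

def conditional_entropy(joint):
    """H(X | Y) = H(X, Y) - H(Y) for a joint table with rows x, columns y."""
    joint = np.asarray(joint, dtype=float)
    return entropy(joint.ravel()) - entropy(joint.sum(axis=0))

p, q = [0.5, 0.5], [0.9, 0.1]
print(entropy(p))            # 1.0 bit: a fair coin is maximally uncertain
print(kl_divergence(p, q))   # > 0; zero only when p == q
joint = np.array([[0.25, 0.25],
                  [0.25, 0.25]])       # two independent fair bits
print(conditional_entropy(joint))      # 1.0: Y tells us nothing about X
```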


Author(s):  
Yoichi Motomura

Bayesian networks are probabilistic models that can be used for prediction and decision-making in the presence of uncertainty. For intelligent information processing, probabilistic reasoning based on Bayesian networks can be used to cope with uncertainty in real-world domains. To apply it, we need appropriate models and statistical learning methods to obtain them. We start by reviewing Bayesian network models, probabilistic reasoning, statistical learning, and related research. Then, we introduce applications of Bayesian networks to intelligent information processing.


2011 ◽  
Vol 52 ◽  
pp. 353-358
Author(s):  
Algimantas Bikelis ◽  
Juozas Augutis ◽  
Kazimieras Padvelskis

We consider formal asymptotic expansions of the probability distribution of sums of independent random variables. The approximation is made using infinitely divisible probability distributions.
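
The abstract does not reproduce the expansion itself, but for orientation, the classical first-order expansion of this kind (the Edgeworth expansion around the normal law, which such formal expansions refine and generalize) for the standardized sum $S_n$ reads

$$P(S_n \le x) = \Phi(x) - \varphi(x)\,\frac{\gamma_3}{6\sqrt{n}}\left(x^2 - 1\right) + O\!\left(n^{-1}\right),$$

where $\Phi$ and $\varphi$ are the standard normal distribution function and density, and $\gamma_3$ is the skewness of the summands.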


Author(s):  
Marco F. Ramoni ◽  
Paola Sebastiani

Born at the intersection of artificial intelligence, statistics, and probability, Bayesian networks (Pearl, 1988) are a representation formalism at the cutting edge of knowledge discovery and data mining (Heckerman, 1997). Bayesian networks belong to a more general class of models called probabilistic graphical models (Whittaker, 1990; Lauritzen, 1996) that arise from the combination of graph theory and probability theory, and their success rests on their ability to handle complex probabilistic models by decomposing them into smaller, amenable components. A probabilistic graphical model is defined by a graph, where nodes represent stochastic variables and arcs represent dependencies among those variables. The arcs are annotated by probability distributions shaping the interaction between the linked variables. A probabilistic graphical model is called a Bayesian network when the graph connecting its variables is a directed acyclic graph (DAG). This graph encodes conditional independence assumptions that are used to factorize the joint probability distribution of the network variables, thus making the process of learning from a large database amenable to computation. A Bayesian network induced from data can be used to investigate distant relationships between variables, as well as to make predictions and explanations, by computing the conditional probability distribution of one variable given the values of some others.
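
Concretely, the DAG factorization works as follows: for variables $X_1, \dots, X_n$ with parent sets $\mathrm{pa}(X_i)$,

$$P(x_1,\dots,x_n) = \prod_{i=1}^{n} P\big(x_i \mid \mathrm{pa}(x_i)\big).$$

For example, the diamond-shaped DAG $A \to B$, $A \to C$, $B \to D$, $C \to D$ factorizes the joint over four variables as $P(a,b,c,d) = P(a)\,P(b \mid a)\,P(c \mid a)\,P(d \mid b,c)$, replacing one large table with four small ones.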


2005 ◽  
Vol 14 (03) ◽  
pp. 371-384 ◽  
Author(s):  
VAGAN TERZIYAN

The Bayesian network (BN) is known to be one of the most solid probabilistic modeling tools. The theory of BNs already provides several useful modifications of the classical network. Among these are context-enabled networks, such as multilevel networks or recursive multinets, which can provide separate BN modelling for different combinations of contextual feature values. The main contribution of this paper is the multilevel probabilistic meta-model (Bayesian Metanetwork), an extension of the traditional BN and a modification of recursive multinets. It assumes that the interoperability between component networks can itself be modeled by another BN. A Bayesian Metanetwork is a set of BNs layered on each other in such a way that the conditional or unconditional probability distributions associated with the nodes of each network depend on the probability distributions associated with the nodes of the next network. We treat the parameters (probability distributions) of a BN as random variables and allow conditional dependencies between these probabilities. Several cases of two-level Bayesian Metanetworks, consisting of interrelated predictive and contextual BN models, are presented.
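
A toy two-level illustration of the idea (our own invented names and numbers, not Terziyan's formal construction): the higher-level contextual network selects which CPT the lower-level predictive network uses, so the lower-level parameters are themselves random.

```python
# Hypothetical sketch: a contextual (higher-level) variable selects the CPT
# of a predictive (lower-level) node, so the CPT itself is a random variable.
cpts = {
    "winter": {"rain": 0.6, "no_rain": 0.4},   # illustrative numbers
    "summer": {"rain": 0.2, "no_rain": 0.8},
}
context_prior = {"winter": 0.5, "summer": 0.5}  # top-level distribution

# Marginalizing the context out gives the effective lower-level probability:
p_rain = sum(context_prior[c] * cpts[c]["rain"] for c in context_prior)
print(p_rain)  # 0.4
```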


Information ◽  
2019 ◽  
Vol 10 (10) ◽  
pp. 294 ◽  
Author(s):  
Xingping Sun ◽  
Chang Chen ◽  
Lu Wang ◽  
Hongwei Kang ◽  
Yong Shen ◽  
...  

Since the beginning of the 21st century, research on artificial intelligence has made great progress, and Bayesian networks have gradually become one of its hotspots and important achievements. Establishing an effective Bayesian network structure is the foundation and core of the learning and application of Bayesian networks. In Bayesian network structure learning, the traditional method of constructing the network structure from expert knowledge is gradually being replaced by methods that learn the structure from data. However, because the number of possible network structures is enormous, the search space is very large, and methods that learn the structure from training data usually suffer from low precision or high complexity. The learned structure can then differ greatly from the real one, which strongly affects the reasoning and practical application of Bayesian networks. To solve this problem, a hybrid-optimization artificial bee colony algorithm is discretized and applied to structure learning, and a hybrid optimization technique for Bayesian network structure learning is proposed. Experimental simulation results show that the proposed hybrid optimization structure learning algorithm learns better structures and converges better.
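
The paper's exact discretization is not reproduced here, but the following compact Python skeleton shows the general shape of an artificial bee colony search over DAG adjacency matrices; the score function is a toy stand-in for a data-driven score such as BIC or BDeu, and the employed and onlooker phases are collapsed into one loop for brevity.

```python
import random

N_VARS, N_SOURCES, LIMIT, ITERS = 4, 6, 5, 50

def is_dag(adj):
    """Cycle check via Kahn's algorithm on the adjacency matrix."""
    indeg = [sum(adj[i][j] for i in range(N_VARS)) for j in range(N_VARS)]
    queue = [j for j in range(N_VARS) if indeg[j] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in range(N_VARS):
            if adj[u][v]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return seen == N_VARS

def score(adj):
    """Toy placeholder for a data-driven structure score (BIC/BDeu)."""
    return -abs(sum(map(sum, adj)) - 3)   # prefers ~3 edges, for the demo

def neighbor(adj):
    """Flip one random edge, keeping the graph acyclic."""
    cand = [row[:] for row in adj]
    i, j = random.sample(range(N_VARS), 2)
    cand[i][j] ^= 1
    return cand if is_dag(cand) else adj

sources = [{"adj": [[0] * N_VARS for _ in range(N_VARS)], "trials": 0}
           for _ in range(N_SOURCES)]

for _ in range(ITERS):
    # Employed + onlooker phases (collapsed): local search around sources.
    for s in sources:
        cand = neighbor(s["adj"])
        if score(cand) > score(s["adj"]):
            s["adj"], s["trials"] = cand, 0
        else:
            s["trials"] += 1
    # Scout phase: abandon exhausted sources and re-explore.
    for s in sources:
        if s["trials"] > LIMIT:
            s["adj"] = neighbor([[0] * N_VARS for _ in range(N_VARS)])
            s["trials"] = 0

best = max(sources, key=lambda s: score(s["adj"]))
print(score(best["adj"]), best["adj"])
```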


Author(s):  
Tahrima Rahman ◽  
Shasha Jin ◽  
Vibhav Gogate

Recently there has been growing interest in learning, from data, probabilistic models that admit poly-time inference, called tractable probabilistic models. Although they generalize poorly compared with intractable models, they often yield more accurate estimates at prediction time. In this paper, we seek to further explore this trade-off between generalization performance and inference accuracy by proposing a novel, partially tractable representation called cutset Bayesian networks (CBNs). The main idea in CBNs is to partition the variables into two subsets X and Y, learn an (intractable) Bayesian network that represents P(X), and learn a tractable conditional model that represents P(Y|X). The hope is that the intractable model will help improve generalization, while the tractable model, by leveraging Rao-Blackwellised sampling, which combines exact inference and sampling, will help improve prediction accuracy. To compactly model P(Y|X), we introduce a novel tractable representation called conditional cutset networks (CCNs) in which all conditional probability distributions are represented using calibrated classifiers, classifiers that typically yield higher-quality probability estimates than conventional classifiers. We show via a rigorous experimental evaluation that CBNs and CCNs yield more accurate posterior estimates than both their tractable and intractable counterparts.
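
The Rao-Blackwellised estimate the authors leverage has, in its basic evidence-free form, the shape

$$P(y) = \mathbb{E}_X\big[P(y \mid X)\big] \approx \frac{1}{N}\sum_{i=1}^{N} P\big(y \mid x^{(i)}\big), \qquad x^{(i)} \sim P(X),$$

where each sample $x^{(i)}$ is drawn from the Bayesian network over X and each inner term is computed exactly in the tractable conditional model; conditioning on Y exactly rather than sampling it is what reduces the estimator's variance. (With evidence, importance weights enter; the paper's precise scheme may differ.)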


Author(s):  
Munteanu Bogdan Gheorghe

Based on the Weibull-G power probability distribution family, we propose a new family of probability distributions, which we call the Max Weibull-G power series distributions; it may be applied to solve some reliability problems. The Max Weibull-G power series distribution is the distribution of a random variable max(X1, X2, ..., XN), where X1, X2, ... are independent Weibull-G distributed random variables and N is a natural-number-valued random variable whose distribution belongs to the power series family. The main characteristics and properties of this distribution are analyzed.
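
Under the standard power series compounding (generic notation, with F the Weibull-G distribution function here): if $P(N = n) = a_n\theta^n / A(\theta)$ with $A(\theta) = \sum_{n\ge1} a_n\theta^n$, then

$$P\big(\max(X_1,\dots,X_N) \le x\big) \;=\; \sum_{n\ge 1}\frac{a_n\theta^n}{A(\theta)}\,F(x)^n \;=\; \frac{A\big(\theta F(x)\big)}{A(\theta)},$$

which is the distribution function of the Max Weibull-G power series family.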

