Probability

Author(s):  
Andrew Gelman ◽  
Deborah Nolan

This chapter contains many classroom activities and demonstrations that help students understand basic probability calculations, including conditional probability and Bayes' rule. Many of the activities alert students to misconceptions about randomness. They create dramatic settings: the instructor discerns real coin flips from fake ones, students modify dice and coins in order to load them, and students "accused" of lying by an inaccurate simulated lie detector face their classmates. Additionally, probability models of real outcomes offer good value: first we can do the probability calculations, and then we can go back and discuss the potential flaws of the model.
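A minimal sketch of the Bayes'-rule calculation behind a lie-detector activity of this kind; the function name and the base-rate and accuracy figures are hypothetical choices for illustration, not values from the chapter.

```python
def posterior_lying(p_lie, sensitivity, false_positive_rate):
    """P(lying | detector says 'lie'), via Bayes' rule."""
    p_alarm = sensitivity * p_lie + false_positive_rate * (1 - p_lie)
    return sensitivity * p_lie / p_alarm

# Suppose 10% of students lie, the detector flags 80% of liars,
# but also flags 15% of truth-tellers.
print(posterior_lying(p_lie=0.10, sensitivity=0.80, false_positive_rate=0.15))
# ≈ 0.37: most students flagged as "lying" are in fact telling the truth.
```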

Author(s):  
M Pourmahdian ◽  
R Zoghifard

Abstract: This paper provides some model-theoretic analysis for probability (modal) logic ($PL$). It is known that this logic does not enjoy the compactness property. However, by passing to a sublogic of $PL$, namely basic probability logic ($BPL$), it is shown that this logic satisfies the compactness property. Furthermore, by drawing attention to some essential model-theoretic properties of $PL$, a version of the Lindström characterization theorem is investigated. In fact, it is verified that probability logic has the maximal expressive power among those abstract logics extending $PL$ and satisfying both the filtration and disjoint unions properties. Finally, by altering the semantics to finitely additive probability models ($\mathcal{F}\mathcal{P}\mathcal{M}$) and introducing a positive sublogic of $PL$ that includes $BPL$, it is proved that this sublogic possesses the compactness property with respect to $\mathcal{F}\mathcal{P}\mathcal{M}$.
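For orientation, the compactness property referred to here is the standard one (a reminder of the general definition, not a claim specific to this paper): for any set of sentences $\Gamma$ of the logic,

$$\Gamma \text{ has a model} \iff \text{every finite subset } \Gamma_0 \subseteq \Gamma \text{ has a model.}$$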


1992 ◽  
Vol 29 (4) ◽  
pp. 877-884 ◽  
Author(s):  
Noel Cressie ◽  
Subhash Lele

The Hammersley–Clifford theorem gives the form that the joint probability density (or mass) function of a Markov random field must take. Its exponent must be a sum of functions of variables, where each function in the summand involves only those variables whose sites form a clique. From a statistical modeling point of view, it is important to establish the converse result, namely, to give the conditional probability specifications that yield a Markov random field. Besag (1974) addressed this question by developing a one-parameter exponential family of conditional probability models. In this article, we develop new models for Markov random fields by establishing sufficient conditions for the conditional probability specifications to yield a Markov random field.
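In symbols, the form asserted by the theorem can be written (standard notation, not taken from the article) as

$$\pi(\mathbf{x}) \;\propto\; \exp\!\left\{\sum_{C \in \mathcal{C}} \psi_C(\mathbf{x}_C)\right\},$$

where $\mathcal{C}$ is the set of cliques of the neighborhood graph, $\mathbf{x}_C$ collects the variables at the sites in clique $C$, and the positivity condition is assumed so that the joint density (or mass) function is strictly positive.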


1999 ◽  
Vol 11 (5) ◽  
pp. 1155-1182 ◽  
Author(s):  
Matthew Brand

We introduce an entropic prior for multinomial parameter estimation problems and solve for its maximum a posteriori (MAP) estimator. The prior is a bias for maximally structured and minimally ambiguous models. In conditional probability models with hidden state, iterative MAP estimation drives weakly supported parameters toward extinction, effectively turning them off. Thus, structure discovery is folded into parameter estimation. We then establish criteria for simplifying a probabilistic model's graphical structure by trimming parameters and states, with a guarantee that any such deletion will increase the posterior probability of the model. Trimming accelerates learning by sparsifying the model. All operations monotonically and maximally increase the posterior probability, yielding structure-learning algorithms only slightly slower than parameter estimation via expectation-maximization and orders of magnitude faster than search-based structure induction. When applied to hidden Markov model training, the resulting models show superior generalization to held-out test data. In many cases the resulting models are so sparse and concise that they are interpretable, with hidden states that strongly correlate with meaningful categories.
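As a rough illustration of the trimming idea only (a toy sketch, not the paper's entropic MAP procedure; the transition matrix and threshold are hypothetical, and the posterior-increase guarantee described above is not checked here):

```python
import numpy as np

def trim_transitions(A, threshold=1e-3):
    """Zero out weakly supported transition probabilities, then renormalize rows."""
    A = np.where(A < threshold, 0.0, A)   # prune near-extinct parameters
    return A / A.sum(axis=1, keepdims=True)

A = np.array([[0.90,   0.0995, 0.0005],
              [0.05,   0.94,   0.01  ],
              [0.0002, 0.0008, 0.999 ]])
print(trim_transitions(A))  # sparser transition matrix, rows still sum to 1
```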


2014 ◽  
Vol 7 (3) ◽  
pp. 415-438
Author(s):  
RONNIE HERMENS

Abstract: In this paper I defend the tenability of the Thesis that the probability of a conditional equals the conditional probability of the consequent given the antecedent. This is done by adopting the view that the interpretation of a conditional may differ from context to context. Several triviality results are (re-)evaluated in this view as providing natural constraints on probabilities for conditionals and admissible changes in the interpretation. The context-sensitive approach is also used to re-interpret some of the intuitive rules for conditionals and probabilities, such as Bayes' rule, Import-Export, and Modus Ponens. I will show that, contrary to consensus, the Thesis is in fact compatible with these re-interpreted rules.
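Written out (in standard notation, not the paper's), the Thesis identifies

$$P(A \rightarrow B) \;=\; P(B \mid A), \qquad \text{whenever } P(A) > 0,$$

where $A \rightarrow B$ is the conditional "if $A$, then $B$".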


Author(s):  
Fabian Mentzer ◽  
Eirikur Agustsson ◽  
Michael Tschannen ◽  
Radu Timofte ◽  
Luc Van Gool
