Domain Adaptation of Conditional Probability Models Via Feature Subsetting

Author(s):  
Sandeepkumar Satpal ◽  
Sunita Sarawagi
Author(s):  
Andrew Gelman ◽  
Deborah Nolan

This chapter contains many classroom activities and demonstrations that help students understand basic probability calculations, including conditional probability and Bayes' rule. Many of the activities alert students to misconceptions about randomness. They create dramatic settings: the instructor discerns real coin flips from fake ones, students modify dice and coins in order to load them, and students “accused” of lying by an inaccurate simulated lie detector face their classmates. Additionally, probability models of real outcomes offer good value: first we can do the probability calculations, and then we can go back and discuss the potential flaws of the model.
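The lie-detector activity above turns on a conditional probability calculation: given that the detector sounds an alarm, how likely is the accused student actually lying? A minimal Bayes' rule sketch, with illustrative accuracy figures assumed for the exercise (they are not from the chapter):

```python
# Bayes' rule for the classroom lie-detector activity. All numbers below
# are illustrative assumptions, not figures from the chapter.
p_lying = 0.10            # assumed prior: fraction of students who lie
p_alarm_if_lying = 0.80   # assumed sensitivity of the inaccurate detector
p_alarm_if_honest = 0.15  # assumed false-alarm rate

# Total probability of an alarm, then Bayes' rule for P(lying | alarm).
p_alarm = p_alarm_if_lying * p_lying + p_alarm_if_honest * (1 - p_lying)
p_lying_given_alarm = p_alarm_if_lying * p_lying / p_alarm
print(round(p_lying_given_alarm, 3))  # → 0.372
```

Even with a fairly sensitive detector, most alarms under these assumptions point at honest students, which is exactly the kind of base-rate misconception the activity dramatizes.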


1992 ◽  
Vol 29 (4) ◽  
pp. 877-884 ◽  
Author(s):  
Noel Cressie ◽  
Subhash Lele

The Hammersley–Clifford theorem gives the form that the joint probability density (or mass) function of a Markov random field must take. Its exponent must be a sum of functions of variables, where each function in the summand involves only those variables whose sites form a clique. From a statistical modeling point of view, it is important to establish the converse result, namely, to give the conditional probability specifications that yield a Markov random field. Besag (1974) addressed this question by developing a one-parameter exponential family of conditional probability models. In this article, we develop new models for Markov random fields by establishing sufficient conditions for the conditional probability specifications to yield a Markov random field.
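The two directions of the abstract can be checked by brute force on a toy field. A sketch, assuming an Ising-style model with illustrative parameters: build the joint from clique potentials on a 4-cycle (as Hammersley–Clifford prescribes) and verify that each site's conditional takes Besag's auto-logistic form, depending only on its neighbours:

```python
from itertools import product
from math import exp

# Binary Markov random field on a 4-cycle of sites 0-1-2-3-0.
# alpha and beta are arbitrary illustrative parameters.
alpha, beta = 0.4, -0.7
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def energy(x):
    # Exponent is a sum over cliques: singletons and the four edges.
    return alpha * sum(x) + beta * sum(x[i] * x[j] for i, j in edges)

configs = list(product([0, 1], repeat=4))
z = sum(exp(energy(x)) for x in configs)
joint = {x: exp(energy(x)) / z for x in configs}

def cond_from_joint(rest):
    # P(x0 = 1 | x1, x2, x3), computed directly from the joint.
    p1 = joint[(1,) + rest]
    p0 = joint[(0,) + rest]
    return p1 / (p0 + p1)

def cond_logistic(rest):
    # Auto-logistic form: only the neighbours of site 0 (sites 1 and 3)
    # appear, as the Markov property requires.
    x1, x2, x3 = rest
    t = alpha + beta * (x1 + x3)
    return 1.0 / (1.0 + exp(-t))

for rest in product([0, 1], repeat=3):
    assert abs(cond_from_joint(rest) - cond_logistic(rest)) < 1e-12
```

The enumeration confirms that the clique-potential joint and the conditional specification describe the same field; the converse question studied in the article is when a family of such conditionals is mutually consistent and yields a valid joint.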


1999 ◽  
Vol 11 (5) ◽  
pp. 1155-1182 ◽  
Author(s):  
Matthew Brand

We introduce an entropic prior for multinomial parameter estimation problems and solve for its maximum a posteriori (MAP) estimator. The prior is a bias for maximally structured and minimally ambiguous models. In conditional probability models with hidden state, iterative MAP estimation drives weakly supported parameters toward extinction, effectively turning them off. Thus, structure discovery is folded into parameter estimation. We then establish criteria for simplifying a probabilistic model's graphical structure by trimming parameters and states, with a guarantee that any such deletion will increase the posterior probability of the model. Trimming accelerates learning by sparsifying the model. All operations monotonically and maximally increase the posterior probability, yielding structure-learning algorithms only slightly slower than parameter estimation via expectation-maximization and orders of magnitude faster than search-based structure induction. When applied to hidden Markov model training, the resulting models show superior generalization to held-out test data. In many cases the resulting models are so sparse and concise that they are interpretable, with hidden states that strongly correlate with meaningful categories.
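The bias toward "maximally structured, minimally ambiguous" models can be seen in the simplest case. A sketch, assuming a two-outcome multinomial with illustrative counts and the entropic prior P(θ) ∝ exp(−H(θ)); the crude grid search stands in for the paper's closed-form MAP estimator:

```python
from math import log

# Compare the ML estimate with the MAP estimate under an entropic prior
# P(theta) ∝ exp(-H(theta)) for a two-outcome multinomial. The prior
# rewards low entropy, so the MAP estimate should be more extreme than
# the ML estimate. Counts are illustrative.
counts = (3, 1)
ml = counts[0] / sum(counts)  # ML estimate: 0.75

def log_posterior(p):
    # log-likelihood plus log entropic prior, up to a constant
    ll = counts[0] * log(p) + counts[1] * log(1 - p)
    neg_entropy = p * log(p) + (1 - p) * log(1 - p)
    return ll + neg_entropy

# Crude grid search for the MAP estimate (fine enough for illustration).
grid = [i / 100000 for i in range(1, 100000)]
map_p = max(grid, key=log_posterior)

assert map_p > ml  # the entropic prior sharpens the estimate
```

With more parameters the same pressure drives weakly supported ones toward zero, which is what makes trimming-based structure discovery fall out of the estimation itself.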


Author(s):  
Fabian Mentzer ◽  
Eirikur Agustsson ◽  
Michael Tschannen ◽  
Radu Timofte ◽  
Luc Van Gool


Stroke ◽  
2018 ◽  
Vol 49 (Suppl_1) ◽  
Author(s):  
Jessalyn K Holodinsky ◽  
Tyler S Williamson ◽  
Andrew M Demchuk ◽  
Henry Zhao ◽  
Alan Coreas ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xiao-huai Chen ◽  
Yin-bao Cheng ◽  
Han-bin Wang ◽  
Hong-li Li ◽  
Zhen-ying Cheng ◽  
...  

Research into the misjudgment probability of product inspection based on measurement uncertainty is important and of great significance for improving the reliability of inspection results. This paper focuses on total inspection and sampling inspection methods and uses the misjudgment probability as an index to provide quantitative misjudgment risk results for both the producer and the consumer. Estimation formulas for the total-inspection misjudgment rate are deduced from the absolute probability model and the conditional probability model, respectively, and calculation methods for qualification determination and for the misjudgment rate of full-inspection results are studied. Building on the total-inspection misjudgment rate, methods for determining the misjudgment rate of sampling inspection and the qualification of measurement results are developed. The misjudgment rate of measurement results is calculated using both the exhaustive method and Monte-Carlo simulation. The estimation results show that the misjudgment probabilities calculated with absolute probability models can serve as a basis for selecting a measurement plan for product inspection, while the misjudgment probabilities calculated with conditional probability models more directly reflect the risks for both the producer and the consumer, prompting inspectors to make decisions more carefully.
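The two conditional risks discussed above can be estimated by a small Monte-Carlo simulation. A sketch, assuming a Gaussian production spread, Gaussian measurement uncertainty, and a symmetric tolerance; these distributions and limits are illustrative, not the paper's case study:

```python
import random

# Monte-Carlo estimate of the producer's risk P(reject | conforming) and
# the consumer's risk P(accept | nonconforming), when a true characteristic
# is measured with Gaussian uncertainty and compared against a tolerance.
rng = random.Random(42)
TOL = 2.0      # item conforms if |true value| <= TOL
SIGMA_U = 0.3  # standard measurement uncertainty
N = 200_000

reject_conforming = accept_nonconforming = 0
conforming = nonconforming = 0
for _ in range(N):
    true_val = rng.gauss(0.0, 1.0)                 # production spread
    measured = true_val + rng.gauss(0.0, SIGMA_U)  # measurement error
    conforms = abs(true_val) <= TOL
    accepted = abs(measured) <= TOL
    if conforms:
        conforming += 1
        reject_conforming += not accepted
    else:
        nonconforming += 1
        accept_nonconforming += accepted

producer_risk = reject_conforming / conforming
consumer_risk = accept_nonconforming / nonconforming
```

Conditioning on the true state of the item, as here, is what makes the two risks directly interpretable for the producer and the consumer, in contrast to the absolute (unconditional) misjudgment probabilities.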

