An Improved kNN Algorithm Based on Conditional Probability Distance Metric

Author(s):  
Ziyang Liu ◽  
Zhanbao Gao ◽  
Xulong Li
2020 ◽  
Vol 34 (04) ◽  
pp. 3834-3841
Author(s):  
Ujjal Kr Dutta ◽  
Mehrtash Harandi ◽  
C. Chandra Sekhar

Distance Metric Learning (DML) involves learning an embedding that brings similar examples closer while pushing dissimilar ones apart. Existing DML approaches make use of class labels to generate constraints for metric learning. In this paper, we address the less-studied problem of learning a metric in an unsupervised manner. We do not make use of class labels, but use unlabeled data to generate adversarial, synthetic constraints for learning a metric-inducing embedding. Because entropy is a measure of uncertainty, we minimize the entropy of a conditional probability distribution to learn the metric. Our stochastic formulation scales well to large datasets and performs competitively with existing metric learning methods.
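
The entropy-minimization idea can be illustrated with a short sketch. Here the conditional probability p(j | i) is defined by a softmax over negative squared embedding distances (an NCA-style construction); the linear embedding, the loss form, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: learn an embedding by minimizing the entropy of a
# softmax-over-distances conditional probability. Illustrative only.
import torch

def conditional_entropy_loss(z):
    """Mean entropy of p(j | i), a softmax over negative squared distances."""
    d2 = (z.unsqueeze(1) - z.unsqueeze(0)).pow(2).sum(-1)  # pairwise squared distances
    d2 = d2 + torch.eye(len(z)) * 1e9                      # exclude self-pairs
    p = torch.softmax(-d2, dim=1)                          # conditional probability p(j | i)
    return -(p * torch.log(p + 1e-12)).sum(dim=1).mean()

x = torch.randn(256, 64)                  # unlabeled data (synthetic placeholder)
embed = torch.nn.Linear(64, 16)           # metric-inducing embedding (assumed linear)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

for step in range(100):
    opt.zero_grad()
    loss = conditional_entropy_loss(embed(x))  # lower entropy = less uncertainty
    loss.backward()
    opt.step()
```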


Author(s):  
Laura Mieth ◽  
Raoul Bell ◽  
Axel Buchner

Abstract. The present study serves to test how positive and negative appearance-based expectations affect cooperation and punishment. Participants played a prisoner’s dilemma game with partners who either cooperated or defected. They were then given a costly punishment option: they could spend money to decrease the payoffs of their partners. Aggregated over trials, participants spent more money on punishing the defection of likable-looking and smiling partners than on punishing the defection of unlikable-looking and nonsmiling partners, but only because participants were more likely to cooperate with likable-looking and smiling partners, which gave them more opportunities for moralistic punishment. When expressed as a conditional probability, moralistic punishment did not differ as a function of the partners’ facial likability. Smiling had no effect on the probability of moralistic punishment, but punishment was milder for smiling partners than for nonsmiling ones.
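
The distinction between aggregate punishment and its conditional probability can be made concrete with a toy calculation; the counts below are hypothetical, not the study's data.

```python
# Hypothetical counts showing how aggregate punishment can differ between
# partner types while the conditional probability of moralistic punishment
# (punishing given that one cooperated and the partner defected) does not.
trials = {
    "likable":   {"opportunities": 80, "punished": 40},  # more own cooperation -> more opportunities
    "unlikable": {"opportunities": 40, "punished": 20},
}
for partner, t in trials.items():
    p_punish = t["punished"] / t["opportunities"]  # P(punish | opportunity)
    print(f"{partner}: raw punishments = {t['punished']}, "
          f"conditional probability = {p_punish:.2f}")
# Raw punishments differ (40 vs. 20), yet the conditional probability is 0.50 for both.
```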


2002 ◽  
Vol 3 (1) ◽  
pp. 30-40
Author(s):  
Joseph D. Cautilli ◽  
Donald A. Hantula

Author(s):  
E. D. Avedyan ◽  
Le Thi Trang Linh

The article presents analytical results on decision-making by the majority voting algorithm (MVA). Particular attention is paid to the case of an even number of experts. The conditional probabilities of the MVA's decisions under two hypotheses are given for an even number of experts, and their properties are investigated as functions of the conditional probability of a correct decision by independent experts of equal qualification and of the number of experts. An approach is proposed for calculating the probability that the MVA reaches the correct decision when the statistically independent experts have unequal conditional probabilities of accepting each hypothesis. The findings are illustrated by numerical and graphical calculations.
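
For intuition, here is a short sketch of the classic majority-vote computation for n independent, equally qualified experts, each correct with probability p. The strict-majority rule and the treatment of a tie (possible only for even n) as a separate outcome are assumptions for illustration, not necessarily the article's exact model.

```python
# Probability that a strict majority of n independent experts is correct,
# each expert being correct with probability p (binomial tail sum).
from math import comb

def majority_vote(n, p):
    """Return (P(correct strict majority), P(tie)) for n experts."""
    p_correct = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(n // 2 + 1, n + 1))
    p_tie = comb(n, n // 2) * (p * (1 - p))**(n // 2) if n % 2 == 0 else 0.0
    return p_correct, p_tie

for n in (4, 5, 6):
    pc, pt = majority_vote(n, 0.8)
    print(f"n={n}: P(correct)={pc:.4f}, P(tie)={pt:.4f}")
# Even n leaves nonzero tie probability, which is why that case needs special attention.
```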


2011 ◽  
Vol 36 (12) ◽  
pp. 1661-1673
Author(s):  
Jun GAO ◽  
Shi-Tong WANG ◽  
Xiao-Ming WANG

Author(s):  
Andrew Gelman ◽  
Deborah Nolan

This chapter contains many classroom activities and demonstrations to help students understand basic probability calculations, including conditional probability and Bayes' rule. Many of the activities alert students to misconceptions about randomness. They create dramatic settings: the instructor discerns real coin flips from fake ones, students modify dice and coins in order to load them, and students “accused” of lying by an inaccurate simulated lie detector face their classmates. Additionally, probability models of real outcomes offer good value: first we can do the probability calculations, and then we can go back and discuss the potential flaws of the model.
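
The lie-detector activity lends itself to a Bayes'-rule calculation. The sketch below uses illustrative numbers (the base rate and error rates are assumptions, not figures from the chapter).

```python
# Bayes' rule for an inaccurate lie detector: how likely is a flagged
# student actually lying? All probabilities below are illustrative.
p_lie = 0.10             # prior: P(lying)
p_pos_given_lie = 0.80   # sensitivity: P(flagged | lying)
p_pos_given_true = 0.15  # false-positive rate: P(flagged | truthful)

p_pos = p_pos_given_lie * p_lie + p_pos_given_true * (1 - p_lie)  # total probability
p_lie_given_pos = p_pos_given_lie * p_lie / p_pos                 # Bayes' rule

print(f"P(flagged) = {p_pos:.3f}")                 # 0.215
print(f"P(lying | flagged) = {p_lie_given_pos:.3f}")  # ~0.372: most flags are false accusations
```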


Author(s):  
Timothy Williamson

The book argues that our use of conditionals is governed by imperfectly reliable heuristics, in the psychological sense of fast and frugal (or quick and dirty) ways of assessing them. The primary heuristic is this: to assess ‘If A, C’, suppose A and on that basis assess C; whatever attitude you take to C conditionally on A (such as acceptance, rejection, or something in between) take unconditionally to ‘If A, C’. This heuristic yields both the equation of the probability of ‘If A, C’ with the conditional probability of C on A and standard natural deduction rules for the conditional. However, these results can be shown to make the heuristic implicitly inconsistent, and so less than fully reliable. There is also a secondary heuristic: pass conditionals freely from one context to another under normal conditions for acceptance of sentences on the basis of memory and testimony. The effect of the secondary heuristic is to undermine interpretations on which ‘if’ introduces a special kind of context-sensitivity. On the interpretation which makes best sense of the two heuristics, ‘if’ is simply the truth-functional conditional. Apparent counterexamples to truth-functionality are artefacts of reliance on the primary heuristic in cases where it is unreliable. The second half of the book concerns counterfactual conditionals, as expressed with ‘if’ and ‘would’. It argues that ‘would’ is an independently meaningful modal operator for contextually restricted necessity: the meaning of counterfactuals is simply that derived compositionally from the meanings of their constituents, including ‘if’ and ‘would’, making them contextually restricted strict conditionals.
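
The equation mentioned above, identifying the probability of a conditional with the corresponding conditional probability (often discussed as "the Equation" or Adams' thesis), can be stated as follows; the formalization is a standard rendering, not quoted from the book.

```latex
% The Equation: the probability of the conditional equals the
% conditional probability of the consequent given the antecedent.
P(\text{If } A,\ C) \;=\; P(C \mid A), \qquad \text{provided } P(A) > 0.
```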

