Covid conspiracies: misleading evidence can be more damaging than no evidence at all

2020 ◽ pp. 1-2 ◽ Author(s): Sally McManus, Joanna D'Ardenne, Simon Wessely
1980 ◽ Vol 37 (1) ◽ pp. 81-89 ◽ Author(s): Peter D. Klein

BMJ ◽ 2009 ◽ Vol 339 (sep15 2) ◽ pp. b3801-b3801 ◽ Author(s): C. Dyer

2020 ◽ pp. 414-416 ◽ Author(s): Jody Azzouni

The hangman/surprise-examination/prediction paradox is solved. It is not solved by denying knowledge closure (although knowledge closure is false). It is not solved by denying KK or denying that knowing p implies other iterated knowing attitudes (although these are false). It is not solved by misleading evidence causing the students to lose knowledge, because students cannot lose knowledge this way. It is solved by showing that a tacit assumption (that what is being said to the students/prisoner is informative) is overlooked, and that inferences by contradiction are invalid if assumptions are left out. The phenomenology of the surprise-exam paradox is explored to explain why this solution has been missed. What is crucial is that in many cases the students/prisoner know(s) there will be a surprise exam/execution by an inference from what the teacher/judge meant to say, and not directly by the literal application of what he did say.


2009 ◽ Vol 68 (2) ◽ pp. 260-265 ◽ Author(s): S.V. Subramanian, Malavika A. Subramanyam, Sakthivel Selvaraj, Ichiro Kawachi

2017 ◽ Author(s): Angelika Stefan, Quentin Frederik Gronau, Felix D. Schönbrodt, Eric-Jan Wagenmakers

Well-designed experiments are likely to yield compelling evidence with efficient sample sizes. Bayes Factor Design Analysis (BFDA) is a recently developed methodology that allows researchers to balance the informativeness and efficiency of their experiment (Schönbrodt & Wagenmakers, 2017). With BFDA, researchers can control the rate of misleading evidence and, in addition, plan for a target strength of evidence. BFDA can be applied to fixed-N and sequential designs. In this paper, we provide a tutorial-style introduction to BFDA and generalize the method to informed prior distributions. We also present a user-friendly web-based BFDA application that allows researchers to conduct BFDAs with ease. Two practical examples highlight how researchers can use a BFDA to plan for informative and efficient research designs.
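The abstract does not include code, but the core of a fixed-N BFDA can be conveyed in a short simulation. The sketch below is a minimal illustration, not the authors' BFDA package or web application: it simulates one-sample data under H1 and under H0 and tallies how often a BIC-approximated Bayes factor (a stand-in for the informed-prior Bayes factors the tutorial covers) crosses an evidence threshold. The effect size d = 0.5, n = 100, and threshold of 10 are illustrative assumptions.

```python
# Minimal Monte Carlo sketch of a fixed-n Bayes Factor Design Analysis.
# Uses the BIC approximation to the Bayes factor for a one-sample test
# (mean = 0 vs. mean free); all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def bf10_bic(x):
    """BIC-approximated Bayes factor for H1 (free mean) vs. H0 (mean = 0)."""
    n = len(x)
    rss0 = np.sum(x ** 2)               # residual sum of squares under H0
    rss1 = np.sum((x - x.mean()) ** 2)  # residual sum of squares under H1
    return (rss0 / rss1) ** (n / 2) / np.sqrt(n)

def fixed_n_bfda(d, n, threshold=10, n_sim=5000):
    """Distribution of evidence for a fixed-n design with true effect d."""
    bfs = np.array([bf10_bic(rng.normal(d, 1, n)) for _ in range(n_sim)])
    return {"P(BF10 >= thr)": np.mean(bfs >= threshold),
            "P(BF10 <= 1/thr)": np.mean(bfs <= 1 / threshold),  # misleading if d > 0
            "P(weak)": np.mean((bfs > 1 / threshold) & (bfs < threshold))}

print("Under H1 (d = 0.5, n = 100):", fixed_n_bfda(0.5, 100))
print("Under H0 (d = 0.0, n = 100):", fixed_n_bfda(0.0, 100))
```

Running both calls shows the two quantities a BFDA balances: the probability of compelling evidence and the probability of misleading or weak evidence at the planned sample size.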


2017 ◽ Author(s): Marian Grendar, George G. Judge

A measure of statistical evidence should permit sample-size determination so that the probability M of obtaining (strong) misleading evidence can be held as low as desired. On this desideratum the p-value fails completely: it leads either to an arbitrary sample size if M ≥ 0.01, or to no sample size at all if M < 0.01. Unlike the p-value, the ratio of likelihoods, the ratio of posteriors, and the Bayes factor all permit controlling the probability of misleading evidence through the sample size.
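As a worked illustration of this control (not the authors' own example), consider the simplest likelihood-ratio setting: two fixed normal means with known sigma. There the probability of observing a likelihood ratio of at least k in favor of the false hypothesis has a closed form that shrinks with n (cf. Royall, 2000), so the smallest adequate sample size can be read off directly. The effect size delta = 0.5, threshold k = 8, and target M = 0.01 below are illustrative assumptions.

```python
# Minimal sketch: holding the probability of misleading evidence below a
# target by choosing n, for a likelihood ratio between two fixed normal
# means with known sigma. All numeric choices are illustrative.
import numpy as np
from scipy.stats import norm

def prob_misleading(n, delta, k):
    """P(likelihood ratio >= k for the false hypothesis) at sample size n,
    where delta = |mu1 - mu0| / sigma. Under the true mean, the log
    likelihood ratio for the false mean is N(-n*delta^2/2, n*delta^2),
    which yields this closed form."""
    s = np.sqrt(n) * delta
    return norm.cdf(-s / 2 - np.log(k) / s)

def smallest_n(delta, k, target):
    """Smallest n holding the probability of misleading evidence below target."""
    n = 1
    while prob_misleading(n, delta, k) > target:
        n += 1
    return n

# Hold P(strong misleading evidence, LR >= 8) below 0.01 for delta = 0.5:
print(smallest_n(delta=0.5, k=8, target=0.01))
```

No analogous computation exists for the p-value, which is the abstract's point: the p-value offers no dial that drives M below an arbitrary target.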


2016 ◽ Author(s): Felix D. Schönbrodt, Eric-Jan Wagenmakers

A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs: (a) a fixed-n design; (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either H1 or H0; and (c) a modified SBF design that defines a maximal sample size at which data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations, and we equip researchers with the necessary information to compute their own Bayesian design analyses.
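A minimal Monte Carlo sketch of design (c), the SBF design with a maximal sample size, is given below. It assumes a one-sample setting and substitutes a BIC-approximated Bayes factor for the default Bayes factors the paper uses; n_min = 10, n_max = 200, the threshold of 10, and the true effect d = 0.5 are illustrative choices, not the paper's.

```python
# Minimal Monte Carlo sketch of a modified Sequential Bayes Factor design:
# test after each added participant, stop at strong evidence for H1 or H0,
# or at a maximal sample size. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def bf10_bic(x):
    """BIC-approximated Bayes factor for H1 (free mean) vs. H0 (mean = 0)."""
    n = len(x)
    rss0, rss1 = np.sum(x ** 2), np.sum((x - x.mean()) ** 2)
    return (rss0 / rss1) ** (n / 2) / np.sqrt(n)

def sbf_trial(d, n_min=10, n_max=200, threshold=10):
    """Run one sequential study; return (final BF10, sample size at stopping)."""
    x = list(rng.normal(d, 1, n_min))
    while True:
        bf = bf10_bic(np.array(x))
        if bf >= threshold or bf <= 1 / threshold or len(x) >= n_max:
            return bf, len(x)
        x.append(rng.normal(d, 1))  # add one participant and test again

results = [sbf_trial(d=0.5) for _ in range(2000)]
bfs = np.array([r[0] for r in results])
ns = np.array([r[1] for r in results])
print("P(stop with BF10 >= 10):", np.mean(bfs >= 10))
print("P(misleading, BF10 <= 1/10):", np.mean(bfs <= 0.1))
print("Expected sample size:", ns.mean())
```

Relative to a fixed-n design, the stopping rule trades a fixed sample size for a distribution of stopping points with a smaller expectation, which is the efficiency gain the abstract describes.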


2020 ◽ pp. 320-342 ◽ Author(s): Jody Azzouni

Knowledge does not require confidence. An agent may know without confidence, because of misleading evidence or for other reasons, and an agent may not believe what she knows. Misleading evidence never causes agents to lose knowledge. The vagueness of an expression may be visible to speakers or invisible: in the case of “bald” it is visible; for “know” it is not, because knowledge standards are invisible. Vagueness is analyzed as epistemic in the sense that our ignorance of whether a word applies in a case places no metaphysical constraints on the facts. Agential standards for evidence are also tri-scoped and application-indeterminate: there are cases where such standards determine no answer (knows or not), and cases where it is indeterminate whether or not the standards determine an answer. Because Timothy Williamson’s argument against KK presupposes that knowledge requires confidence, his argument fails.


2019 ◽ pp. 105-123 ◽ Author(s): Sophie Horowitz

Evidence can be misleading: it can rationalize raising one’s confidence in false propositions, and lowering one’s confidence in the truth. But can a rational agent know that her total evidence supports a (particular) falsehood? It seems not: if we could see ahead of time that our evidence supported a false belief, then we could avoid believing what our evidence supported, and hence avoid being misled. So, it seems, evidence cannot be predictably misleading. This chapter develops a new problem for higher-order evidence: it is predictably misleading. It then examines a radical strategy for explaining higher-order evidence, according to which there are two distinct epistemic norms at work in the relevant cases. Finally, the chapter suggests that mainstream accounts of higher-order evidence may be able to answer the challenge after all. But to do so, they must deny that epistemic rationality requires believing what is likely given one’s evidence.

