Bayesian Conditionalization
Recently Published Documents

TOTAL DOCUMENTS: 11 (five years: 2)
H-INDEX: 3 (five years: 0)

Erkenntnis, 2021
Author(s): Richard Pettigrew

Abstract: Rescorla (Erkenntnis, 2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla’s version follows as a special case of it. Second, I want to show how to generalise R. A. Briggs and Richard Pettigrew’s Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs and Pettigrew in Noûs, 2018). In both cases, these arguments proceed by first establishing a very general reflection principle.
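The update rule these arguments defend is easy to state concretely. Below is a minimal sketch of Bayesian Conditionalization on a finite probability space; the worlds and numbers are invented for illustration and do not come from the paper:

```python
# Minimal sketch of Bayesian Conditionalization on a finite space.
# Worlds and prior probabilities (illustrative numbers only).
prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def conditionalize(p, evidence):
    """Posterior after becoming certain of `evidence` (a set of worlds):
    P_new(w) = P(w) / P(E) for w in E, and 0 otherwise."""
    p_e = sum(p[w] for w in evidence)
    return {w: (p[w] / p_e if w in evidence else 0.0) for w in p}

posterior = conditionalize(prior, {"w1", "w3"})
# P(E) = 0.6, so w1 is renormalized to 0.4/0.6 and w3 to 0.2/0.6.
```

The "improved" arguments discussed above concern what justifies this rule, not its mechanics: the renormalization step itself is uncontroversial.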



2021, Vol. 130 (1), pp. 1-43
Author(s): John Hawthorne, Maria Lasonen-Aarnio

The main aims in this article are to discuss and criticize the core thesis of a position that has become known as phenomenal conservatism. According to this thesis, its seeming to one that p provides enough justification for a belief in p to be prima facie justified (a thesis the article labels Standard Phenomenal Conservatism). This thesis captures the special kind of epistemic import that seemings are claimed to have. To get clearer on this thesis, the article embeds it, first, in a probabilistic framework in which updating on new evidence happens by Bayesian conditionalization, and, second, in a framework in which updating happens by Jeffrey conditionalization. The article spells out problems for both views, and then generalizes some of these to nonprobabilistic frameworks. The main theme of the discussion is that the epistemic import of a seeming (or experience) should depend on its content in a plethora of ways to which phenomenal conservatism is insensitive.
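Jeffrey conditionalization, the second updating rule the article considers, drops the certainty requirement: experience fixes a new probability q for an evidence partition rather than sending it to 1. A minimal sketch, with an invented probability space for illustration:

```python
def jeffrey_update(p, evidence, q):
    """Jeffrey conditionalization over the partition {E, not-E}:
    P_new(w) = q * P(w|E) for w in E, (1 - q) * P(w|not-E) otherwise."""
    p_e = sum(pr for w, pr in p.items() if w in evidence)
    return {w: (q * pr / p_e if w in evidence else (1 - q) * pr / (1 - p_e))
            for w, pr in p.items()}

prior = {"w1": 0.4, "w2": 0.3, "w3": 0.3}
# Experience shifts P({w1, w2}) from 0.7 to 0.9 without delivering certainty.
post = jeffrey_update(prior, {"w1", "w2"}, 0.9)
```

Setting q = 1 recovers ordinary Bayesian conditionalization as a special case, which is why the two frameworks invite the parallel treatment the article gives them.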



2019, Vol. 50 (2), pp. 159-173
Author(s): Lisa Cassell

Abstract: Lange (2000) famously argues that although Jeffrey Conditionalization is non-commutative over evidence, it’s not defective in virtue of this feature. Since reversing the order of the evidence in a sequence of updates that don’t commute does not reverse the order of the experiences that underwrite these revisions, the conditions required to generate commutativity failure at the level of experience will fail to hold in cases where we get commutativity failure at the level of evidence. If our interest in commutativity is, fundamentally, an interest in the order-invariance of information, an updating sequence that does not violate such a principle at the more fundamental level of experiential information should not be deemed defective. This paper claims that Lange’s argument fails as a general defense of the Jeffrey framework. Lange’s argument entails that the inputs to the Jeffrey framework differ from those of classical Bayesian Conditionalization in a way that makes them defective. Therefore, either the Jeffrey framework is defective in virtue of not commuting its inputs, or else it is defective in virtue of commuting the wrong kinds of inputs.
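The non-commutativity at issue is easy to exhibit numerically. With the standard Jeffrey rule and an invented three-world space, two updates on the same partition yield different final credences depending on their order, because the later update simply overwrites the partition’s probability:

```python
def jeffrey_update(p, evidence, q):
    # Jeffrey rule over the partition {E, not-E}.
    p_e = sum(pr for w, pr in p.items() if w in evidence)
    return {w: (q * pr / p_e if w in evidence else (1 - q) * pr / (1 - p_e))
            for w, pr in p.items()}

prior = {"w1": 0.25, "w2": 0.25, "w3": 0.5}
e = {"w1", "w2"}

# Shift P(E) to 0.8 and then to 0.4 -- and in the reverse order.
ab = jeffrey_update(jeffrey_update(prior, e, 0.8), e, 0.4)
ba = jeffrey_update(jeffrey_update(prior, e, 0.4), e, 0.8)

# The final probability of E is whichever constraint came last.
p_e_ab = ab["w1"] + ab["w2"]   # 0.4
p_e_ba = ba["w1"] + ba["w2"]   # 0.8
```

Lange’s point, and Cassell’s reply, concern whether this order-sensitivity at the level of evidential inputs is the epistemically relevant kind; the arithmetic of the failure itself is uncontested.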



Author(s): Jan Sprenger, Stephan Hartmann

This chapter sets the stage for what follows, introducing the reader to the philosophical principles and the mathematical formalism behind Bayesian inference and its scientific applications. We explain and motivate the representation of graded epistemic attitudes (“degrees of belief”) by means of specific mathematical structures: probabilities. Then we show how these attitudes are supposed to change upon learning new evidence (“Bayesian Conditionalization”), and how all this relates to theory evaluation, action, and decision-making. After sketching the different varieties of Bayesian inference, we present Causal Bayesian Networks as an intuitive graphical tool for performing Bayesian inference, and we give an overview of the contents of the book.
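The role of Causal Bayesian Networks described here can be illustrated with the smallest possible example. The following sketch uses an invented two-node network, Cause → Effect, with made-up numbers: the joint distribution factorizes along the graph, and diagnostic inference is then just conditionalization on the effect:

```python
# A two-node causal Bayesian network, Cause -> Effect, evaluated by
# enumeration. All names and numbers are illustrative assumptions.
p_cause = 0.3                  # P(Cause)
p_effect_given = {True: 0.9,   # P(Effect | Cause)
                  False: 0.2}  # P(Effect | not Cause)

def joint(c, e):
    """Joint probability, factorized along the graph: P(c, e) = P(c) * P(e|c)."""
    pc = p_cause if c else 1 - p_cause
    pe = p_effect_given[c] if e else 1 - p_effect_given[c]
    return pc * pe

# Conditionalizing on observing the effect (diagnostic inference):
p_cause_given_effect = joint(True, True) / (joint(True, True) + joint(False, True))
# = 0.27 / (0.27 + 0.14)
```

In larger networks the same factorization is what makes inference tractable: each node stores only a conditional table given its parents.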



Author(s): Jan Sprenger, Stephan Hartmann

Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: first, by generalizing Bayesian Conditionalization to the minimization of an appropriate divergence between the prior and posterior probability distributions; second, by representing the relevant causal relations and the implied conditional independence relations in a Bayesian network that constrains both prior and posterior. We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem) and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.
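The first strategy, divergence minimization, reduces to ordinary conditionalization when the learned constraint is certainty in a proposition. A small numerical check (invented numbers, and a coarse grid search rather than a proper optimizer) illustrates this: among candidate posteriors satisfying the constraint, the conditionalized posterior minimizes the Kullback–Leibler divergence to the prior:

```python
import math

prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
# Learned constraint: certainty in E = {w1, w2}, i.e. P(w3) = 0.

def kl(post, pri):
    """KL divergence D(post || pri), with the convention 0 * log(0/x) = 0."""
    return sum(q * math.log(q / pri[w]) for w, q in post.items() if q > 0)

# Candidate posteriors satisfying the constraint: P(w1) = t, P(w2) = 1 - t.
candidates = [{"w1": t / 1000, "w2": 1 - t / 1000, "w3": 0.0}
              for t in range(1, 1000)]
best = min(candidates, key=lambda post: kl(post, prior))

# The minimum sits at the conditionalization ratio 0.5 / 0.8 = 0.625.
```

The interest of the proposal above lies in constraints that certainty-conditionalization cannot express, such as a learned conditional probability; the same divergence-minimization machinery then still applies.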



Episteme, 2018, Vol. 17 (1), pp. 64-72
Author(s): Ittay Nissan-Rozen

Abstract: I present a puzzle about the epistemic role that beliefs about experts’ beliefs play in a rational agent’s system of beliefs. It is shown that the following claim leads in some cases to highly unintuitive conclusions: that an expert’s degree of belief in a proposition, A, screens off the evidential support that another proposition, B, gives to A, in cases where the expert knows and is certain about whether B is true. I suggest a solution to the puzzle on which evidential screening-off is rejected, but show that the price of this solution is giving up either the very idea of deferring to an expert’s opinion or Bayesian conditionalization.



Author(s):  
Michael Titelbaum

An agent's self-locating credences capture her opinions about who she is, where she is, and what time it is. Most authors agree that self-locating credences cannot be rationally updated simply by applying traditional Bayesian conditionalization. After explaining why this is, I catalog alternative updating schemes that have been proposed for self-locating credence. I separate those schemes into three broad approaches: ‘shifting schemes’, ‘stable base schemes’, and ‘demonstrative schemes’. Each approach solves particular problems but has its particular blindspots. I then suggest that the Sleeping Beauty Problem has generated so much controversy in the literature because it falls into the blindspots of all three types of updating schemes.



2006, pp. 243-255
Author(s): Richard Otte


1999, Vol. 96 (6), pp. 294-324
Author(s): Marc Lange


1994, Vol. 45 (2), pp. 451-466
Author(s): Colin Howson, Allan Franklin

