Jeffrey Conditionalization
Recently Published Documents

TOTAL DOCUMENTS: 17 (five years: 5)
H-INDEX: 5 (five years: 1)

Synthese, 2021
Author(s): Patryk Dziurosz-Serafinowicz, Dominika Dziurosz-Serafinowicz

Abstract: We explore the question of whether cost-free uncertain evidence is worth waiting for in advance of making a decision. A classical result in Bayesian decision theory, known as the value of evidence theorem, says that, under certain conditions, when you update your credences by conditionalizing on some cost-free and certain evidence, the subjective expected utility of obtaining this evidence is never less than the subjective expected utility of not obtaining it. We extend this result to a type of update method, a variant of Judea Pearl’s virtual conditionalization, where uncertain evidence is represented as a set of likelihood ratios. Moreover, we argue that focusing on this method rather than on the widely accepted Jeffrey conditionalization enables us to show that, under a fairly plausible assumption, gathering uncertain evidence not only maximizes expected pragmatic utility, but also minimizes expected epistemic disutility (inaccuracy).
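For readers unfamiliar with the two update rules contrasted above, the following is a minimal Python sketch of Jeffrey conditionalization (the new probabilities of the evidence partition are specified directly) next to a Pearl-style virtual-evidence update (the shift is specified by likelihood ratios). The function names, toy numbers, and propositions are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (illustrative, not the paper's model): Jeffrey conditionalization
# vs. a Pearl-style "virtual evidence" update specified by likelihood ratios.

def jeffrey_update(joint, new_partition_probs):
    """joint[(a, e)] = P(A=a, E=e); new_partition_probs[e] = q(E=e).
    Returns P_new(A=a) = sum_e P(A=a | E=e) * q(E=e)."""
    p_e = {e: sum(p for (a, e2), p in joint.items() if e2 == e)
           for e in new_partition_probs}
    return {a: sum(joint[(a, e)] / p_e[e] * q
                   for e, q in new_partition_probs.items())
            for a in {a2 for (a2, _) in joint}}

def virtual_update(joint, likelihood_ratios):
    """likelihood_ratios[e] is proportional to P(virtual evidence | E=e).
    Returns P_new(A=a) proportional to sum_e lambda_e * P(A=a, E=e)."""
    unnorm = {a: sum(likelihood_ratios[e] * p
                     for (a2, e), p in joint.items() if a2 == a)
              for a in {a2 for (a2, _) in joint}}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

# Toy joint credences over a hypothesis A and an evidence partition E.
joint = {("h", "e"): 0.3, ("h", "not_e"): 0.2,
         ("not_h", "e"): 0.1, ("not_h", "not_e"): 0.4}

print(jeffrey_update(joint, {"e": 0.7, "not_e": 0.3}))  # q fixes P_new(E) directly
print(virtual_update(joint, {"e": 4.0, "not_e": 1.0}))  # lambdas fix the odds shift
```

The two parameterizations generally answer different questions: a Jeffrey shift says where the partition's probabilities should end up, while likelihood ratios say how strongly the (virtual) evidence favors each cell of the partition.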


2021, Vol. 130 (1), pp. 1-43
Author(s): John Hawthorne, Maria Lasonen-Aarnio

This article discusses and criticizes the core thesis of a position that has become known as phenomenal conservatism. According to this thesis, its seeming to one that p provides enough justification for a belief in p to be prima facie justified (a thesis the article labels Standard Phenomenal Conservatism). This thesis captures the special kind of epistemic import that seemings are claimed to have. To get clearer on this thesis, the article embeds it, first, in a probabilistic framework in which updating on new evidence happens by Bayesian conditionalization, and second, in a framework in which updating happens by Jeffrey conditionalization. The article spells out problems for both views and then generalizes some of these to nonprobabilistic frameworks. The main theme of the discussion is that the epistemic import of a seeming (or experience) should depend on its content in a variety of ways that phenomenal conservatism is insensitive to.
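For reference, the two update rules the article works with can be stated in their textbook forms as follows (a standard formulation, not a quotation from the article); here {E_i} is the evidence partition and q_i the new credence assigned to E_i:

```latex
% Strict (Bayesian) conditionalization on certain evidence E:
\[
  P_{\mathrm{new}}(A) \;=\; P(A \mid E).
\]
% Jeffrey conditionalization on a partition {E_i} with new credences q_i = P_new(E_i):
\[
  P_{\mathrm{new}}(A) \;=\; \sum_i P(A \mid E_i)\, q_i .
\]
```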


2019, Vol. 50 (2), pp. 174-194
Author(s): Christian J. Feldbacher-Escamilla, Alexander Gebharter

Abstract: Certain hypotheses cannot be directly confirmed for theoretical, practical, or moral reasons. For some of these hypotheses, however, there might be a workaround: confirmation based on analogical reasoning. In this paper we take up Dardashti, Hartmann, Thébault, and Winsberg’s (2019) idea of analyzing confirmation based on analogical inference Bayesian style. We identify three types of confirmation by analogy and show that Dardashti et al.’s approach can cover two of them. We then highlight possible problems with their model as a general approach to analogical inference and argue that these problems can be avoided by supplementing Bayesian update with Jeffrey conditionalization.
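As a toy illustration of the kind of move described above (a hedged sketch with made-up propositions and numbers, not Dardashti et al.'s model or the authors' formal proposal), a Jeffrey shift on an imperfectly supported "shared mechanism" claim X can raise the credence in a hypothesis H about an otherwise inaccessible target system:

```python
# Illustrative sketch only (not the authors' model): analogical evidence about a
# source system shifts credence in a shared claim X without making it certain,
# so the agent updates by Jeffrey conditionalization rather than by Bayes' rule.

# Joint credences over X ("the common mechanism is real") and H ("the target
# system behaves as predicted"); the numbers are made up for illustration.
joint = {("x", "h"): 0.28, ("x", "not_h"): 0.12,
         ("not_x", "h"): 0.12, ("not_x", "not_h"): 0.48}

def marginal_h(j):
    return sum(p for (x, h), p in j.items() if h == "h")

# The analogue experiment raises P(X) from 0.40 to 0.75, but only to 0.75: the
# evidence is uncertain, so we reweight within each cell of the X-partition.
old_px = {"x": 0.40, "not_x": 0.60}
new_px = {"x": 0.75, "not_x": 0.25}

jeffrey_joint = {(x, h): p * new_px[x] / old_px[x] for (x, h), p in joint.items()}

print(round(marginal_h(joint), 3))          # prior credence in H: 0.4
print(round(marginal_h(jeffrey_joint), 3))  # posterior credence in H: ~0.575
```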


2019, Vol. 50 (2), pp. 159-173
Author(s): Lisa Cassell

Abstract: Lange (2000) famously argues that although Jeffrey Conditionalization is non-commutative over evidence, it is not defective in virtue of this feature. Since reversing the order of the evidence in a sequence of updates that do not commute does not reverse the order of the experiences that underwrite these revisions, the conditions required to generate commutativity failure at the level of experience will fail to hold in cases where we get commutativity failure at the level of evidence. If our interest in commutativity is, fundamentally, an interest in the order-invariance of information, an updating sequence that does not violate such a principle at the more fundamental level of experiential information should not be deemed defective. This paper claims that Lange’s argument fails as a general defense of the Jeffrey framework. Lange’s argument entails that the inputs to the Jeffrey framework differ from those of classical Bayesian Conditionalization in a way that makes them defective. Therefore, either the Jeffrey framework is defective in virtue of not commuting its inputs, or else it is defective in virtue of commuting the wrong kind of inputs.
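A minimal numerical illustration of the non-commutativity at issue (constructed for this listing, not drawn from Lange or the paper): two Jeffrey shifts on the same partition yield different final credences depending on their order, because the later shift simply overrides the earlier one's probabilities for that partition.

```python
# Minimal numerical illustration (not from the paper) of the non-commutativity
# discussed above: two Jeffrey updates applied in different orders can leave the
# agent with different final credences.

def jeffrey(joint, new_pe):
    """Jeffrey update of joint credences over (A, E) given new credences for E."""
    old_pe = {e: sum(p for (a, e2), p in joint.items() if e2 == e) for e in new_pe}
    return {(a, e): p * new_pe[e] / old_pe[e] for (a, e), p in joint.items()}

prior = {("a", "e"): 0.2, ("a", "not_e"): 0.3,
         ("not_a", "e"): 0.3, ("not_a", "not_e"): 0.2}

shift1 = {"e": 0.8, "not_e": 0.2}   # first piece of uncertain evidence
shift2 = {"e": 0.4, "not_e": 0.6}   # second piece of uncertain evidence

order_12 = jeffrey(jeffrey(prior, shift1), shift2)
order_21 = jeffrey(jeffrey(prior, shift2), shift1)

# The final credence in E is whatever the *last* shift imposed, so the two
# orders disagree: P(E) ends up at 0.4 in one order and 0.8 in the other.
print(sum(p for (a, e), p in order_12.items() if e == "e"))  # ~0.4
print(sum(p for (a, e), p in order_21.items() if e == "e"))  # ~0.8
```

By contrast, conditionalizing on certain evidence E1 and then on E2 is equivalent to conditionalizing on their conjunction, so order cannot matter for classical Bayesian Conditionalization.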


2019, Vol. 177 (10), pp. 2985-3012
Author(s): Borut Trpin

2018, Vol. 12 (3-4), pp. 351-374
Author(s): Zalán Gyenis

2017, Vol. 10 (4), pp. 719-755
Author(s): Zalán Gyenis, Miklós Rédei

Abstract: We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference proceeds by conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of the Bayesian agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect to the probability measure. If a state can be learned from another in this way, then it is said to be Bayes accessible from the evidence. It is shown that the Bayes accessibility relation is reflexive, antisymmetric, and nontransitive. If every state is Bayes accessible from some other state defined on the same set of random variables, then the set of states is called weakly Bayes connected. It is shown that the set of states is not weakly Bayes connected if the probability space is standard. The set of states is called weakly Bayes connectable if, given any state, the probability space can be extended in such a way that the given state becomes Bayes accessible from some other state in the extended space. It is shown that probability spaces are weakly Bayes connectable. Since conditioning using the theory of conditional expectations includes both Bayes’ rule and Jeffrey conditionalization as special cases, the results presented here substantially generalize some results obtained earlier for Jeffrey conditionalization.
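A schematic reconstruction of the conditional-expectation rule referred to above (standard notation, not necessarily the authors' own): given a prior measure p on (X, S) and a sub-σ-algebra A ⊆ S on which the evidence state q is defined, the learned state is obtained by integrating the p-conditional expectation against q.

```latex
% Reconstruction (not a quotation): p is the prior (background) measure on (X, S),
% A is the sub-sigma-algebra carrying the evidence, q is the evidence state on A.
\[
  p_{\mathrm{new}}(B) \;=\; \int_X \mathcal{E}_p(\chi_B \mid \mathcal{A})\, \mathrm{d}q,
  \qquad B \in \mathcal{S}.
\]
% Special cases:
%  - A generated by a single event E with q(E) = 1:
%      p_new(B) = p(B | E)                      (Bayes' rule)
%  - A generated by a countable partition {E_i} with q(E_i) = q_i:
%      p_new(B) = \sum_i p(B | E_i) q_i         (Jeffrey conditionalization)
```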


2015, Vol. 45 (5-6), pp. 767-797
Author(s): Christopher J. G. Meacham

At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, the rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. Which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
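One way to make the sequential/interval contrast concrete is the following schematic pair of formulations (a reconstruction for illustration, not the paper's own wording):

```latex
% Sequential reading: the rule links each credence function to the immediately
% next one, conditioning on the evidence E_{n+1} received at that step.
\[
  P_{t_{n+1}}(\cdot) \;=\; P_{t_n}(\cdot \mid E_{n+1}).
\]
% Interval reading: the rule links the credence functions at any two times t < t',
% conditioning on the total evidence E_{[t,t']} gathered over the interval.
\[
  P_{t'}(\cdot) \;=\; P_{t}\!\left(\cdot \mid E_{[t,t']}\right).
\]
```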

