Bayesian Confirmation Theory
Recently Published Documents

TOTAL DOCUMENTS: 46 (five years: 12)
H-INDEX: 5 (five years: 1)

2021 · Vol 0 (0) · Author(s): Andrea Giuseppe Ragno

Abstract: Synchronic intertheoretic reductions are an important topic of research in the philosophy of science. Arguably, the model best able to represent the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction, further developed by Schaffner and nowadays known as the generalized version of the Nagel–Schaffner model (GNS). In their article (2010), Dizadji-Bahmani, Frigg, and Hartmann (DFH) specified the two main desiderata of a reduction à la GNS: confirmation and coherence. DFH first, and later Tešić (2017) more rigorously, analyse the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. The purpose of this article is to analyse and compare the degree of coherence between the two theories involved in the GNS before and after the reduction. For this reason, in the first section, I look at the reduction of thermodynamics to statistical mechanics and use it as an example to describe the GNS. In the second section, I introduce three coherence measures, which are then employed in the comparison. Finally, in the last two sections, I compare the degrees of coherence between the reducing and the reduced theory before and after the reduction and use a few numerical examples to understand the relation between coherence and confirmation measures.
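The abstract does not name the three coherence measures it employs, so as a purely illustrative sketch, here are two standard pairwise coherence measures from the Bayesian coherentism literature (Shogenji's ratio measure and Olsson's overlap measure), applied to hypothetical probabilities for a pair of theories:

```python
# Illustrative sketch only: two standard pairwise coherence measures.
# The probabilities below are hypothetical, not taken from the article.

def shogenji(p_a, p_b, p_ab):
    """Shogenji's ratio measure: P(A & B) / (P(A) * P(B)).
    Values > 1 indicate positive coherence."""
    return p_ab / (p_a * p_b)

def olsson(p_a, p_b, p_ab):
    """Olsson's overlap measure: P(A & B) / P(A or B), in [0, 1]."""
    return p_ab / (p_a + p_b - p_ab)

# Hypothetical joint probabilities for two theories A and B
p_a, p_b, p_ab = 0.6, 0.5, 0.4
print(shogenji(p_a, p_b, p_ab))  # > 1: the theories cohere positively
print(olsson(p_a, p_b, p_ab))
```

Comparing such measures before and after a reduction, as the article does, amounts to recomputing them once the bridge laws link the two theories' probability assignments.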


2021 · Author(s): Mathias Sablé-Meyer, Janek Guerrini, Salvador Mascarenhas

We show that probabilistic decision-making behavior characteristic of reasoning by representativeness or typicality arises in minimalistic settings lacking many of the features previously thought to be necessary conditions for the phenomenon. Specifically, we develop a version of a classical experiment by Kahneman and Tversky (1973) on base-rate neglect, where participants have full access to the probabilistic distribution, conveyed entirely visually and without reliance on familiar stereotypes, rich descriptions, or individuating information. We argue that the notion of evidential support as studied in (Bayesian) confirmation theory offers a good account of our experimental findings, as has been proposed for related data points from the representativeness literature. In a nutshell, when faced with competing alternatives to choose from, humans are sometimes less interested in picking the option with the highest probability of being true (posterior probability), and instead choose the option best supported by available evidence. We point out that this theoretical avenue is descriptively powerful, but has an as-yet unclear explanatory dimension. Building on approaches to reasoning from linguistic semantics, we propose that the chief trigger of confirmation-theoretic mechanisms in deliberate reasoning is a linguistically-motivated tendency to interpret certain experimental setups as intrinsically contrastive, in a way best cashed out by modern linguistic semantic theories of questions. These questions generate pragmatic pressures for interpreting surrounding information as having been meant to help answer the question, which will naturally give rise to confirmation-theoretic effects, very plausibly as a byproduct of iterated Bayesian update as proposed by modern Bayesian theories of relevance-based reasoning in pragmatics. 
Our experiment provides preliminary but tantalizing evidence in favor of this hypothesis, as participants displayed significantly more confirmation-theoretic behavior in a condition that highlighted the question-like, contrastive nature of the task.
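The contrast between posterior probability and evidential support can be made concrete with a small numerical sketch. The numbers below are hypothetical, and support is computed with the simple difference measure d(H, E) = P(H|E) − P(H); the abstract does not commit to any particular confirmation measure:

```python
# Hypothetical illustration: evidence can *support* the less probable option.
# Support is measured here as d(H, E) = P(H|E) - P(H) (one of many measures).

def posterior(prior, lik, prior_alt, lik_alt):
    """Bayes' theorem for two exhaustive, exclusive hypotheses."""
    return prior * lik / (prior * lik + prior_alt * lik_alt)

p_h1, p_h2 = 0.9, 0.1      # base rates of the two options
lik1, lik2 = 0.5, 0.9      # P(E | H1), P(E | H2)

post1 = posterior(p_h1, lik1, p_h2, lik2)
post2 = posterior(p_h2, lik2, p_h1, lik1)

print(post1 > post2)                    # True: H1 remains more probable
print(post2 - p_h2 > post1 - p_h1)      # True: but E supports H2, not H1
```

A participant who chooses H2 here is tracking evidential support rather than posterior probability, which is exactly the pattern the experiment probes.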


2021 · pp. 449-464 · Author(s): Katya Tentori

This chapter briefly summarizes some of the main results obtained from more than three decades of studies on the conjunction fallacy. It shows that this striking and widely discussed reasoning error is a robust phenomenon that can systematically affect the probabilistic inferences of both laypeople and experts, and it introduces an explanation based on the notion of evidential impact in terms of contemporary Bayesian confirmation theory. Finally, the chapter tackles the open issue of the greater accuracy and reliability of impact assessments over posterior probability judgments and outlines how further research on the role of evidential reasoning in the acceptability of explanations might contribute to the development of effective human-like computing.
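The evidential-impact explanation of the conjunction fallacy can be illustrated with hypothetical numbers: a conjunction A & B is never more probable than A alone, yet evidence E can raise the conjunction's probability far more, so its impact (here, simply P(H|E) − P(H)) is greater:

```python
# Hypothetical illustration of evidential impact vs. posterior probability.
# Coherence constraints hold: P(A&B) <= P(A) and P(A&B|E) <= P(A|E).

prior = {"A": 0.40, "A&B": 0.10}    # before the evidence
post  = {"A": 0.42, "A&B": 0.30}    # after conditioning on E

impact = {h: post[h] - prior[h] for h in prior}

print(post["A&B"] < post["A"])      # True: the conjunction stays less probable
print(impact["A&B"] > impact["A"])  # True: but E supports it far more strongly
```

Ranking the conjunction above its conjunct is a fallacy about posteriors, but a sensible judgment about impact, which is the chapter's core diagnostic point.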


Episteme · 2021 · pp. 1-26 · Author(s): Will Fleisher

Abstract: Bayesian confirmation theory is our best formal framework for describing inductive reasoning. The problem of old evidence is a particularly difficult one for confirmation theory, because it suggests that this framework fails to account for central and important cases of inductive reasoning and scientific inference. I show that we can appeal to the fragmentation of doxastic states to solve this problem for confirmation theory. This fragmentation solution is independently well-motivated because of the success of fragmentation in solving other problems. I also argue that the fragmentation solution is preferable to other solutions to the problem of old evidence. These other solutions are already committed to something like fragmentation, but suffer from difficulties due to their additional commitments. If these arguments are successful, Bayesian confirmation theory is saved from the problem of old evidence, and the argument for fragmentation is bolstered by its ability to solve yet another problem.


Diametros · 2020 · pp. 1-24 · Author(s): Zoe Hitzig, Jacob Stegenga

We provide a novel articulation of the epistemic peril of p-hacking using three resources from philosophy: predictivism, Bayesian confirmation theory, and model selection theory. We defend a nuanced position on p-hacking: p-hacking is sometimes, but not always, epistemically pernicious. Our argument requires a novel understanding of Bayesianism, since a standard criticism of Bayesian confirmation theory is that it cannot represent the influence of biased methods. We then turn to pre-analysis plans, a methodological device used to mitigate p-hacking. Some say that pre-analysis plans are epistemically meritorious while others deny this, and in practice pre-analysis plans are often violated. We resolve this debate with a modest defence of pre-analysis plans. Further, we argue that pre-analysis plans can be epistemically relevant even if the plan is not strictly followed—and suggest that allowing for flexible pre-analysis plans may be the best available policy option.


2020 · Vol 1 (3) · pp. 1025-1040 · Author(s): Henry Small

In the 1970s, quantitative science studies were being pursued by sociologists, historians, and information scientists. Philosophers were part of this discussion, but their role would diminish as sociology of science asserted itself. An antiscience bias within the sociology of science became evident in the late 1970s, which split the science studies community, notably causing the “citationists” to go their own way. The main point of contention was whether science was a rational, evidence-based activity. To reverse the antiscience trend, it will be necessary to revive philosophical models of science, such as Bayesian confirmation theory or explanatory coherence models, where theory-experiment agreement plays a decisive role. A case study from the history of science is used to illustrate these models, and bibliometric and text-based methods are proposed as a source of data to test these models.


2020 · Author(s): Mathias Sablé-Meyer, Salvador Mascarenhas

We provide a new link between deductive and probabilistic reasoning fallacies. Illusory inferences from disjunction are a broad class of deductive fallacies traditionally explained by recourse to a matching procedure that looks for content overlap between premises. In two behavioral experiments, we show that this phenomenon is instead sensitive to real-world causal dependencies and not to exact content overlap. A group of participants rated the strength of the causal dependence between pairs of sentences. This measure is a near perfect predictor of fallacious reasoning by an independent group of participants in illusory inference tasks with the same materials. In light of these results, we argue that all extant accounts of these deductive fallacies require non-trivial adjustments. Crucially, these novel indirect illusory inferences from disjunction bear a structural similarity to seemingly unrelated probabilistic reasoning problems, in particular the conjunction fallacy from the heuristics and biases literature. This structural connection was entirely obscure in previous work on these deductive problems, due to the theoretical and empirical focus on content overlap. We argue that this structural parallelism provides arguments against the need for rich descriptions and individuating information in the conjunction fallacy, and we outline a unified theory of deductive illusory inferences from disjunction and the conjunction fallacy, in terms of Bayesian confirmation theory.


Author(s): Jan Sprenger, Stephan Hartmann

In science, phenomena are often unexplained by the available scientific theories. At some point, it may be discovered that a novel theory accounts for such a phenomenon, and this seems to confirm the theory because a persistent anomaly is resolved. However, Bayesian confirmation theory, primarily a theory for updating beliefs in the light of new information, struggles to describe confirmation by such cases of "old evidence". We discuss the two main varieties of the Problem of Old Evidence (POE), the static and the dynamic, criticize existing solutions, and develop two novel Bayesian models. They show how the discovery of explanatory and deductive relationships, or of the absence of alternative explanations for the phenomenon in question, can confirm a theory. Finally, we assess the overall prospects of Bayesian Confirmation Theory in the light of the POE.
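The static POE can be stated in two lines of arithmetic: if the evidence E is already certain, conditionalization cannot move P(H), so any incremental confirmation measure returns zero. A minimal sketch (the choice of the difference measure is ours, for illustration):

```python
# Minimal illustration of the static Problem of Old Evidence.
# Confirmation is measured as the difference d(H, E) = P(H|E) - P(H).

def confirmation(p_h, p_e_given_h, p_e):
    """Difference measure of confirmation via Bayes' theorem."""
    p_h_given_e = p_h * p_e_given_h / p_e
    return p_h_given_e - p_h

# New evidence: P(E) < 1, so learning E can raise P(H).
print(confirmation(0.3, 0.9, 0.5))  # positive: E confirms H

# Old evidence: P(E) = 1 forces P(E|H) = 1, so confirmation vanishes.
print(confirmation(0.3, 1.0, 1.0))  # 0.0
```

The models sketched in the abstract sidestep this by letting the agent learn something other than E itself, such as the deductive relationship between theory and evidence.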


Author(s): Jan Sprenger, Stephan Hartmann

The question "When is C a cause of E?" is well-studied in philosophy—much more than the equally important issue of quantifying the causal strength between C and E. In this chapter, we transfer methods from Bayesian Confirmation Theory to the problem of explicating causal strength. We develop axiomatic foundations for a probabilistic theory of causal strength as difference-making and proceed in three steps: First, we motivate causal Bayesian networks as an adequate framework for defining and comparing measures of causal strength. Second, we demonstrate how specific causal strength measures can be derived from a set of plausible adequacy conditions (method of representation theorems). Third, we use these results to argue for a specific measure of causal strength: the difference that interventions on the cause make for the probability of the effect. An application to outcome measures in medicine and a discussion of possible objections conclude the chapter.
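The favoured measure, read as "the difference that interventions on the cause make for the probability of the effect", can be sketched as η(C, E) = P(E | do(C)) − P(E | do(¬C)) in a toy causal Bayesian network. The network and all numbers below are hypothetical, chosen only to show how intervening (holding a confounder B at its own distribution while setting C by fiat) differs from merely conditioning:

```python
# Hedged sketch: causal strength as the difference an intervention makes,
# eta(C, E) = P(E | do(C)) - P(E | do(not-C)), in a toy network B -> C, B -> E,
# C -> E. All probabilities are hypothetical.

P_B = 0.4                                    # P(B): the confounder
P_E = {                                      # P(E | C, B)
    (True, True): 0.9,  (True, False): 0.7,
    (False, True): 0.5, (False, False): 0.2,
}

def p_e_do(c):
    """P(E | do(C=c)): average over B's own distribution, set C by fiat."""
    return P_B * P_E[(c, True)] + (1 - P_B) * P_E[(c, False)]

eta = p_e_do(True) - p_e_do(False)
print(round(eta, 2))  # 0.46
```

In medicine this quantity is a risk difference under intervention, which is why the chapter's application to outcome measures is a natural fit.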


Author(s): Jan Sprenger, Stephan Hartmann

Confirmation of scientific theories by empirical evidence is an important element of scientific reasoning and a central topic in philosophy of science. Bayesian Confirmation Theory—the analysis of confirmation in terms of degree of belief—is the most popular model of inductive reasoning. It comes in two varieties: confirmation as firmness (of belief), and confirmation as increase in firmness. We show why increase in firmness is a particularly fruitful explication of degree of confirmation, and how it resolves longstanding paradoxes of inductive inference (e.g., the paradox of the ravens, the tacking paradoxes, and the grue paradox). Finally, we give an axiomatic characterization of various confirmation measures and discuss the question of whether there is a single adequate measure of confirmation or whether a pluralist position is more promising.
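The two explications the chapter distinguishes can come apart, which a tiny hypothetical example makes vivid: evidence can leave H firmly believed while actually lowering its probability. The firmness threshold of 0.5 below is our illustrative assumption:

```python
# Sketch of the two explications of confirmation (hypothetical numbers).
# Firmness: E confirms H iff P(H|E) is high (threshold assumed here: 0.5).
# Increase in firmness: E confirms H iff P(H|E) > P(H).

def firm(p_h_given_e, threshold=0.5):
    return p_h_given_e > threshold

def increase(p_h_given_e, p_h):
    return p_h_given_e > p_h

p_h, p_h_given_e = 0.9, 0.8
print(firm(p_h_given_e))           # True: H is still firmly believed given E
print(increase(p_h_given_e, p_h))  # False: yet E lowered P(H) from 0.9 to 0.8
```

Cases like this are why the chapter argues that increase in firmness is the more fruitful explication: intuitively, evidence that lowers a hypothesis's probability should not count as confirming it.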

