Bayesian Confirmation
Recently Published Documents

TOTAL DOCUMENTS: 81 (five years: 15)
H-INDEX: 10 (five years: 1)
2021, Vol 0 (0)
Author(s): Andrea Giuseppe Ragno

Abstract: Synchronic intertheoretic reductions are an important field of research in science. Arguably, the model best able to represent the main relations occurring in this kind of scientific reduction is the Nagelian account of reduction, further developed by Schaffner and nowadays known as the generalized version of the Nagel–Schaffner model (GNS). In their 2010 article, Dizadji-Bahmani, Frigg, and Hartmann (DFH) specified the two main desiderata of a reduction à la GNS: confirmation and coherence. DFH first, and Tešić (2017) later and more rigorously, analysed the confirmatory relation between the reducing and the reduced theory in terms of Bayesian confirmation theory. The purpose of this article is to analyse and compare the degree of coherence between the two theories involved in the GNS before and after the reduction. To this end, in the first section I look at the reduction of thermodynamics to statistical mechanics and use it as an example to describe the GNS. In the second section, I introduce three coherence measures, which are then employed in the comparison. Finally, in the last two sections, I compare the degrees of coherence between the reducing and the reduced theory before and after the reduction and use a few numerical examples to understand the relation between coherence and confirmation measures.
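The abstract does not specify which confirmation and coherence measures are used, but a minimal sketch can make the two notions concrete. The following toy example (my own numbers, not the article's) computes the standard difference measure of confirmation, d(H, E) = P(H|E) − P(H), and one well-known coherence measure, the Shogenji measure C(A, B) = P(A ∧ B) / (P(A) · P(B)):

```python
# Toy illustration of a confirmation measure and a coherence measure.
# All probabilities below are invented for illustration only.

def confirmation_d(p_h, p_h_given_e):
    """Difference measure d(H, E) = P(H|E) - P(H); positive iff E confirms H."""
    return p_h_given_e - p_h

def shogenji_coherence(p_a, p_b, p_ab):
    """Shogenji measure C(A, B) = P(A & B) / (P(A) * P(B)).

    Values above 1 mean A and B 'hang together' better than
    they would under probabilistic independence.
    """
    return p_ab / (p_a * p_b)

print(confirmation_d(0.4, 0.7))           # ≈ 0.3  -> evidence confirms H
print(shogenji_coherence(0.5, 0.4, 0.3))  # ≈ 1.5  -> positively coherent
```

Comparing such coherence values for the reducing and reduced theory before and after a reduction is, in spirit, the kind of numerical comparison the abstract describes.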


2021, pp. 449-464
Author(s): Katya Tentori

This chapter briefly summarizes some of the main results obtained from more than three decades of studies on the conjunction fallacy. It shows that this striking and widely discussed reasoning error is a robust phenomenon that can systematically affect the probabilistic inferences of both laypeople and experts, and it introduces an explanation based on the notion of evidential impact in terms of contemporary Bayesian confirmation theory. Finally, the chapter tackles the open issue of the greater accuracy and reliability of impact assessments over posterior probability judgments, and outlines how further research on the role of evidential reasoning in the acceptability of explanations might contribute to the development of effective human-like computing.
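The evidential-impact explanation can be illustrated with invented numbers (mine, not Tentori's): a conjunction H∧X can never be more probable than its conjunct H, yet the evidence E can still have a greater *impact* on the conjunction, which is what the account takes people to be tracking:

```python
# Illustrative probabilities for a Linda-style case, where H&X ("feminist
# bank teller") entails H ("bank teller"). Numbers are invented.

def impact(prior, posterior):
    # difference measure of confirmation: d(H, E) = P(H|E) - P(H)
    return posterior - prior

p_h,  p_h_e  = 0.20, 0.21   # P(H),   P(H | E)
p_hx, p_hx_e = 0.01, 0.10   # P(H&X), P(H&X | E)

# The probability calculus is respected: the conjunction is less probable...
assert p_hx_e <= p_h_e and p_hx <= p_h

# ...but the evidence confirms the conjunction more strongly.
assert impact(p_hx, p_hx_e) > impact(p_h, p_h_e)
```

On this reading, ranking H∧X above H reflects a (correct) impact assessment misapplied to a posterior-probability question.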


PLoS ONE, 2021, Vol 16 (3), pp. e0248261
Author(s): Daniella Vos, Richard Stafford, Emma L. Jenkins, Andrew Garrard

The interpretation of archaeological features often requires a combined methodological approach in order to make the most of the material record, particularly at sites where this record may be limited. In practice, this requires consulting different sources of information in order to cross-validate findings and combat issues of ambiguity and equifinality. However, a multiproxy approach often generates incompatible data, and might therefore still yield ambiguous results. This paper explores the potential of a simple digital framework to increase the explanatory power of multiproxy data by enabling the incorporation of incompatible, ambiguous datasets into a single model. To achieve this, Bayesian confirmation was used in combination with decision trees. The results of phytolith and geochemical analyses carried out on soil samples from ephemeral sites in Jordan are used here as a case study. Combining the two datasets in a single model enabled us to refine the initial interpretation of the use of space at the archaeological sites by providing an alternative identification for certain activity areas. The potential applications of this model are much broader, as it can also help researchers in other domains reach an integrated interpretation of analysis results by combining different datasets.
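The paper's actual model combines the proxies via decision trees, but the underlying updating step can be sketched in a few lines. The numbers and the "food-processing area" hypothesis below are invented, and the two proxies are treated as conditionally independent for simplicity:

```python
# Minimal sketch of sequentially updating a hypothesis about an activity
# area with two lines of evidence. All likelihoods are invented.

def update(prior, likelihood_h, likelihood_not_h):
    """One Bayesian update: returns P(H|E) given P(H), P(E|H), P(E|~H)."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

p = 0.5                  # neutral prior for "food-processing area"
p = update(p, 0.8, 0.3)  # phytolith evidence observed
p = update(p, 0.6, 0.4)  # geochemical evidence observed
print(round(p, 3))       # ≈ 0.8
```

The point of such a scheme is that each dataset contributes through its own likelihoods, so otherwise incompatible proxies can bear on the same hypothesis.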


Episteme, 2021, pp. 1-26
Author(s): Will Fleisher

Abstract: Bayesian confirmation theory is our best formal framework for describing inductive reasoning. The problem of old evidence is a particularly difficult one for confirmation theory, because it suggests that this framework fails to account for central and important cases of inductive reasoning and scientific inference. I show that we can appeal to the fragmentation of doxastic states to solve this problem for confirmation theory. This fragmentation solution is independently well-motivated because of the success of fragmentation in solving other problems. I also argue that the fragmentation solution is preferable to other solutions to the problem of old evidence. These other solutions are already committed to something like fragmentation, but suffer from difficulties due to their additional commitments. If these arguments are successful, Bayesian confirmation theory is saved from the problem of old evidence, and the argument for fragmentation is bolstered by its ability to solve yet another problem.


Diametros, 2020, pp. 1-24
Author(s): Zoe Hitzig, Jacob Stegenga

We provide a novel articulation of the epistemic peril of p-hacking using three resources from philosophy: predictivism, Bayesian confirmation theory, and model selection theory. We defend a nuanced position on p-hacking: p-hacking is sometimes, but not always, epistemically pernicious. Our argument requires a novel understanding of Bayesianism, since a standard criticism of Bayesian confirmation theory is that it cannot represent the influence of biased methods. We then turn to pre-analysis plans, a methodological device used to mitigate p-hacking. Some say that pre-analysis plans are epistemically meritorious while others deny this, and in practice pre-analysis plans are often violated. We resolve this debate with a modest defence of pre-analysis plans. Further, we argue that pre-analysis plans can be epistemically relevant even if the plan is not strictly followed—and suggest that allowing for flexible pre-analysis plans may be the best available policy option.


2020, Vol 1 (3), pp. 1025-1040
Author(s): Henry Small

In the 1970s, quantitative science studies were being pursued by sociologists, historians, and information scientists. Philosophers were part of this discussion, but their role would diminish as sociology of science asserted itself. An antiscience bias within the sociology of science became evident in the late 1970s, which split the science studies community, notably causing the “citationists” to go their own way. The main point of contention was whether science was a rational, evidence-based activity. To reverse the antiscience trend, it will be necessary to revive philosophical models of science, such as Bayesian confirmation theory or explanatory coherence models, where theory-experiment agreement plays a decisive role. A case study from the history of science is used to illustrate these models, and bibliometric and text-based methods are proposed as a source of data to test these models.


Author(s): Jan Sprenger, Stephan Hartmann

In science, phenomena are often unexplained by the available scientific theories. At some point, it may be discovered that a novel theory accounts for such a phenomenon—and this seems to confirm the theory, because a persistent anomaly is resolved. However, Bayesian confirmation theory—primarily a theory for updating beliefs in the light of new information—struggles to describe confirmation by such cases of "old evidence". We discuss the two main varieties of the Problem of Old Evidence (POE), the static and the dynamic POE, criticize existing solutions, and develop two novel Bayesian models. They show how the discovery of explanatory and deductive relationships, or of the absence of alternative explanations for the phenomenon in question, can confirm a theory. Finally, we assess the overall prospects of Bayesian Confirmation Theory in the light of the POE.


Author(s): Jan Sprenger, Stephan Hartmann

The question "When is C a cause of E?" is well-studied in philosophy—much more so than the equally important issue of quantifying the causal strength between C and E. In this chapter, we transfer methods from Bayesian Confirmation Theory to the problem of explicating causal strength. We develop axiomatic foundations for a probabilistic theory of causal strength as difference-making and proceed in three steps: First, we motivate causal Bayesian networks as an adequate framework for defining and comparing measures of causal strength. Second, we demonstrate how specific causal strength measures can be derived from a set of plausible adequacy conditions (the method of representation theorems). Third, we use these results to argue for a specific measure of causal strength: the difference that interventions on the cause make to the probability of the effect. An application to outcome measures in medicine and a discussion of possible objections conclude the chapter.
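The preferred measure described in the abstract—the difference interventions on C make to the probability of E—can be computed in a tiny causal Bayesian network. The network below (U → C, U → E, C → E, with invented probabilities, not the chapter's example) shows the key point: under an intervention do(C = c), the edge from the common cause U into C is cut, so we average P(E | C, U) over the marginal of U rather than over P(U | C):

```python
# Causal strength as difference-making in a toy causal Bayesian network.
# Structure: U -> C, U -> E, C -> E. All probabilities are invented.

p_u = 0.3                       # P(U = 1)
p_c_given_u = {1: 0.9, 0: 0.2}  # P(C=1 | U); irrelevant once we intervene on C
p_e = {                         # P(E=1 | C, U)
    (1, 1): 0.95, (1, 0): 0.70,
    (0, 1): 0.50, (0, 0): 0.10,
}

def p_e_do(c):
    """P(E=1 | do(C=c)): the U -> C edge is cut, so average over P(U)."""
    return p_u * p_e[(c, 1)] + (1 - p_u) * p_e[(c, 0)]

causal_strength = p_e_do(1) - p_e_do(0)
print(round(causal_strength, 3))  # ≈ 0.555
```

Note the formal parallel to the difference measure of confirmation, P(H|E) − P(H): here conditioning is replaced by intervening, which is what makes the measure causal rather than merely evidential.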

