Bayesian Philosophy of Science

Published By Oxford University Press

ISBN: 9780199672110, 9780191881671

Author(s): Jan Sprenger and Stephan Hartmann

In this final chapter, we look back on the results of the book and the methods we used. In particular, we discuss whether Bayesian philosophy of science can and should be labeled a proper scientific philosophy in virtue of its combination of formal, conceptual, and empirical methods. Finally, we explore the limitations of the book and sketch projects for future research (e.g., integrating our results with the social epistemology of science and the philosophy of statistical inference).


Author(s): Jan Sprenger and Stephan Hartmann

Convincing scientific theories are often hard to find, especially when empirical evidence is scarce (e.g., in particle physics). Once scientists have found a theory, they often believe that there are not many distinct alternatives to it. Is this belief justified? We model how the failure to find a feasible alternative can increase the degree of belief in a scientific theory—in other words, we establish the validity of the No Alternatives Argument and the possibility of non-empirical theory confirmation from a Bayesian point of view. Then we evaluate the scope and limits of this argument (e.g., by calculating the degree of confirmation it provides) and relate it to other argument forms such as Inference to the Best Explanation (IBE) and “There is No Alternative” (TINA).
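
The argument's structure can be illustrated with a small Python sketch. All numbers, the prior over alternatives, and the particular likelihoods below are hypothetical; only the qualitative assumptions (a failed search for alternatives is likelier when few exist, and the candidate theory is likelier to be true when few exist) follow the chapter's model:

```python
# Toy Bayesian model of the No Alternatives Argument (numbers hypothetical).

# Prior over Y = the number of adequate alternatives to theory T (0..4, uniform).
prior_Y = {k: 1 / 5 for k in range(5)}

# Assumption: the more alternatives exist, the likelier one would have been
# found, so P(F | Y = k) decreases in k (F = "no alternative was found").
p_fail = {k: 0.8 ** k for k in range(5)}

# Assumption: T is one of k + 1 equally probable candidates, so P(T | Y=k) = 1/(k+1).
p_T = {k: 1 / (k + 1) for k in range(5)}

prior_T = sum(prior_Y[k] * p_T[k] for k in range(5))
norm = sum(prior_Y[k] * p_fail[k] for k in range(5))               # P(F)
post_T = sum(prior_Y[k] * p_fail[k] * p_T[k] for k in range(5)) / norm
# Learning F shifts probability mass toward small Y and thereby raises
# the probability of T: confirmation without new empirical evidence.
```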


Author(s): Jan Sprenger and Stephan Hartmann

This chapter sets the stage for what follows, introducing the reader to the philosophical principles and the mathematical formalism behind Bayesian inference and its scientific applications. We explain and motivate the representation of graded epistemic attitudes (“degrees of belief”) by means of specific mathematical structures: probabilities. Then we show how these attitudes are supposed to change upon learning new evidence (“Bayesian Conditionalization”), and how all this relates to theory evaluation, action, and decision-making. After sketching the different varieties of Bayesian inference, we present Causal Bayesian Networks as an intuitive graphical tool for carrying out Bayesian inference, and we give an overview of the contents of the book.
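
The core update rule can be sketched in a few lines of Python. The hypotheses and numbers are hypothetical; the function simply applies Bayes' theorem over a discrete set of hypotheses:

```python
# Minimal sketch of Bayesian Conditionalization (numbers hypothetical).

def conditionalize(prior, likelihood):
    """Update a discrete prior {hypothesis: P(h)} on evidence E,
    where likelihood[h] = P(E | h). Returns {hypothesis: P(h | E)}."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    p_e = sum(joint.values())            # P(E) by the law of total probability
    return {h: joint[h] / p_e for h in joint}

prior = {"H": 0.3, "not-H": 0.7}
likelihood = {"H": 0.9, "not-H": 0.2}    # E is strongly expected under H
posterior = conditionalize(prior, likelihood)
# P(H | E) = 0.27 / (0.27 + 0.14), roughly 0.66: E confirms H.
```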


Author(s): Jan Sprenger and Stephan Hartmann

How does Bayesian inference handle the highly idealized nature of many (statistical) models in science? The standard interpretation of probability as degree of belief in the truth of a model does not seem to apply in such cases since all candidate models are most probably wrong. Similarly, it is not clear how chance-credence coordination works for the probabilities generated by a statistical model. We solve these problems by developing a suppositional account of degree of belief where probabilities in scientific modeling are decoupled from our actual (unconditional) degrees of belief. This explains the normative pull of chance-credence coordination in Bayesian inference, uncovers the essentially counterfactual nature of reasoning with Bayesian models, and squares well with our intuitive judgment that statistical models provide “objective” probabilities.
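
Chance-credence coordination, in the spirit of Lewis's Principal Principle, can be sketched numerically: rational credence in an outcome is the credence-weighted average of the chances the candidate statistical models assign to it. The candidate models and numbers below are hypothetical:

```python
# Chance-credence coordination sketch (numbers hypothetical).

# Credences over three candidate models of a coin's bias toward heads:
cred_model = {0.3: 0.2, 0.5: 0.5, 0.7: 0.3}   # {chance of heads: credence in model}

# Coordinated credence in "heads" = sum over models of
# credence(model) * chance-of-heads-under-that-model:
cred_heads = sum(cred * chance for chance, cred in cred_model.items())
# 0.2*0.3 + 0.5*0.5 + 0.3*0.7 = 0.52
```

The suppositional reading in the chapter explains why this expectation has normative force even though each individual model is, strictly speaking, an idealization.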


Author(s): Jan Sprenger and Stephan Hartmann

According to Popper and other influential philosophers and scientists, scientific knowledge grows by repeatedly testing our best hypotheses. However, the interpretation of non-significant results—those that do not lead to a “rejection” of the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis or provide a reason to accept it? In this chapter, we prove two impossibility results for measures of corroboration that follow Popper’s criterion of measuring both predictive success and the testability of a hypothesis. Then we provide an axiomatic characterization of a more promising and scientifically useful concept of corroboration and discuss implications for the practice of hypothesis testing and the concept of statistical significance.
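
How a non-significant result can nonetheless speak for the tested hypothesis can be made quantitative with a simple Bayes-factor calculation. This is an illustration only, not the chapter's axiomatized corroboration measure; the scenario (a fair-coin null against a uniform alternative) is hypothetical:

```python
from math import comb

# A non-significant result supporting the null, quantitatively:
# H0: p = 0.5 versus H1: p uniform on [0, 1], for 10 heads in 20 tosses.

n, k = 20, 10
p_data_H0 = comb(n, k) * 0.5 ** n
# Under a uniform prior on p, the marginal likelihood of any count k is
# 1/(n+1) (the beta-binomial with a flat prior is uniform over outcomes).
p_data_H1 = 1 / (n + 1)
bayes_factor = p_data_H0 / p_data_H1
# bayes_factor is about 3.7: the data, though perfectly "non-significant",
# modestly favor the tested hypothesis over the composite alternative.
```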


Author(s): Jan Sprenger and Stephan Hartmann

This chapter motivates why, and under which circumstances, the explanatory power of a scientific hypothesis with respect to a body of evidence can be explicated by means of statistical relevance. This account is traced back to its historical roots in Peirce and Hempel and defended against its critics (e.g., by contrasting statistical relevance with purely causal accounts of explanation). Then we derive various Bayesian explications of explanatory power using the method of representation theorems and compare their properties from a normative point of view. Finally, we evaluate how such measures of explanatory power can ground a theory of Inference to the Best Explanation (IBE).
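
One statistical-relevance measure from this literature, Schupbach and Sprenger's E(E, H) = (P(H|E) − P(H|¬E)) / (P(H|E) + P(H|¬E)), can be computed directly from a prior and two likelihoods. The input numbers in the example are hypothetical:

```python
# Schupbach-Sprenger measure of explanatory power (example numbers hypothetical).

def power(p_h, p_e_given_h, p_e_given_noth):
    """E(E, H) = (P(H|E) - P(H|~E)) / (P(H|E) + P(H|~E)); ranges in [-1, 1]."""
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_noth
    p_h_e = p_h * p_e_given_h / p_e                     # P(H | E)
    p_h_note = p_h * (1 - p_e_given_h) / (1 - p_e)      # P(H | ~E)
    return (p_h_e - p_h_note) / (p_h_e + p_h_note)

# A hypothesis that makes the evidence much more expected scores high:
print(round(power(0.5, 0.9, 0.1), 6))   # → 0.8
# Statistical irrelevance yields zero explanatory power:
print(round(power(0.5, 0.5, 0.5), 6))   # → 0.0
```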


Author(s): Jan Sprenger and Stephan Hartmann

In science, phenomena often remain unexplained by the available scientific theories. At some point, it may be discovered that a novel theory accounts for such a phenomenon—and this seems to confirm the theory because a persistent anomaly is resolved. However, Bayesian confirmation theory—primarily a theory for updating beliefs in the light of new information—struggles to describe confirmation by such cases of “old evidence”. We discuss the two main varieties of the Problem of Old Evidence (POE), the static and the dynamic POE, criticize existing solutions, and develop two novel Bayesian models. They show how the discovery of explanatory and deductive relationships, or of the absence of alternative explanations for the phenomenon in question, can confirm a theory. Finally, we assess the overall prospects of Bayesian Confirmation Theory in the light of the POE.
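
The dynamic variant can be illustrated with a Garber-style toy model in Python: the agent conditionalizes not on the old evidence E itself but on the newly discovered proposition X = "T accounts for E". The joint distribution below is hypothetical, chosen only so that X is more expected if T is true:

```python
# Garber-style toy model of the dynamic Problem of Old Evidence
# (joint distribution hypothetical).

p = {  # joint probabilities over (T true?, X true?), summing to 1
    ("T", "X"): 0.32, ("T", "~X"): 0.08,
    ("~T", "X"): 0.18, ("~T", "~X"): 0.42,
}

prior_T = p[("T", "X")] + p[("T", "~X")]                    # P(T) = 0.40
post_T = p[("T", "X")] / (p[("T", "X")] + p[("~T", "X")])   # P(T | X) = 0.64
# Conditionalizing on the *discovery* X raises P(T) even though E itself
# was already known: the anomaly's resolution does the confirming.
```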


Author(s): Jan Sprenger and Stephan Hartmann

Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: first, by generalizing Bayesian Conditionalization to minimizing an appropriate divergence between the prior and posterior probability distributions; second, by representing the relevant causal relations and the implied conditional independence relations in a Bayesian network that constrains both prior and posterior. We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem) and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.
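
The divergence-minimization strategy can be sketched on a Judy-Benjamin-style example. The prior and the learned conditional value below are hypothetical; the code finds, among all distributions satisfying the conditional constraint, the one closest to the prior in Kullback-Leibler divergence (by a simple ternary search rather than a closed form):

```python
from math import log

# Divergence-minimization update under a conditional constraint
# (Judy Benjamin setup; numbers hypothetical).
# Three worlds: A&B, A&~B, ~A, with prior (0.2, 0.2, 0.6).
# The agent learns the constraint P(B | A) = 0.75.

prior = (0.2, 0.2, 0.6)
q = 0.75  # learned value of P(B | A)

def kl(alpha):
    """KL divergence from the prior of Q = (alpha*q, alpha*(1-q), 1-alpha),
    which parameterizes exactly the distributions with Q(B|A) = q."""
    Q = (alpha * q, alpha * (1 - q), 1 - alpha)
    return sum(qi * log(qi / pi) for qi, pi in zip(Q, prior))

# KL is convex in alpha, so ternary search finds the minimizer.
lo, hi = 1e-9, 1 - 1e-9
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if kl(m1) < kl(m2):
        hi = m2
    else:
        lo = m1
alpha = (lo + hi) / 2
posterior = (alpha * q, alpha * (1 - q), 1 - alpha)
# P(~A) moves from 0.60 to about 0.63: minimizing divergence under a
# purely conditional constraint shifts an unconditioned proposition too,
# which is exactly what makes the Judy Benjamin problem puzzling.
```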


Author(s): Jan Sprenger and Stephan Hartmann

Subjective Bayesianism is often criticized for a lack of objectivity: (i) it opens the door to the influence of values and biases, (ii) evidence judgments can vary substantially between scientists, and (iii) it is not suited for informing policy decisions. We rebut these concerns by bridging the debates on scientific objectivity and Bayesian inference in statistics. First, we show that the above concerns arise equally for frequentist statistical inference. Second, we argue that the involved senses of objectivity are epistemically inert. Third, we show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity—most notably by increasing the transparency of scientific reasoning.


Author(s): Jan Sprenger and Stephan Hartmann

Is simplicity a virtue of a good scientific theory, and are simpler theories more likely to be true or predictively successful? If so, how much should simplicity count vis-à-vis predictive accuracy? We address these questions using Bayesian inference, focusing on the context of statistical model selection and an interpretation of simplicity via the degrees of freedom of a model. We rebut attempts to prove the epistemic value of simplicity by appeal to its particular role in Bayesian model selection strategies (e.g., the BIC or the MML). Instead, we show that Bayesian inference in the context of model selection is usually conducted in a philosophically eclectic, instrumental fashion that is more attuned to practical applications than to philosophical foundations. Thus, these techniques cannot justify a particular “appropriate weight of simplicity in model selection”.
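
The BIC's trade-off between fit and degrees of freedom is easy to make concrete. The sample size and log-likelihoods below are hypothetical; the point is only how the parameter-count penalty can outweigh a gain in fit:

```python
from math import log

# BIC sketch: trading off fit against simplicity (numbers hypothetical).
# BIC = k * ln(n) - 2 * ln(L_max); lower is better. The parameter count k
# operationalizes the "degrees of freedom" reading of simplicity.

def bic(k, n, log_likelihood):
    return k * log(n) - 2 * log_likelihood

n = 100  # hypothetical sample size
bic_simple = bic(k=2, n=n, log_likelihood=-120.0)    # worse fit, fewer parameters
bic_complex = bic(k=6, n=n, log_likelihood=-115.0)   # better fit, more parameters
# simple: 2*ln(100) + 240 ≈ 249.2 ; complex: 6*ln(100) + 230 ≈ 257.6
# Here the improved fit does not pay for the four extra parameters,
# so the BIC prefers the simpler model.
```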

