The Bayesian Theory of Confirmation, Idealizations and Approximations in Science
My focus in this paper is on how the basic Bayesian model can be amended to reflect the role of idealizations and approximations in the confirmation or disconfirmation of a hypothesis. I suggest the following as a plausible way of incorporating idealizations and approximations into the Bayesian condition for incremental confirmation: theory T is confirmed by observation P relative to background knowledge B iff Pr(PΔ | T & (T&I ├ PT) & B) > Pr(PΔ | ~T & (T&I ├ PT) & B), where I is the conjunction of idealizations and approximations used in deriving the prediction PT from T, PΔ expresses the discrepancy between the prediction PT and the actual observation P, and ├ stands for logical entailment. This formulation has the virtue of explicitly taking into account the essential use made of idealizations and approximations, as well as the fact that theoretically based predictions that rely on such assumptions will not, in general, exactly fit the data. I also offer a non-probabilistic analogue of the confirmation condition above that avoids the 'old evidence problem,' a long-standing difficulty for classical Bayesianism.
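As a toy numerical illustration of the inequality above (the probability values below are invented for illustration and do not come from the paper), the incremental-confirmation condition compares the probability of the discrepancy statement PΔ conditional on T with its probability conditional on ~T, holding fixed the entailment fact (T&I ├ PT) and background B:

```python
# Toy sketch of the incremental-confirmation condition.
# D stands for the discrepancy statement P-delta: "the actual observation P
# differs from the theoretically derived prediction PT by thus-and-so much."
# The entailment fact (T&I |- PT) and background B are held fixed in both
# conditional probabilities, so only T vs. ~T varies.

# Hypothetical conditional probabilities (illustrative values only):
pr_D_given_T = 0.9      # D is quite likely if T is true
pr_D_given_not_T = 0.3  # D is much less likely if T is false

def confirms(pr_d_given_t: float, pr_d_given_not_t: float) -> bool:
    """T is incrementally confirmed by the observation iff
    Pr(D | T & (T&I |- PT) & B) > Pr(D | ~T & (T&I |- PT) & B)."""
    return pr_d_given_t > pr_d_given_not_t

print(confirms(pr_D_given_T, pr_D_given_not_T))  # True: T is confirmed
print(confirms(0.2, 0.5))                        # False: T is disconfirmed
```

On these invented values the observation confirms T; reversing the inequality (as in the second call) would yield disconfirmation instead.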