On the relation between direct and inverse methods in statistics
1—In my “Scientific Inference” (1931) I gave a discussion of the posterior probability of a parameter based on a series of observations derived from the normal law of errors, when the parameter and its standard error are originally unknown. I afterwards (1932) generalized the result to the case of several unknowns. Dr J. Wishart pointed out to me that the form that I obtained by using the principle of inverse probability is identical with one obtained by direct methods by “Student” (1908). A formula identical with my general one was also given by T. E. Sterne (1934). The direct methods, however, deal with a different problem from the inverse one. They give results about the probability of the observations, or certain functions of them, the true values and the standard error being taken as known; the practical problem is usually to estimate the true value, the observations being known, and this is the problem treated by the inverse method. As the data and the propositions whose probabilities on the data are to be assessed are interchanged in the two cases, it appeared to me that the identity of the results in form must be accidental. It turns out, however, that there is a definite reason why they should be identical, and that this throws light on the use of direct methods for estimation and on their relation to the theory of probability. Suppose that the true value and the standard error are x and σ; the observed values are n in number with mean x̅ and standard deviation σ'. Then my result (1931, p. 69) was, for previous knowledge k expressing the truth of the normal law but nothing about x and σ,

$$P(dx \mid \bar{x}, \sigma', k) \propto \left\{1 + \frac{(x - \bar{x})^2}{\sigma'^2}\right\}^{-\frac{1}{2}n} dx. \tag{1}$$
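As a modern illustrative sketch (not part of the original paper), the connection to “Student” can be checked numerically: the substitution t = √(n−1)(x − x̅)/σ′ turns the density in (1) into Student's t distribution with n − 1 degrees of freedom, so its normalizing integral has the closed form σ′√π Γ((n−1)/2)/Γ(n/2). The function names below are my own, and the specific values n = 5, x̅ = 0, σ′ = 1 are arbitrary test choices.

```python
import math

def posterior_unnorm(x, xbar, sigma_prime, n):
    """Unnormalized posterior (1): {1 + (x - xbar)^2 / sigma'^2}^(-n/2)."""
    return (1.0 + (x - xbar) ** 2 / sigma_prime ** 2) ** (-0.5 * n)

def numeric_normalizer(xbar, sigma_prime, n, half_width=200.0, steps=200_000):
    """Trapezoidal integral of the unnormalized posterior.

    The tails fall off like |x|^(-n), so for n >= 2 the integral converges
    and a wide symmetric window captures essentially all the mass.
    """
    a = xbar - half_width * sigma_prime
    h = 2.0 * half_width * sigma_prime / steps
    total = 0.5 * (posterior_unnorm(a, xbar, sigma_prime, n)
                   + posterior_unnorm(a + steps * h, xbar, sigma_prime, n))
    for i in range(1, steps):
        total += posterior_unnorm(a + i * h, xbar, sigma_prime, n)
    return total * h

def closed_normalizer(sigma_prime, n):
    """Closed form via the Student-t substitution t = sqrt(n-1)(x - xbar)/sigma'."""
    return (sigma_prime * math.sqrt(math.pi)
            * math.gamma((n - 1) / 2) / math.gamma(n / 2))

# Example: n = 5 observations, mean 0, standard deviation 1.
n, xbar, sigma_prime = 5, 0.0, 1.0
numeric = numeric_normalizer(xbar, sigma_prime, n)
closed = closed_normalizer(sigma_prime, n)  # for n = 5 this equals 4/3 exactly
```

Agreement of the numerical and closed-form normalizers confirms that (1) is, up to the change of variable, the same function of the observations that “Student” derived by direct methods.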