Glossary of terms

Author(s):  
Janet L. Peacock ◽  
Philip J. Peacock

Analysis of variance: See One-way analysis of variance (p. 280) and Two-way analysis of variance (p. 412).
Bayes's theorem: A formula that allows the reversal of conditional probabilities (see Bayes' theorem, p. 234).
Bayesian statistics: A statistical approach based on Bayes' theorem, where prior information or beliefs are combined with new data to provide estimates of unknown parameters (see ...
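The Bayes's theorem entry above describes the reversal of conditional probabilities in words; written out in standard notation (the notation is assumed here, it is not part of the glossary excerpt), the formula is

P(A \mid B) = \dfrac{P(B \mid A)\, P(A)}{P(B)}

where the denominator P(B) can be expanded as \sum_i P(B \mid A_i)\, P(A_i) over mutually exclusive causes A_i.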

2019 ◽  
pp. 83-100
Author(s):  
Steven J. Osterlind

This chapter discusses evidence and probability data, with particular attention to Bayesian estimation. The Protestant ethic slowed probability developments in the United States, but the idea of quantification continued apace in England and on the Continent. In particular, Thomas Bayes invented a simple but profound mathematical means of connecting outcomes with causes through conditional probabilities and Bayesian estimation. The chapter explains conditional probabilities and Bayesian logic, giving several examples, including the probability of an accurate cancer diagnosis from inexact diagnostics. The chapter also introduces Bayes's magnum opus, An Essay towards Solving a Problem in the Doctrine of Chances, and gives his example of rolling billiard balls on a billiard table to illustrate Bayes's theorem.
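The cancer-diagnosis example mentioned above hinges on exactly this reversal of conditional probabilities. A minimal sketch of the arithmetic, with illustrative numbers that are assumptions rather than figures from the chapter, might look like this in Python:

# Hedged sketch: posterior probability of disease given a positive test,
# via Bayes' theorem. All rates below are illustrative assumptions.
prevalence = 0.01        # P(disease)
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.08    # P(positive | no disease)

# Total probability of a positive result (law of total probability)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(disease | positive)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")   # about 0.102

Even with a fairly accurate test, the low prevalence keeps the posterior probability near 10%, which is the point of the inexact-diagnostics example.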


Bayes' theorem is a tool for assessing how probable evidence makes some hypothesis. The papers in this book consider the worth and applicability of the theorem. The book sets out the philosophical issues: Elliott Sober argues that there are other criteria for assessing hypotheses; Colin Howson, Philip Dawid, and John Earman consider how the theorem can be used in statistical science, in weighing evidence in criminal trials, and in assessing evidence for the occurrence of miracles; and David Miller argues for the worth of the probability calculus as a tool for measuring propensities in nature rather than the strength of evidence. The book ends with the original paper containing the theorem, presented to the Royal Society in 1763.


2017 ◽  
Vol 7 (1) ◽  
pp. 21
Author(s):  
Marco Dall'Aglio ◽  
Theodore P. Hill

It is well known that the classical Bayesian posterior arises naturally as the unique solution of different optimization problems, without the necessity of interpreting data as conditional probabilities and then using Bayes' Theorem. Here it is shown that the Bayesian posterior is also the unique minimax optimizer of the loss of self-information in combining the prior and the likelihood distributions, and is the unique proportional consolidation of the same distributions. These results, direct corollaries of recent results about conflations of probability distributions, further reinforce the use of Bayesian posteriors, and may help partially reconcile some of the differences between classical and Bayesian statistics.
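For concreteness, the combination of prior and likelihood referred to here is the familiar normalized pointwise product; a minimal discrete sketch in Python (the grid, prior, and likelihood below are made-up illustrative values, not from the paper) is:

import numpy as np

# Hedged sketch: the classical Bayesian posterior on a discrete parameter grid,
# obtained as the normalized pointwise product of prior and likelihood.
theta = np.linspace(0.0, 1.0, 5)                 # candidate parameter values (illustrative)
prior = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # prior distribution (illustrative)
likelihood = theta**3 * (1 - theta)              # e.g. 3 successes, 1 failure in Bernoulli trials

unnormalized = prior * likelihood
posterior = unnormalized / unnormalized.sum()    # Bayes' theorem on the grid
print(posterior)

The minimax and proportional-consolidation characterizations discussed in the paper single out this same posterior without invoking Bayes' theorem directly.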


2011 ◽  
Vol 58-60 ◽  
pp. 1018-1024
Author(s):  
Feng Ye ◽  
Gui Chen Xu ◽  
Di Kang Zhu

This paper reviews several current methods of calculating buffers, pointing out the merits and pitfalls of each, and then introduces a Bayesian statistical approach to the CCS/BM domain for calculating the size of the project buffer, in order to overcome the subjectivity of current buffer-calculation methods and their lack of practical applicability. Using Crystal Ball, the simulation results of the implementation process are compared against the C&PM, RESM, and SM benchmarks. The results show that a buffer calculated with this method keeps the project's completion probability stable, and that the method is highly flexible.
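As a rough illustration of simulation-based buffer sizing in general (a generic Monte Carlo sketch with assumed numbers, not the authors' Bayesian procedure or their Crystal Ball model), the project buffer can be read off a quantile of simulated critical-chain durations:

import numpy as np

# Generic Monte Carlo sketch of project-buffer sizing (illustrative assumptions only;
# not the paper's Bayesian method). Task durations are assumed lognormal.
rng = np.random.default_rng(0)
n_sims = 10_000
typical_durations = [5.0, 8.0, 3.0, 6.0]    # assumed typical durations of critical-chain tasks
totals = sum(rng.lognormal(mean=np.log(d), sigma=0.3, size=n_sims) for d in typical_durations)

planned = sum(typical_durations)                 # aggressive baseline plan
buffer = np.quantile(totals, 0.9) - planned      # buffer targeting ~90% completion probability
print(f"planned = {planned:.1f}, project buffer ≈ {buffer:.1f}")

A Bayesian treatment along the lines the paper proposes would presumably replace such fixed duration assumptions with distributions that combine prior information and observed data before the simulation is run.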


Biometrika ◽  
1962 ◽  
Vol 49 (3/4) ◽  
pp. 419 ◽  
Author(s):  
G. E. P. Box ◽  
G. C. Tiao
