Bayesian Inference under Small Sample Sizes Using General Noninformative Priors

Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2810
Author(s):  
Jingjing He ◽  
Wei Wang ◽  
Min Huang ◽  
Shaohua Wang ◽  
Xuefei Guan

This paper proposes a Bayesian inference method for problems with small sample sizes. A general type of noninformative prior is proposed to formulate the Bayesian posterior. It is shown that this type of prior can represent a broad range of priors, such as classical noninformative priors and asymptotically locally invariant priors, and can be derived as the limiting states of normal-inverse-Gamma conjugate priors, allowing for analytical evaluation of Bayesian posteriors and predictors. The performance of different noninformative priors under small sample sizes is compared using a likelihood measure that combines fitting and prediction performance; the Laplace approximation is used to evaluate this likelihood. A realistic fatigue reliability problem is used to illustrate the method. Following that, an actual aeroengine disk lifing application with two test samples is presented, and the results are compared with those of the existing method.
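To make the conjugate-limit idea concrete, here is a minimal sketch (our illustration, not the paper's exact formulation): for normally distributed data under the classical noninformative prior p(μ, σ²) ∝ 1/σ², one limiting state of the normal-inverse-Gamma family, the posterior predictive is a closed-form Student-t distribution.

```python
# Minimal sketch: Bayesian prediction for normal data under the classical
# noninformative prior p(mu, sigma^2) ∝ 1/sigma^2, a limiting state of the
# normal-inverse-Gamma conjugate family. The data below are hypothetical.
import numpy as np
from scipy import stats

x = np.array([4.8, 5.1, 4.9, 5.3])  # hypothetical small sample (n = 4)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

# Posterior predictive for a new observation: Student-t with n-1 degrees
# of freedom, location xbar, scale s * sqrt(1 + 1/n).
pred = stats.t(df=n - 1, loc=xbar, scale=s * np.sqrt(1 + 1 / n))
print("95% predictive interval:", pred.interval(0.95))
```

With only four observations the predictive interval is wide, which is exactly the small-sample behaviour the paper's comparison of priors targets.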

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Kang Li ◽  
Xian-ming Shi ◽  
Juan Li ◽  
Mei Zhao ◽  
Chunhua Zeng

In view of the small sample size of combat ammunition trial data and the difficulty of forecasting the demand for combat ammunition, a Bayesian inference method based on the multinomial distribution is proposed. First, considering the different damage grades of ammunition hitting targets, the damage results are modeled as a multinomial distribution, and a Bayesian inference model of ammunition demand based on the multinomial distribution is established, providing a theoretical basis for forecasting ammunition demand under multigrade damage with small samples. Second, the conjugate Dirichlet distribution of the multinomial distribution is selected as the prior, and Dempster–Shafer evidence theory (D-S theory) is introduced to fuse multisource prior information. Bayesian inference is carried out with the Markov chain Monte Carlo method based on Gibbs sampling, and ammunition demand at each damage grade is obtained from the cumulative damage probability. The results show that the Bayesian inference method based on the multinomial distribution is practical and straightforward to apply and can be used to predict ammunition demand at different damage grades under small-sample conditions.
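As a concrete illustration of the Dirichlet-multinomial conjugacy described above (our sketch, with hypothetical prior weights and counts; the paper's D-S evidence fusion and Gibbs sampler are not reproduced), the posterior over damage-grade probabilities is itself Dirichlet and can be sampled directly:

```python
# Minimal sketch of the conjugate Dirichlet-multinomial update. With a
# Dirichlet prior and a multinomial likelihood the posterior is available
# in closed form, so it can be sampled without MCMC.
import numpy as np

rng = np.random.default_rng(0)

alpha_prior = np.array([1.0, 1.0, 1.0, 1.0])    # prior over 4 damage grades
counts = np.array([3, 5, 1, 1])                 # hypothetical small-sample trials

alpha_post = alpha_prior + counts               # conjugate posterior parameters
draws = rng.dirichlet(alpha_post, size=10_000)  # posterior damage-grade probabilities

print("posterior mean probabilities:", draws.mean(axis=0))
print("expected demand per 100 rounds:", 100 * draws.mean(axis=0))
```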


2018 ◽  
Author(s):  
Christopher Chabris ◽  
Patrick Ryan Heck ◽  
Jaclyn Mandart ◽  
Daniel Jacob Benjamin ◽  
Daniel J. Simons

Williams and Bargh (2008) reported that holding a hot cup of coffee caused participants to judge a person’s personality as warmer, and that holding a therapeutic heat pad caused participants to choose rewards for other people rather than for themselves. These experiments featured large effects (r = .28 and .31), small sample sizes (41 and 53 participants), and barely statistically significant results. We attempted to replicate both experiments in field settings with more than triple the sample sizes (128 and 177) and double-blind procedures, but found near-zero effects (r = –.03 and .02). In both cases, Bayesian analyses suggest there is substantially more evidence for the null hypothesis of no effect than for the original physical warmth priming hypothesis.
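The abstract does not specify which Bayesian analysis the authors used; as an illustration only, the BIC approximation to the Bayes factor (Wagenmakers, 2007) applied to the reported replication correlations gives a rough sense of the evidence for the null:

```python
# Rough sketch (our own illustration, not the authors' analysis): the BIC
# approximation to the Bayes factor for H0: r = 0 versus H1: r != 0.
import numpy as np

def bf01_from_r(r: float, n: int) -> float:
    """Approximate Bayes factor in favour of the null for a correlation."""
    # The extra parameter under H1 costs ln(n); the fit improvement from
    # the correlation contributes n * ln(1 - r^2).
    delta_bic = n * np.log(1 - r**2) + np.log(n)
    return float(np.exp(delta_bic / 2))

for r, n in [(-0.03, 128), (0.02, 177)]:
    print(f"r = {r:+.2f}, n = {n}: BF01 ≈ {bf01_from_r(r, n):.1f}")
```

Under this approximation both replications yield BF01 above 10, i.e., more than tenfold more evidence for the null than for the warmth-priming effect, consistent with the authors' conclusion.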


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performances of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner through simulations. We considered six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a high-performing method for drawing causal inferences, even from small sample sizes.
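A minimal sketch of the G-computation step for a binary exposure and binary outcome (simulated data; a penalized logistic regression stands in for the authors' super learner):

```python
# Minimal sketch of G-computation with a binary exposure A and binary
# outcome Y. Data, coefficients, and the outcome learner are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100                                        # deliberately small sample
X = rng.normal(size=(n, 3))                    # baseline covariates
A = rng.binomial(1, 0.5, size=n)               # binary exposure
p = 1 / (1 + np.exp(-(0.5 * A + X[:, 0])))     # true outcome probabilities
Y = rng.binomial(1, p)                         # binary outcome

# Outcome model Q(A, X): here an L2-penalized logistic regression.
Q = LogisticRegression(C=1.0).fit(np.column_stack([A, X]), Y)

# Predict each subject's outcome probability in the two counterfactual
# worlds (everyone exposed vs. no one exposed), then average the contrast.
X1 = np.column_stack([np.ones(n), X])
X0 = np.column_stack([np.zeros(n), X])
ate = (Q.predict_proba(X1)[:, 1] - Q.predict_proba(X0)[:, 1]).mean()
print(f"G-computation ATE estimate: {ate:.3f}")
```

Swapping the logistic regression for a stacked super learner changes only the line that fits Q; the counterfactual-prediction step is identical.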


2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
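To make the point quantitative (our illustration, not the reply's calculation): with 58 participants split into two groups, the power to detect even a medium standardized difference falls well short of the conventional 80% target.

```python
# Power of a two-group comparison with roughly 29 participants per group to
# detect a medium effect (Cohen's d = 0.5) at alpha = .05. Group sizes and
# effect size are hypothetical round numbers, not from the original study.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=29, alpha=0.05, ratio=1.0)
print(f"power with 29 per group: {power:.2f}")  # roughly 0.47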


2016 ◽  
Vol 41 (5) ◽  
pp. 472-505 ◽  
Author(s):  
Elizabeth Tipton ◽  
Kelly Hallberg ◽  
Larry V. Hedges ◽  
Wendy Chan

Background: Policy makers and researchers are frequently interested in understanding how effective a particular intervention may be for a specific population. One approach is to assess the degree of similarity between the sample in an experiment and the population. Another approach is to combine information from the experiment and the population to estimate the population average treatment effect (PATE). Method: Several methods for assessing the similarity between a sample and a population currently exist, as well as methods for estimating the PATE. In this article, we investigate the properties of six of these methods and statistics in the small sample sizes common in education research (i.e., 10–70 sites), evaluating the utility of rules of thumb developed from observational studies in the generalization case. Results: In small random samples, large differences between the sample and population can arise simply by chance, and many of the statistics commonly used in generalization are a function of both the sample size and the number of covariates being compared. The rules of thumb developed in observational studies (which are commonly applied in generalization) are much too conservative given the small sample sizes found in generalization. Conclusion: Sharp inferences to large populations from small experiments are difficult even with probability sampling. Features of random samples should be kept in mind when evaluating the extent to which results from experiments conducted on nonrandom samples might generalize.
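A small simulation (ours, not from the article) of one commonly used similarity statistic, the standardized mean difference between sample and population covariates, shows how easily nontrivial differences arise by chance at the sample sizes discussed:

```python
# Simulated illustration: even a true random sample of ~30 sites can show
# sizeable standardized mean differences (SMDs) from its population purely
# by chance. Population and covariates are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
population = rng.normal(loc=0.0, scale=1.0, size=(5_000, 4))     # 4 covariates
sample = population[rng.choice(5_000, size=30, replace=False)]   # small random sample

# SMD per covariate: difference in means over the population SD.
smd = (sample.mean(axis=0) - population.mean(axis=0)) / population.std(axis=0)
print("per-covariate SMD:", np.round(smd, 2))
```

Rules of thumb such as |SMD| < 0.1, borrowed from large observational studies, will often be violated here even though the sample is genuinely random, which is the article's central caution.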

