Prediction Intervals for Reliability Growth Models with Small Sample Sizes

Author(s):  
John Quigley ◽  
Lesley Walls


2018 ◽  
Vol 8 (3) ◽  
pp. 246-271 ◽  
Author(s):  
Thomas Paul Talafuse ◽  
Edward A. Pohl

Purpose
When performing system-level developmental testing, time and expenses generally warrant a small sample size for failure data. Upon failure discovery, redesigns and/or corrective actions can be implemented to improve system reliability. Current methods for estimating discrete (one-shot) reliability growth, namely the Crow (AMSAA) growth model, stipulate that parameter estimates have a great level of uncertainty when dealing with small sample sizes. The purpose of this paper is to present an application of a modified GM(1,1) model for handling system-level testing constrained by small sample sizes.

Design/methodology/approach
The paper presents a methodology for incorporating failure data into a modified GM(1,1) model for systems with failures following a poly-Weibull distribution. Notional failure data are generated for complex systems, and characterization of reliability growth parameters is performed via both the traditional AMSAA model and the GM(1,1) model for purposes of comparing and assessing performance.

Findings
The modified GM(1,1) model requires less complex computational effort and provides a more accurate prediction of reliability growth model parameters for small sample sizes and multiple failure modes when compared to the AMSAA model. It is especially superior to the AMSAA model in later stages of testing.

Originality/value
This research identifies cost-effective methods for developing more accurate reliability growth parameter estimates than those currently used.
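For readers unfamiliar with grey forecasting, a minimal sketch of the standard (unmodified) GM(1,1) recursion is given below. The synthetic failure counts and the plain least-squares fit are illustrative assumptions; the sketch does not reproduce the paper's poly-Weibull modification.

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Fit a standard GM(1,1) grey model to a short non-negative series
    and forecast `horizon` additional points, returned in the original scale."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) sequence

    # Least-squares estimate of the development coefficient a and grey input b
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]

    # Time-response function of the whitened equation dx1/dt + a*x1 = b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a

    # Inverse AGO restores the fitted/forecast values to the original scale
    return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))

# Notional (invented) failure counts per test phase for a small sample
failures = [12, 9, 7, 6, 5]
print(gm11_forecast(failures, horizon=2))
```

The accumulated generating operation smooths the short series before fitting, which is what lets a grey model extract a usable trend from only a handful of observations.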


2021 ◽  
Vol 12 ◽  
Author(s):  
Eunsoo Lee ◽  
Sehee Hong

Multilevel models have been developed to address data that come from a hierarchical structure. In particular, owing to the increase in longitudinal studies, a three-level growth model is frequently used to measure change in individuals who are nested in groups. In multilevel modeling, sufficient sample sizes are needed to obtain unbiased estimates and enough power to detect individual or group effects. However, there are few sample size guidelines for three-level growth models. Therefore, it is important that researchers recognize the possibility of unreliable results when sample sizes are small. The purpose of this study is to find adequate sample sizes for a three-level growth model under realistic conditions. A Monte Carlo simulation was performed under 12 conditions: (1) level-2 sample size (10, 30), (2) level-3 sample size (30, 50, 100), and (3) level-3 intraclass correlation (0.05, 0.15). The study examined the following outcomes: convergence rate, relative parameter bias, mean square error (MSE), 95% coverage rate, and power. The results indicate that estimates of the regression coefficients are unbiased, but the variance components tend to be inaccurate with small sample sizes.
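As a hedged illustration of the kind of data-generating process behind such a simulation, the sketch below draws one dataset from a simplified three-level linear growth model (random intercepts only). The fixed effects and variance components are placeholder values chosen so the implied level-3 intraclass correlation matches one of the study's conditions (0.05); they are not the study's actual generating parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_three_level_growth(n_groups=30, n_per_group=10, n_times=4,
                                gamma00=10.0, gamma10=1.5,
                                tau_l3=0.05, tau_l2=0.25, sigma2=0.70):
    """Draw one dataset from a simplified three-level linear growth model:
    occasions (level 1) nested in individuals (level 2) nested in groups (level 3).
    Random intercepts only; all parameter values are illustrative placeholders."""
    rows = []
    for g in range(n_groups):
        u_group = rng.normal(0.0, np.sqrt(tau_l3))        # level-3 random intercept
        for i in range(n_per_group):
            u_person = rng.normal(0.0, np.sqrt(tau_l2))   # level-2 random intercept
            for t in range(n_times):
                e = rng.normal(0.0, np.sqrt(sigma2))      # level-1 residual
                y = gamma00 + gamma10 * t + u_group + u_person + e
                rows.append((g, i, t, y))
    return np.array(rows)

data = simulate_three_level_growth()
# Level-3 ICC implied by the default variance components above
icc_level3 = 0.05 / (0.05 + 0.25 + 0.70)
print(f"rows simulated: {len(data)}, implied level-3 ICC: {icc_level3:.2f}")
```

Repeating such draws across the 12 design cells and refitting the model each time is the basic mechanism by which convergence rate, bias, MSE, coverage, and power are estimated.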


2021 ◽  
Author(s):  
Mutlu YAGANOGLU

The objective of this study was to estimate the body weight of Morkaraman sheep from body measurements using nonlinear models. A total of 110 sheep aged 3-5 years were scored for body weight, body length, height at withers, chest width and rump width. Correlation analysis was performed to determine the relationships between body weight and the body measurements. The results indicated that, at every sample size, the strongest relationship was between body weight and body length (r = 0.95, 0.90, 0.83 and 0.81). Among all parameters included in the model, body length therefore showed the highest correlation with body weight across all sample sizes, with the largest value at a sample size of 50 (r = 0.95). For small sample sizes (10-20), the Logistic and Saturation growth models can be used to estimate body weight from body length, whereas the Incomplete gamma model is more successful when the sample size is around 30 to 50.
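A hedged sketch of fitting and comparing such nonlinear candidates with SciPy follows. The logistic, saturation-growth and incomplete gamma (Wood-type) forms are textbook versions, and the body-length/weight data are simulated purely for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate nonlinear models relating body weight (y, kg) to body length (x, cm)
def logistic(x, a, b, c):
    return a / (1.0 + b * np.exp(-c * x))

def saturation_growth(x, a, b):
    return a * x / (b + x)

def incomplete_gamma(x, a, b, c):        # Wood-type curve: a * x^b * exp(-c * x)
    return a * np.power(x, b) * np.exp(-c * x)

# Simulated measurements for a sample of 30 animals (values are invented)
rng = np.random.default_rng(0)
body_length = np.linspace(55, 80, 30)
body_weight = logistic(body_length, 75, 400, 0.09) + rng.normal(0, 1.5, size=30)

candidates = [("Logistic", logistic, (70, 300, 0.1)),
              ("Saturation growth", saturation_growth, (100, 50)),
              ("Incomplete gamma", incomplete_gamma, (1e-4, 3, 0.02))]

for name, f, p0 in candidates:
    try:
        params, _ = curve_fit(f, body_length, body_weight, p0=p0, maxfev=20000)
        rss = np.sum((body_weight - f(body_length, *params)) ** 2)
        print(f"{name:18s} RSS = {rss:8.2f}  params = {np.round(params, 4)}")
    except (RuntimeError, ValueError):
        # Nonlinear fits on small samples can fail to converge from a poor start
        print(f"{name:18s} fit did not converge from the chosen start values")
```

Comparing residual sums of squares (or an information criterion) across candidate curves at each sample size is one simple way to reproduce the kind of model ranking the abstract reports.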


2018 ◽  
Author(s):  
Christopher Chabris ◽  
Patrick Ryan Heck ◽  
Jaclyn Mandart ◽  
Daniel Jacob Benjamin ◽  
Daniel J. Simons

Williams and Bargh (2008) reported that holding a hot cup of coffee caused participants to judge a person’s personality as warmer, and that holding a therapeutic heat pad caused participants to choose rewards for other people rather than for themselves. These experiments featured large effects (r = .28 and .31), small sample sizes (41 and 53 participants), and barely statistically significant results. We attempted to replicate both experiments in field settings with more than triple the sample sizes (128 and 177) and double-blind procedures, but found near-zero effects (r = –.03 and .02). In both cases, Bayesian analyses suggest there is substantially more evidence for the null hypothesis of no effect than for the original physical warmth priming hypothesis.
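The abstract does not spell out the Bayesian analyses used; one common, rough device for quantifying evidence for a null effect is the BIC approximation to the Bayes factor, sketched below on invented data. The condition coding and ratings are placeholders, and this is not the analysis the authors actually ran.

```python
import numpy as np

def bic_bayes_factor_null(x, y):
    """Approximate BF01 (evidence for 'no linear effect of x on y') via the
    BIC approximation: BF01 ~= exp((BIC_alt - BIC_null) / 2). Rough sketch only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)

    # Null model: intercept only
    rss0 = np.sum((y - y.mean()) ** 2)
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)

    # Alternative model: intercept plus a slope on x
    X = np.column_stack((np.ones(n), x))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta) ** 2)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)

    return np.exp((bic1 - bic0) / 2.0)

# Invented example: no true effect of temperature condition in a sample of 128
rng = np.random.default_rng(7)
temperature_condition = rng.integers(0, 2, size=128)   # hot vs. cold cup, coded 0/1
warmth_rating = rng.normal(4.0, 1.0, size=128)         # ratings with no true effect
print(f"BF01 = {bic_bayes_factor_null(temperature_condition, warmth_rating):.2f}")
```

Values of BF01 above 1 favour the null hypothesis, which is the direction of evidence the replication reports.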


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in the two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a reliable method for drawing causal inferences, even from small sample sizes.
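As a hedged sketch of the G-computation step itself (not the authors' super-learner pipeline), the example below fits a single penalized logistic outcome model and averages the predicted outcome probabilities in the two counterfactual worlds. The simulated data, coefficient values, and the choice of learner are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Simulated small sample: one binary exposure, two covariates, binary outcome
n = 120
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 - 0.3 * x2))))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * exposure + 0.6 * x1 + 0.4 * x2))))

# Q-model: outcome regressed on exposure and covariates (a penalized logistic
# regression stands in here for the machine-learning learners compared in the paper)
X = np.column_stack((exposure, x1, x2))
q_model = LogisticRegression(C=1.0, max_iter=1000).fit(X, outcome)

# G-computation: predict each subject's outcome probability in the two
# counterfactual worlds (everyone exposed vs. no one exposed), then average
X_exposed = np.column_stack((np.ones(n), x1, x2))
X_unexposed = np.column_stack((np.zeros(n), x1, x2))
risk1 = q_model.predict_proba(X_exposed)[:, 1].mean()
risk0 = q_model.predict_proba(X_unexposed)[:, 1].mean()

print(f"marginal risk if exposed:   {risk1:.3f}")
print(f"marginal risk if unexposed: {risk0:.3f}")
print(f"estimated risk difference:  {risk1 - risk0:.3f}")
```

Swapping the logistic regression for a more flexible learner changes only how the counterfactual probabilities are estimated; the averaging step that yields the marginal causal contrast stays the same.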


2013 ◽  
Vol 113 (1) ◽  
pp. 221-224 ◽  
Author(s):  
David R. Johnson ◽  
Lauren K. Bachan

In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
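To make the sample-size concern concrete, a rough two-sided power calculation for a correlation test with n = 58 can be done with the Fisher z approximation. The assumed population correlation of 0.20 is illustrative, not a value reported in the exchange.

```python
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate two-sided power to detect a population correlation r
    with sample size n, using the Fisher z transformation."""
    z_r = np.arctanh(r)                  # Fisher z of the assumed effect
    se = 1.0 / np.sqrt(n - 3)            # standard error of Fisher z
    z_crit = norm.ppf(1 - alpha / 2)
    # Probability that the standardized estimate exceeds the critical value
    return norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)

print(f"power to detect r = 0.20 with n = 58:  {correlation_power(0.20, 58):.2f}")
print(f"power to detect r = 0.20 with n = 300: {correlation_power(0.20, 300):.2f}")
```

With n = 58 the power to detect a modest effect of this size is roughly one in three, which illustrates why inferences from such a sample are fragile.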

