Robustness of T-test Based on Skewness and Kurtosis

Author(s):  
Steven T. Garren ◽  
Kate McGann Osborne

Coverage probabilities of the two-sided one-sample t-test are simulated for several symmetric and right-skewed distributions. The symmetric distributions analyzed are Normal, Uniform, Laplace, and Student's t with 5, 7, and 10 degrees of freedom. The right-skewed distributions analyzed are Exponential and Chi-square with 1, 2, and 3 degrees of freedom. Without loss of generality, left-skewed distributions were not analyzed. The coverage probabilities for the symmetric distributions tend to achieve or just barely exceed the nominal values. The coverage probabilities for the skewed distributions tend to be too low, indicating high Type I error rates. Percentiles of the skewness and kurtosis statistics are simulated using Normal data. For sample sizes of 5, 10, 15, and 20, the skewness statistic does an excellent job of detecting non-Normal data, except for Uniform data. The kurtosis statistic also does an excellent job of detecting non-Normal data, including Uniform data. Examined herein are Type I error rates, but not power calculations. We find that sample skewness is unhelpful when determining whether or not the t-test should be used, but low sample kurtosis is reason to avoid using the t-test.
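The coverage simulation this abstract describes can be sketched in a few lines. The following is our own minimal illustration, not the authors' code: it estimates the coverage probability of the nominal 95% two-sided one-sample t interval for Normal data and for right-skewed Exponential data, each with n = 10, using the hard-coded critical value t(0.975, 9) ≈ 2.262.

```python
import random
import statistics

# Minimal coverage-probability sketch (our own illustration, not the
# authors' code): how often does the 95% one-sample t interval cover the
# true mean for Normal versus right-skewed Exponential data?

T_CRIT = 2.262  # t quantile t_{0.975} with 9 degrees of freedom (n = 10)

def coverage(sampler, true_mean, n=10, reps=20000, seed=1):
    """Fraction of simulated t intervals that cover the true mean."""
    random.seed(seed)
    hits = 0
    for _ in range(reps):
        x = [sampler() for _ in range(n)]
        m = statistics.mean(x)
        half_width = T_CRIT * statistics.stdev(x) / n ** 0.5
        if m - half_width <= true_mean <= m + half_width:
            hits += 1
    return hits / reps

cov_normal = coverage(lambda: random.gauss(0.0, 1.0), true_mean=0.0)
cov_expon = coverage(lambda: random.expovariate(1.0), true_mean=1.0)
print(cov_normal, cov_expon)
```

Consistent with the abstract, the symmetric (Normal) case lands near the nominal 95% level, while the right-skewed Exponential case undercovers, which corresponds to an inflated Type I error rate.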

1994 ◽  
Vol 19 (3) ◽  
pp. 275-291 ◽  
Author(s):  
James Algina ◽  
T. C. Oshima ◽  
Wen-Ying Lin

Type I error rates were estimated for three tests that compare means by using data from two independent samples: the independent samples t test, Welch’s approximate degrees of freedom test, and James’s second-order test. Type I error rates were estimated for skewed distributions, equal and unequal variances, equal and unequal sample sizes, and a range of total sample sizes. Welch’s test and James’s test have very similar Type I error rates and tend to control the Type I error rate as well or better than the independent samples t test does. The results provide guidance about the total sample sizes required for controlling Type I error rates.
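Welch's approximate degrees of freedom test mentioned above computes a two-sample t statistic with unpooled variances and takes its denominator degrees of freedom from the Welch-Satterthwaite formula. A minimal sketch, with our own hypothetical helper name:

```python
import statistics

# Hypothetical sketch of Welch's test statistic (our own helper, not from
# the paper): unpooled variances plus the Welch-Satterthwaite degrees of
# freedom.

def welch(x, y):
    """Return Welch's t statistic and its approximate degrees of freedom."""
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)
    se2 = v1 / n1 + v2 / n2  # squared standard error of the mean difference
    t = (statistics.mean(x) - statistics.mean(y)) / se2 ** 0.5
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# With equal variances and equal n, the Welch df reduces to n1 + n2 - 2,
# matching the independent samples t test.
t_stat, df = welch([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(t_stat, df)
```

With unequal variances or unequal sample sizes, the df falls below n1 + n2 - 2, which is how Welch's test protects the Type I error rate where the independent samples t test does not.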


2019 ◽  
Vol 3 (Supplement_1) ◽  
Author(s):  
Keisuke Ejima ◽  
Andrew Brown ◽  
Daniel Smith ◽  
Ufuk Beyaztas ◽  
David Allison

Abstract

Objectives: Rigor, reproducibility, and transparency (RRT) awareness has expanded over the last decade. Although RRT can be improved in many respects, we focused on the Type I error rates and power of commonly used statistical analyses that test for a mean difference between two groups, using small (n ≤ 5) to moderate sample sizes.

Methods: We compared data from five distinct, homozygous, monogenic, murine models of obesity with non-mutant controls of both sexes. Baseline weight (7–11 weeks old) was the outcome. To examine whether the Type I error rate could be affected by the choice of statistical test, we adjusted the empirical distributions of weights to enforce the null hypothesis (i.e., no mean difference) in two ways: Case 1) center both weight distributions on the same mean weight; Case 2) combine data from the control and mutant groups into one distribution. From these cases, 3 to 20 mice were resampled to create a 'plasmode' dataset. We performed five common tests (Student's t-test, Welch's t-test, Wilcoxon test, permutation test, and bootstrap test) on the plasmodes and computed Type I error rates. Power was assessed using plasmodes in which the distribution of the control group was shifted by adding a constant value, as in Case 1, but chosen to realize nominal effect sizes.

Results: Type I error rates were markedly higher than the nominal significance level (Type I error rate inflation) for Student's t-test, Welch's t-test, and the permutation test in Case 1, especially when the sample size was small, whereas in Case 2 inflation was observed only for the permutation test. Deflation was noted for the bootstrap test with small samples. Increasing the sample size mitigated both inflation and deflation, except for the Wilcoxon test in Case 1, because heterogeneity of the weight distributions between groups violated its assumptions for the purpose of testing mean differences. For power, a departure from the reference value was observed with small samples. Compared with the other tests, the bootstrap test was underpowered with small samples as a tradeoff for maintaining its Type I error rate.

Conclusions: With small samples (n ≤ 5), the bootstrap test avoided Type I error rate inflation, but often at the cost of lower power. To avoid Type I error rate inflation with the other tests, the sample size should be increased. The Wilcoxon test should be avoided because of the heterogeneity of weight distributions between mutant and control mice.

Funding Sources: This study was supported in part by NIH and a Japan Society for the Promotion of Science (JSPS) KAKENHI grant.
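The "Case 1" plasmode construction can be illustrated with a toy sketch. This is our own illustration, not the authors' code, and the weight values below are made up: both groups are recentered on a common mean so the null hypothesis holds, small groups are resampled, and the rejection rate of Student's t test is tallied.

```python
import random
import statistics

# Toy "Case 1" plasmode sketch (our own illustration; weights are made up).
# Recentering both groups on the grand mean enforces the null of no mean
# difference while preserving each group's shape and spread.

T_CRIT = 2.306  # t_{0.975} with 8 degrees of freedom (two groups of n = 5)

control = [24.1, 25.3, 23.8, 26.0, 24.7, 25.5, 23.9, 24.4]
mutant = [38.2, 41.0, 39.5, 40.3, 37.8, 42.1, 39.9, 40.6]

grand = statistics.mean(control + mutant)
ctrl_null = [w - statistics.mean(control) + grand for w in control]
mut_null = [w - statistics.mean(mutant) + grand for w in mutant]

def student_t(x, y):
    """Pooled-variance two-sample t statistic."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * statistics.variance(x)
           + (n2 - 1) * statistics.variance(y)) / (n1 + n2 - 2)
    return (statistics.mean(x) - statistics.mean(y)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

random.seed(7)
reps, n, rejections = 5000, 5, 0
for _ in range(reps):
    x = [random.choice(ctrl_null) for _ in range(n)]
    y = [random.choice(mut_null) for _ in range(n)]
    if abs(student_t(x, y)) > T_CRIT:
        rejections += 1
rate = rejections / reps  # empirical Type I error rate at nominal alpha = 0.05
print(rate)
```

Because the null is enforced by construction, any gap between the printed rejection rate and 0.05 reflects the test's behavior under these non-Normal, heterogeneous weight distributions rather than a true group difference.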


2019 ◽  
Vol 97 (Supplement_2) ◽  
pp. 235-236
Author(s):  
Hilda Calderon Cartagena ◽  
Christopher I Vahl ◽  
Steve S Dritz

Abstract It is not unusual to come across randomized complete block designs (RCBD) replicated over a small number of sites in swine nutrition trials. For example, pens could be blocked by location or by initial body weight within three rooms or barns. One possibility is to analyze this design under the assumption of no site-by-treatment interaction, which implies that treatment differences are similar across all sites. This assumption might not always seem reasonable, and the site-by-treatment interaction could be included in the analysis to account for such differences should they exist. However, the site-by-treatment mean square then becomes the error term for evaluating treatment. The objective of this study was to recommend a practical strategy based on Type I error rates estimated from a simulation study. Scenarios with and without site-by-treatment interaction were considered, with three sites and equal means across four treatments. The residual error variance component was set to 1, and the remaining components were either all equal (σ²site = σ²block = σ²site×trt = 1) or one of them was set to 10. For the scenarios with no site-by-treatment interaction, σ²site×trt = 0, for a total of 7 scenarios. Each scenario was simulated 10,000 times, and both strategies were applied to each simulation. The Kenward-Roger (KR) approximation to the denominator degrees of freedom was also considered. Type I error rates were estimated as the proportion of simulations with a significant treatment effect at α = 0.05. Overall, there was no evidence that Type I error rates were inflated when the site-by-treatment interaction was omitted, even when σ²site×trt = 10, and KR had no effect. In contrast, including the interaction term led to a highly conservative Type I error rate far below the 5% level, resulting in a reduction of power; however, using KR mitigated this conservativeness.


2019 ◽  
Vol 14 (2) ◽  
pp. 399-425 ◽  
Author(s):  
Haolun Shi ◽  
Guosheng Yin

2014 ◽  
Vol 38 (2) ◽  
pp. 109-112 ◽  
Author(s):  
Daniel Furtado Ferreira

Sisvar is a statistical analysis system widely used by the scientific community to produce statistical analyses and scientific results and conclusions. Its wide adoption is due to its being accurate, precise, simple, and robust. Among its many analysis options, one that is not so widely used is multiple comparison via bootstrap approaches. This paper reviews that subject and shows some advantages of using Sisvar to perform such analyses to compare treatment means. Tests like Dunnett, Tukey, Student-Newman-Keuls, and Scott-Knott are performed alternatively by bootstrap methods and show greater power and better control of the experimentwise Type I error rate under non-normal, asymmetric, platykurtic, or leptokurtic distributions.
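The bootstrap idea behind such multiple comparison procedures can be sketched as follows. This is a generic null-resampling comparison of two treatment means, our own illustration rather than Sisvar's actual implementation:

```python
import random
import statistics

# Generic null-bootstrap comparison of two treatment means (our own
# illustration, not Sisvar's implementation): pool the samples to enforce
# the null of equal means, resample, and see how often the resampled
# difference is at least as large as the observed one.

def bootstrap_pvalue(x, y, reps=4000, seed=3):
    """Null-bootstrap p-value for the absolute difference in means."""
    random.seed(seed)
    observed = abs(statistics.mean(x) - statistics.mean(y))
    pooled = x + y  # pooling enforces the null hypothesis of equal means
    nx, extreme = len(x), 0
    for _ in range(reps):
        resample = [random.choice(pooled) for _ in range(len(pooled))]
        diff = abs(statistics.mean(resample[:nx]) - statistics.mean(resample[nx:]))
        if diff >= observed:
            extreme += 1
    return extreme / reps

p_far = bootstrap_pvalue([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])
p_same = bootstrap_pvalue([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])
print(p_far, p_same)
```

A full multiple comparison procedure would apply such a comparison across all treatment pairs with a familywise adjustment to control the experimentwise Type I error rate; because the bootstrap does not assume Normality, it can remain valid under the asymmetric, platykurtic, or leptokurtic distributions mentioned above.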

