Asymptotic versus exact methods in the analysis of contingency tables: Evidence-based practical recommendations

2020 ◽  
Vol 29 (9) ◽  
pp. 2569-2582
Author(s):  
Miguel A García-Pérez ◽  
Vicente Núñez-Antón

Controversy over the validity of significance tests in the analysis of contingency tables is motivated by the disagreement between asymptotic and exact p values and its dependence on the magnitude of expected frequencies. Variants of Pearson’s X2 statistic and their asymptotic distributions have been proposed to overcome these difficulties, but several approaches also exist for conducting exact tests. This paper shows that discrepant asymptotic and exact results can occur whether expected frequencies are large or small: any inaccuracy in asymptotic p values is instead caused by idiosyncrasies of the discrete distribution of X2. More importantly, discrepancies are also artificially created by the hypergeometric sampling model used to perform exact tests. Exact computations under the alternative full-multinomial or product-multinomial models require eliminating nuisance parameters, and we propose a novel method that integrates them out. The resultant exact distributions are very accurately approximated by the asymptotic distribution, which eliminates concerns about the accuracy of the latter. We also argue that the two-stage approach that tests for significance of residuals conditional on a significant X2 test is inadvisable and that an alternative single-stage test preserves Type I error rates and further eliminates concerns about asymptotic accuracy.
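
A minimal sketch (assuming numpy/scipy; this is not the paper's method of integrating out nuisance parameters) of the underlying comparison: the asymptotic chi-square p value for Pearson's X2 on a 2×2 table versus a Monte Carlo p value computed under a full-multinomial sampling model, with the nuisance cell probabilities simply plugged in from the observed margins. The table and seed are illustrative.

```python
# Compare the asymptotic chi-square p value with a Monte Carlo p value
# under a plug-in full-multinomial null (a sketch, not the paper's method).
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
observed = np.array([[12, 5], [7, 16]])   # illustrative table
n = observed.sum()

# Asymptotic p value: Pearson's X2 against its chi-square reference distribution.
x2_obs, p_asymptotic, _, _ = chi2_contingency(observed, correction=False)

def pearson_x2(table):
    rows = table.sum(axis=1, keepdims=True)
    cols = table.sum(axis=0, keepdims=True)
    expected = rows * cols / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# Null cell probabilities under independence, estimated from the margins.
row_p = observed.sum(axis=1) / n
col_p = observed.sum(axis=0) / n
null_probs = np.outer(row_p, col_p).ravel()

# Monte Carlo p value: resample whole tables of size n from the multinomial
# null and compare their X2 values with the observed one.
reps, exceed = 20000, 0
for _ in range(reps):
    table = rng.multinomial(n, null_probs).reshape(2, 2)
    # Skip degenerate tables with an empty margin (vanishingly rare at this n).
    if (table.sum(axis=1) == 0).any() or (table.sum(axis=0) == 0).any():
        continue
    if pearson_x2(table) >= x2_obs:
        exceed += 1
p_multinomial = exceed / reps

print(f"asymptotic p = {p_asymptotic:.4f}, multinomial Monte Carlo p = {p_multinomial:.4f}")
```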

Methodology ◽  
2015 ◽  
Vol 11 (2) ◽  
pp. 65-79 ◽  
Author(s):  
Geert H. van Kollenburg ◽  
Joris Mulder ◽  
Jeroen K. Vermunt

The application of latent class (LC) analysis involves evaluating the LC model using goodness-of-fit statistics. To assess the misfit of a specified model, say with the Pearson chi-squared statistic, a p-value can be obtained using an asymptotic reference distribution. However, asymptotic p-values are not valid when the sample size is not large and/or the analyzed contingency table is sparse. Another problem is that for various other conceivable global and local fit measures, asymptotic distributions are not readily available. An alternative way to obtain the p-value for the statistic of interest is to construct its empirical reference distribution using resampling techniques such as the parametric bootstrap or the posterior predictive check (PPC). In the current paper, we show how to apply the parametric bootstrap and two versions of the PPC to obtain empirical p-values for a number of commonly used global and local fit statistics within the context of LC analysis. The main difference between the PPC using test statistics and the parametric bootstrap is that the former takes parameter uncertainty into account. The PPC using discrepancies has the advantage of being computationally much less intensive than the other two resampling methods. In a Monte Carlo study we evaluated the Type I error rates and power of these resampling methods when used for global and local goodness-of-fit testing in LC analysis. Results show that both the bootstrap and the PPC using test statistics are generally good alternatives to asymptotic p-values and can also be used when (asymptotic) distributions are not known. Nominal Type I error rates were not met when the sample size was small and the contingency table had many cells. Overall, the PPC using test statistics was somewhat more conservative than the parametric bootstrap. We also replicated previous research suggesting that the Pearson χ2 statistic should in many cases be preferred over the likelihood-ratio G2 statistic. Power to reject a model with one fewer LC than in the population was very high, unless the sample size was small. When the contingency tables were very sparse, the total bivariate residual (TBVR) statistic, which is based on bivariate relationships, still had very high power, signifying its usefulness in assessing model fit.
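
A minimal sketch of the parametric bootstrap p-value logic, using an assumed toy model (independence in a two-way table, where "refitting" just re-estimates the margins) in place of a full EM-based LC fit:

```python
# Parametric bootstrap p value: simulate from the fitted null model, refit on
# each replicate, recompute the statistic (toy model, not the paper's LC code).
import numpy as np

rng = np.random.default_rng(7)

def pearson_x2(table):
    rows = table.sum(axis=1, keepdims=True)
    cols = table.sum(axis=0, keepdims=True)
    expected = rows * cols / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def fitted_probs(table):
    # ML estimate of cell probabilities under the H0 (independence) model.
    n = table.sum()
    return np.outer(table.sum(axis=1) / n, table.sum(axis=0) / n)

observed = np.array([[20, 4, 6], [10, 12, 8]])   # illustrative data
n = observed.sum()
t_obs = pearson_x2(observed)
probs = fitted_probs(observed).ravel()

B, exceed = 2000, 0
for _ in range(B):
    # Simulate a replicate dataset from the *fitted* null model ...
    boot = rng.multinomial(n, probs).reshape(observed.shape)
    if (boot.sum(axis=1) == 0).any() or (boot.sum(axis=0) == 0).any():
        continue   # degenerate margin; vanishingly rare at this n
    # ... and recompute the statistic; pearson_x2 re-estimates the margins,
    # which is the "refit" step for this toy model.
    if pearson_x2(boot) >= t_obs:
        exceed += 1

# Add-one correction keeps the bootstrap p value away from exactly zero.
p_boot = (exceed + 1) / (B + 1)
print(f"X2 = {t_obs:.2f}, parametric bootstrap p = {p_boot:.4f}")
```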


1982 ◽  
Vol 51 (3) ◽  
pp. 683-692
Author(s):  
John E. Overall ◽  
Stephen J. O'Keefe ◽  
Robert R. Starbuck

A method of controlling for the effects of a nuisance variable in testing the significance of treatment effects on a discrete binary response is described. Proportions of “success” responses in two treatment groups are standardized relative to an estimate of the sampling variance at each level of the concomitant variable, and an unweighted-means analysis of variance is used to test the main effect for treatments and the interaction of treatments × levels. Exact calculations and Monte Carlo results are presented which show the proposed F tests to have actual Type I error probabilities that are closer to the nominal alpha level than is true for alternative tests. The actual Type I error rates are less seriously affected by differences in marginal probabilities of “success” and “failure” responses than is true for other tests, and in the face of small cell frequencies the standardized-means analysis of variance appears to have substantially greater power than the other tests most commonly used with 2 × 2 × k contingency tables.
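
A rough sketch of the standardize-then-combine idea, under the simplifying assumption that each level's treatment difference is standardized by its estimated standard error and the resulting z scores are combined with equal weight; the paper's actual unweighted-means F construction differs in detail:

```python
# Standardize the treatment difference at each level of the nuisance variable,
# then combine with equal weights (a simplification of the paper's approach).
import numpy as np
from scipy.stats import norm

# Successes / trials for treatments 1 and 2 at each of k = 3 levels (made up).
x1, n1 = np.array([8, 12, 20]), np.array([20, 25, 40])
x2, n2 = np.array([4, 10, 12]), np.array([20, 25, 40])

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se            # each approximately N(0, 1) under H0

# Equal-weight (Stouffer-style) combination across levels, so no single level
# dominates by virtue of its sample size -- the "unweighted" spirit.
z_combined = z.sum() / np.sqrt(len(z))
p_value = 2 * norm.sf(abs(z_combined))
print(f"per-level z: {np.round(z, 2)}, combined z = {z_combined:.2f}, p = {p_value:.4f}")
```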


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0242722
Author(s):  
Zhiming Li ◽  
Changxing Ma ◽  
Mingyao Ai

This paper proposes asymptotic and exact methods for testing the equality of correlations for multiple bilateral data under Dallal’s model. Three asymptotic test statistics are derived for large samples. Since these are not suitable for small samples, several conditional and unconditional exact methods are proposed based on the three statistics. Numerical studies are conducted to compare all these methods with regard to Type I error rates (TIEs) and powers. The results show that the asymptotic score test is the most robust, and two exact tests have satisfactory TIEs and powers. Some real examples are provided to illustrate the effectiveness of these tests.
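
A schematic of how such numerical TIE studies are typically run, using a generic two-sample test as a stand-in (Dallal's model and the authors' statistics are not reproduced here): simulate many datasets under H0, apply the test at the nominal alpha, and report the empirical rejection rate.

```python
# Monte Carlo estimate of a test's Type I error rate under a true null.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, reps, n = 0.05, 10000, 15

rejections = 0
for _ in range(reps):
    a = rng.normal(size=n)   # both groups drawn from the same distribution,
    b = rng.normal(size=n)   # so H0 (equal means) holds by construction
    if ttest_ind(a, b).pvalue < alpha:
        rejections += 1

# A well-calibrated test keeps this near alpha; Monte Carlo error is roughly
# sqrt(alpha * (1 - alpha) / reps), about 0.002 here.
print(f"empirical Type I error rate: {rejections / reps:.4f} (nominal {alpha})")
```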


2020 ◽  
Author(s):  
Keith Lohse ◽  
Kristin Sainani ◽  
J. Andrew Taylor ◽  
Michael Lloyd Butson ◽  
Emma Knight ◽  
...  

Magnitude-based inference (MBI) is a controversial statistical method that has been used in hundreds of papers in sports science despite criticism from statisticians. To better understand how this method has been applied in practice, we systematically reviewed 232 papers that used MBI. We extracted data on study design, sample size, and choice of MBI settings and parameters. Median sample size was 10 per group (interquartile range, IQR: 8-15) for multi-group studies and 14 (IQR: 10-24) for single-group studies; few studies (15%) reported a priori sample size calculations. Authors predominantly applied MBI’s default settings and chose “mechanistic/non-clinical” rather than “clinical” MBI even when testing clinical interventions (only 14 of the 232 studies used clinical MBI). Using these data, we can estimate the Type I error rates for the typical MBI study. Authors frequently made dichotomous claims about effects based on the MBI criterion of a “likely” effect and sometimes based on the MBI criterion of a “possible” effect. When the sample size is n = 8 to 15 per group, these inferences have Type I error rates of 12%-22% and 22%-45%, respectively. High Type I error rates were compounded by multiple testing: authors reported results from a median of 30 outcome-related tests, and few studies (14%) specified a primary outcome. We conclude that MBI has promoted small studies, promulgated a “black box” approach to statistics, and led to numerous papers whose conclusions are not supported by the data. Amidst debates over the role of p-values and significance testing in science, MBI also provides an important natural experiment: we find no evidence that moving researchers away from p-values or null hypothesis significance testing makes them less prone to dichotomization or over-interpretation of findings.


2018 ◽  
Vol 8 (2) ◽  
pp. 58-71
Author(s):  
Richard L. Gorsuch ◽  
Curtis Lehmann

Approximations for the Chi-square and F distributions can both be used to compute a p-value, or probability of Type I error, for evaluating statistical significance. Although Chi-square has traditionally been used for tests of count data and nominal or categorical criterion variables (such as contingency tables) and F ratios for tests of non-nominal or continuous criterion variables (such as regression and analysis of variance), we demonstrate that either statistic can be applied in both situations. We used data simulation studies to examine when one statistic may be more accurate than the other for estimating Type I error rates across different types of analysis (count data/contingencies, dichotomous, and non-nominal) and across sample sizes (Ns) ranging from 20 to 160, using 25,000 replications to simulate p-values derived from either Chi-squares or F ratios. Our results showed that p-values derived from F ratios were generally closer to nominal Type I error rates than those derived from Chi-squares, and were more consistent for contingency-table count data. For Ns below 100, the smaller the N, the more the p-values derived from Chi-squares departed from the nominal p-value; only when the N exceeded 80 did the p-values from Chi-square tests become as accurate as those derived from F ratios in reproducing the nominal p-values. Thus, there was no evidence of any need for special treatment of dichotomous dependent variables. The most accurate and consistent p-values were derived from F ratios. We conclude that Chi-square should generally be replaced with the F ratio as the statistic of choice and that the Chi-square test should be taught only as history.
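
A minimal sketch (assumed simulation settings) of this kind of comparison for a dichotomous outcome: the Pearson chi-square p-value for the 2×2 table versus the F-test (one-way ANOVA) p-value on the 0/1-coded responses, with empirical Type I error rates for both under a true null.

```python
# Compare chi-square and F-test Type I error rates for a dichotomous outcome.
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

rng = np.random.default_rng(3)
alpha, reps, n_per_group, p_success = 0.05, 10000, 20, 0.5

rej_chi2 = rej_f = 0
for _ in range(reps):
    g1 = rng.binomial(1, p_success, n_per_group)
    g2 = rng.binomial(1, p_success, n_per_group)
    table = np.array([[g1.sum(), n_per_group - g1.sum()],
                      [g2.sum(), n_per_group - g2.sum()]])
    if (table.sum(axis=0) == 0).any():
        continue   # degenerate table (an all-zero column); vanishingly rare
    _, chi_p, _, _ = chi2_contingency(table, correction=False)
    rej_chi2 += chi_p < alpha
    rej_f += f_oneway(g1, g2).pvalue < alpha   # ANOVA on the 0/1 responses

print(f"empirical Type I error: chi-square {rej_chi2 / reps:.4f}, "
      f"F test {rej_f / reps:.4f} (nominal {alpha})")
```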


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical Type I error rates for the cut-offs 2/3 and 4/7 in Tables 4, 5 and 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases, the representation of numeric values in SAS resulted in wrong categorization due to a numeric representation error in computed differences. We corrected the simulation by using the round function of SAS in the calculation process, with the same seeds as before. For Table 4, the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5, the value for the cut-off 4/7 changes from 0.144729 to 0.139626 and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6, the value for the cut-off 4/7 changes from 0.125528 to 0.122144 and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141, “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).”, has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. All changes were smaller than 0.03 and do not affect the interpretation of the results or our recommendations.
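
An analogous illustration in Python (the erratum concerns SAS, but the floating-point pitfall is identical): a value that is mathematically equal to a cut-off can carry a tiny representation error and land on the wrong side of a comparison; rounding before comparing, as the authors did with SAS's round function, restores the intended categorization.

```python
# Floating-point representation error flips a cut-off comparison.
x = 0.1 + 0.2                 # mathematically 0.3, stored as 0.30000000000000004

print(x == 0.3)               # False: representation error in the computed value
print(x <= 0.3)               # False: categorized on the wrong side of the cut-off
print(round(x, 12) <= 0.3)    # True: rounding restores the intended category
```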


2021 ◽  
pp. 001316442199489
Author(s):  
Luyao Peng ◽  
Sandip Sinharay

Wollack et al. (2015) suggested the erasure detection index (EDI) for detecting fraudulent erasures for individual examinees. Wollack and Eckerly (2017) and Sinharay (2018) extended the index of Wollack et al. (2015) to suggest three EDIs for detecting fraudulent erasures at the aggregate or group level. This article follows up on the research of Wollack and Eckerly (2017) and Sinharay (2018) and suggests a new aggregate-level EDI by incorporating the empirical best linear unbiased predictor from the linear mixed-effects model literature (e.g., McCulloch et al., 2008). A simulation study shows that the new EDI has greater power than the indices of Wollack and Eckerly (2017) and Sinharay (2018), while maintaining satisfactory Type I error rates. A real data example is also included.
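
A minimal sketch on made-up data (not the authors' EDI formulation) of the key ingredient: the empirical best linear unbiased predictor (EBLUP) of a group-level random effect, obtained here from a statsmodels random-intercept model. Groups with unusually large EBLUPs are the ones an aggregate-level index would flag.

```python
# Extract per-group EBLUPs from a random-intercept model and flag outliers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_groups, n_per_group = 30, 25

group = np.repeat(np.arange(n_groups), n_per_group)
effects = rng.normal(0.0, 0.5, n_groups)     # group-level random intercepts
effects[0] += 2.0                            # one aberrant group
y = 1.0 + effects[group] + rng.normal(0.0, 1.0, group.size)
data = pd.DataFrame({"y": y, "group": group})

# Random-intercept model; result.random_effects holds the per-group EBLUPs.
model = smf.mixedlm("y ~ 1", data, groups=data["group"])
result = model.fit()
eblup = pd.Series({g: re.iloc[0] for g, re in result.random_effects.items()})

# Flag groups whose predicted effect is far above the rest (ad hoc threshold).
flagged = eblup[eblup > eblup.mean() + 2 * eblup.std()]
print(flagged)   # group 0 should typically stand out
```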


2001 ◽  
Vol 26 (1) ◽  
pp. 105-132 ◽  
Author(s):  
Douglas A. Powell ◽  
William D. Schafer

The robustness literature for the structural equation model was synthesized following the method of Harwell, which employs meta-analysis as developed by Hedges and Vevea. The study focused on explaining empirical Type I error rates for six principal classes of estimators: two that assume multivariate normality (maximum likelihood and generalized least squares), elliptical estimators, two distribution-free estimators (asymptotic and others), and latent projection. Generally, the chi-square tests for overall model fit were found to be sensitive to non-normality and to model size for all estimators (with the possible exception of the elliptical estimators with respect to model size and the latent projection techniques with respect to non-normality). The asymptotic distribution-free (ADF) and latent projection techniques were also found to be sensitive to sample size. Distribution-free methods other than ADF showed, in general, much less sensitivity to all factors considered.

