Analyzing Group Data

2021, pp. 121-142
Author(s): Charles Auerbach

This chapter covers tests of statistical significance that can be used to compare data across phases. These tests are used to determine whether observed outcomes are likely the result of an intervention or merely the result of chance. The purpose of a statistical test is to determine how likely it is that the analyst is making an incorrect decision by rejecting the null hypothesis and accepting the alternative one. A number of tests of significance are presented in this chapter: statistical process control (SPC) charts, proportion/frequency, chi-square, the conservative dual criteria (CDC), robust conservative dual criteria (RCDC), the t test, and analysis of variance (ANOVA). How and when to use each of these is also discussed. Methods for transforming autocorrelated data and merging data sets are presented as well. Once new data sets are created using the Append() function, they can be tested for Type I error using the techniques discussed in the chapter.
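As a rough illustration of the workflow described here, merged phase data can be compared with a t test in base R. This sketch uses invented scores and base R functions rather than the chapter's Append() function, whose actual use the chapter itself documents:

```r
# Hypothetical baseline (A) and intervention (B) phase scores
baseline <- c(7, 8, 6, 9, 7, 8)
intervention <- c(5, 4, 6, 3, 4, 5)

# Merge the phases into a single data set, tagging each observation
merged <- data.frame(
  score = c(baseline, intervention),
  phase = rep(c("A", "B"), times = c(length(baseline), length(intervention)))
)

# Compare phases; a small p-value suggests the difference is unlikely
# to be due to chance alone
t.test(score ~ phase, data = merged)
```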

2021, pp. 90-120
Author(s): Charles Auerbach

This chapter covers tests of statistical significance that can be used to compare data across phases. These tests are used to determine whether observed outcomes are likely the result of an intervention or merely the result of sampling error or chance. The purpose of a statistical test is to determine how likely it is that the analyst is making an incorrect decision by rejecting the null hypothesis, that there is no difference between compared phases, and accepting the alternative one, that true differences exist. A number of tests of significance are presented in this chapter: statistical process control (SPC) charts, proportion/frequency, chi-square, the conservative dual criteria (CDC), robust conservative dual criteria (RCDC), the t test, and analysis of variance (ANOVA). How and when to use each of these is also discussed, and examples are provided to illustrate each. The method for transforming autocorrelated data and merging data sets is discussed further in the context of using transformed data sets to test for Type I error.
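The underlying idea of the transformation step can be sketched in base R: check the lag-1 autocorrelation, transform (here by first differencing, one common option, not necessarily the chapter's own procedure), and check again. The data are invented for illustration:

```r
# Hypothetical phase scores with visible serial dependence
scores <- c(3, 4, 4, 5, 6, 6, 7, 8, 8, 9)

# Lag-1 autocorrelation of the raw series
acf(scores, lag.max = 1, plot = FALSE)

# First differencing often reduces autocorrelation
transformed <- diff(scores)
acf(transformed, lag.max = 1, plot = FALSE)
```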


2020, Vol 42 (15), pp. 3002-3011
Author(s): Hasan Rasay, Hossein Arshad

Many processes have a quality characteristic that does not follow a normal distribution, and the conditions for applying the central limit theorem are not satisfied, for example because collecting data in subgroups is impossible or the distribution is highly skewed. Researchers have therefore developed control charts based on the specific distribution that models the quality characteristic. In this paper, control charts are designed to monitor an exponentially distributed lifetime. Life testing is conducted under failure censoring with replacement: whenever an item fails during the test, it is replaced by a new one, so the total number of items under inspection remains constant. Under these conditions, the elapsed time until the rth failure follows an Erlang distribution. Using the relationship between the Erlang and chi-square distributions, the chart limits are computed to satisfy a specified Type I error rate. Examples are presented, and average run length curves are derived for the one-sided and two-sided control charts. A comparative study is also conducted to show the performance and superiority of the proposed control charts.
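The limit computation follows from the fact that if the time T to the rth failure is Erlang with rate λ₀, then 2λ₀T is chi-square distributed with 2r degrees of freedom. A minimal sketch in R, with illustrative values rather than the authors' design parameters:

```r
# Two-sided control limits for the time to the r-th failure when
# lifetimes are exponential with in-control rate lambda0.
# 2 * lambda0 * T ~ chi-square with 2r degrees of freedom.
erlang_limits <- function(r, lambda0, alpha = 0.0027) {
  lcl <- qchisq(alpha / 2, df = 2 * r) / (2 * lambda0)
  ucl <- qchisq(1 - alpha / 2, df = 2 * r) / (2 * lambda0)
  c(LCL = lcl, UCL = ucl)
}

# Example: time to the 5th failure, in-control failure rate 0.1
erlang_limits(r = 5, lambda0 = 0.1)
```

A one-sided chart aimed at detecting an increased failure rate would use only the lower limit, computed at the full alpha.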


2021, pp. 18-30
Author(s): Charles Auerbach

In this chapter, readers are given step-by-step instructions on how to access the software necessary to use SSD for R. They are also presented with a brief overview of the capabilities of the SSD for R package. These include basic graphing functions, descriptive statistics, many effect size functions, autocorrelation, regression, statistical process control charts, hypothesis testing, and functions associated with analyzing group data. In combination, R, RStudio, and SSD for R, all of which are freely available, provide a robust way to analyze single-system research data. This chapter demonstrates how to download the necessary software and provides an overview of the visual and statistical capability available with SSD for R.
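Assuming the package's CRAN name is SSDforR, a typical setup once R and RStudio are installed looks like this:

```r
# Install once from CRAN, then load in each session
install.packages("SSDforR")
library(SSDforR)
```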


2018, Vol 8 (2), pp. 58-71
Author(s): Richard L. Gorsuch, Curtis Lehmann

Approximations for Chi-square and F distributions can both be computed to provide a p-value, or probability of Type I error, to evaluate statistical significance. Although Chi-square has traditionally been used for tests of count data and nominal or categorical criterion variables (such as contingency tables) and F ratios for tests of non-nominal or continuous criterion variables (such as regression and analysis of variance), we demonstrate that either statistic can be applied in both situations. We used data simulation studies to examine when one statistic may be more accurate than the other for estimating Type I error rates across different types of analysis (count data/contingencies, dichotomous, and non-nominal) and across sample sizes (Ns) ranging from 20 to 160, using 25,000 replications to simulate p-values derived from either Chi-squares or F ratios. Our results showed that p-values derived from F ratios were generally closer to nominal Type I error rates than those derived from Chi-squares, and they were more consistent for contingency table count data. Below an N of 100, the smaller the N, the more the p-values derived from Chi-squares diverged from the nominal p-value; only when the N was greater than 80 did p-values from Chi-square tests become as accurate as those derived from F ratios in reproducing the nominal p-values. Thus, there was no evidence of any need for special treatment of dichotomous dependent variables. The most accurate and/or consistent p-values were derived from F ratios. We conclude that Chi-square should generally be replaced with the F ratio as the statistic of choice and that the Chi-square test should be taught only as history.
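The style of simulation reported here is straightforward to reproduce. The sketch below, in R with design choices that are illustrative rather than the authors' exact conditions, estimates the empirical Type I error rate of the chi-square test on 2 × 2 count data generated under the null:

```r
# Monte Carlo estimate of the empirical Type I error rate of the
# chi-square test on 2x2 contingency tables when H0 is true.
set.seed(1)
n <- 40          # sample size per replication
reps <- 25000    # replications, matching the article's count
alpha <- 0.05

rejections <- replicate(reps, {
  x <- rbinom(n, 1, 0.5)
  y <- rbinom(n, 1, 0.5)   # independent of x, so H0 holds
  tab <- table(factor(x, levels = 0:1), factor(y, levels = 0:1))
  suppressWarnings(chisq.test(tab, correct = FALSE)$p.value) < alpha
})

mean(rejections)  # near 0.05 only if the test is well calibrated
```

Discrepancies between this proportion and the nominal 0.05 at small N are the inaccuracies the article documents.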


Genetics, 2002, Vol 160 (3), pp. 1113-1122
Author(s): A. F. McRae, J. C. McEwan, K. G. Dodds, T. Wilson, A. M. Crawford, et al.

The last decade has seen a dramatic increase in the number of livestock QTL mapping studies. The next challenge awaiting livestock geneticists is to determine the actual genes responsible for variation in economically important traits. With the advent of high-density single nucleotide polymorphism (SNP) maps, it may be possible to fine-map genes by exploiting linkage disequilibrium between genes of interest and adjacent markers. However, the extent of linkage disequilibrium (LD) is generally unknown for livestock populations. In this article, microsatellite genotype data are used to assess the extent of LD in two populations of domestic sheep. High levels of LD were found to extend for tens of centimorgans and to decline as a function of marker distance. However, LD was also frequently observed between unlinked markers. The prospects for LD mapping in livestock appear encouraging provided that Type I error can be minimized. Properties of the multiallelic LD coefficient D′ were also explored. D′ was found to be significantly related to marker heterozygosity, although the relationship did not appear to unduly influence the overall conclusions. Of potentially greater concern was the observation that D′ may be skewed when rare alleles are present. It is recommended that the statistical significance of LD be used in conjunction with coefficients such as D′ to determine the true extent of LD.
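For the biallelic case, D′ has a simple closed form; the multiallelic coefficient the authors examine is a frequency-weighted average of this quantity over allele pairs. A simplified biallelic sketch in R:

```r
# D-prime for two biallelic loci from the AB haplotype frequency
# and the two allele frequencies (biallelic simplification only).
d_prime <- function(pAB, pA, pB) {
  D <- pAB - pA * pB
  Dmax <- if (D >= 0) {
    min(pA * (1 - pB), (1 - pA) * pB)
  } else {
    min(pA * pB, (1 - pA) * (1 - pB))
  }
  D / Dmax
}

# Example: haplotype AB at 0.40 with allele frequencies 0.5 and 0.6
d_prime(pAB = 0.40, pA = 0.5, pB = 0.6)  # 0.10 / 0.20 = 0.5
```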


2021, pp. 019459982110133
Author(s): Ellen S. Deutsch, Sonya Malekzadeh, Cecelia E. Schmalbach

Simulation training has taken a prominent role in otolaryngology–head and neck surgery (OTO-HNS) as a means to ensure patient safety and quality improvement (PS/QI). While it is often equated with resident training, this tool has value in lifelong learning and extends beyond the individual otolaryngologist to include simulation-based learning for teams and health systems processes. Part III of this PS/QI primer provides an overview of simulation in medicine and specific applications within the field of OTO-HNS. The impact of simulation on PS/QI will be presented in an evidence-based fashion, including the use of run charts and statistical process control charts to assess the impact of simulation-guided initiatives. Last, steps in developing a simulation program focused on PS/QI will be outlined, with future opportunities for OTO-HNS simulation.
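As a small illustration of the first of those chart types, a run chart is simply the measure plotted in time order against its median centerline; the data here are invented:

```r
# Minimal run chart: monthly event counts with a median centerline
events <- c(12, 10, 11, 9, 10, 8, 7, 8, 6, 7, 5, 6)
plot(events, type = "b", xlab = "Month", ylab = "Events",
     main = "Run chart of monthly events")
abline(h = median(events), lty = 2)  # centerline at the median
```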


2001, Vol 26 (1), pp. 105-132
Author(s): Douglas A. Powell, William D. Schafer

The robustness literature for the structural equation model was synthesized following the method of Harwell, which employs meta-analysis as developed by Hedges and Vevea. The study focused on the explanation of empirical Type I error rates for six principal classes of estimators: two that assume multivariate normality (maximum likelihood and generalized least squares), elliptical estimators, two distribution-free estimators (asymptotic and others), and latent projection. Generally, the chi-square tests for overall model fit were found to be sensitive to non-normality and to the size of the model for all estimators (with the possible exception of the elliptical estimators with respect to model size and the latent projection techniques with respect to non-normality). The asymptotic distribution-free (ADF) and latent projection techniques were also found to be sensitive to sample size. Distribution-free methods other than ADF showed, in general, much less sensitivity to all factors considered.
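Empirical Type I error rates of the kind synthesized here come from fitting a correctly specified model to data simulated under the null and counting rejections of the chi-square fit test. A minimal sketch using the lavaan package (not one of the tools in the reviewed studies; the model and settings are illustrative):

```r
library(lavaan)

# Population model (H0 true) and the model to be fit
pop       <- 'f =~ 0.7*x1 + 0.7*x2 + 0.7*x3 + 0.7*x4'
fit_model <- 'f =~ x1 + x2 + x3 + x4'

set.seed(1)
pvals <- replicate(500, {
  d <- simulateData(pop, sample.nobs = 200)  # multivariate normal data
  fitMeasures(cfa(fit_model, data = d), "pvalue")
})

mean(pvals < 0.05)  # empirical Type I error rate of the ML chi-square
```

Replacing the normal data generator with a skewed one is what drives the sensitivity to non-normality reported above.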

