Equivalence Testing for Multiple Regression
Abstract

Psychological research is rife with studies that inappropriately conclude no association between a predictor and the outcome in regression models following statistically nonsignificant results. This approach is methodologically flawed because failing to reject the null hypothesis with a traditional, difference-based test does not mean the null hypothesis is true (i.e., that there is no relationship). This flawed practice floods the literature with incorrect conclusions. This thesis introduces a novel, methodologically sound alternative: I demonstrate how equivalence testing can be applied to evaluate whether a predictor has a negligible effect on the outcome variable in multiple regression. I conducted a simulation study to evaluate the performance (i.e., power and error rates) of two equivalence-based tests and compared them to the common but inappropriate practice of concluding no effect after failing to reject the null hypothesis of the traditional test. I also provide two R functions to accompany this thesis, supplying researchers with open-access, easy-to-use tools that they can flexibly adopt in their own research. The use of the proposed equivalence-based methods and R functions is illustrated with examples from the literature, and recommendations for reporting and interpreting results are discussed. My results demonstrate that tests of equivalence, rather than the traditional test, are the appropriate statistical choice: they show high rates of correct conclusions, especially at larger sample sizes, and low rates of incorrect conclusions, whereas the traditional method shows unacceptably high rates of incorrect conclusions.
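To make the approach concrete, the following is a minimal sketch in R of a two one-sided tests (TOST) equivalence procedure for a single regression coefficient, assuming an equivalence bound delta stated on the raw scale of the coefficient. The function name equiv_test_coef is hypothetical for illustration only; it is not one of the thesis's accompanying R functions.

    ## Hypothetical sketch: TOST equivalence test for one coefficient of an
    ## lm() fit. Concludes the effect is negligible if |beta| < delta is
    ## supported, i.e., both one-sided tests reject at level alpha.
    equiv_test_coef <- function(model, term, delta, alpha = 0.05) {
      est <- coef(summary(model))[term, "Estimate"]
      se  <- coef(summary(model))[term, "Std. Error"]
      df  <- df.residual(model)
      t_lower <- (est + delta) / se    # tests H0: beta <= -delta
      t_upper <- (est - delta) / se    # tests H0: beta >=  delta
      p_lower <- pt(t_lower, df, lower.tail = FALSE)
      p_upper <- pt(t_upper, df, lower.tail = TRUE)
      p_tost  <- max(p_lower, p_upper) # equivalence declared if < alpha
      list(estimate = est, p_tost = p_tost, equivalent = p_tost < alpha)
    }

    ## Example with simulated data: x2 has a truly negligible effect.
    set.seed(1)
    d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    d$y <- 0.5 * d$x1 + 0.02 * d$x2 + rnorm(200)
    fit <- lm(y ~ x1 + x2, data = d)
    equiv_test_coef(fit, "x2", delta = 0.2)

Note the logic reversal relative to the traditional test: here a small p value supports the conclusion of a negligible effect, whereas simply failing to reject the traditional null does not.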