When fixed and random effects mismatch: Implications for model comparisons
Mixed-effects models containing both fixed and random effects have become widely used in the cognitive sciences, as they are particularly appropriate for the analysis of clustered data. However, testing hypotheses in the presence of random effects is not completely straightforward, and a set of best practices for statistical inference in mixed-effects models is still lacking. Van Doorn et al. (2021) investigated how Bayesian hypothesis testing in mixed-effects models is affected by particular model specifications. Here, we extend their work to the more complex case of models with three-level factorial predictors and, more generally, with multiple correlated predictors. We show how non-maximal models with correlated predictors contain 'mismatches' between fixed and random effects, in which the same predictor can refer to different effects in the fixed and random parts of a model. We then demonstrate, through a series of Bayesian model comparisons, that such mismatches can lead to inaccurate estimates of random variance, and in turn to biases in the assessment of evidence for the effect of interest. We present specific recommendations for how researchers can resolve mismatches or avoid them altogether: by fitting maximal models, by eliminating correlations between predictors, or by residualising the random effects. Our results reinforce the observation that model comparisons with mixed-effects models can be surprisingly intricate, and they highlight that researchers should carefully and explicitly consider which hypotheses are being tested by each model comparison. Data and code are publicly available in an OSF repository at https://osf.io/njaup.
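To make one of the recommendations concrete, the following is a minimal sketch, in Python rather than the authors' own (unspecified) analysis code, of how correlations between predictors arise from the coding of a three-level factor and how an orthogonal coding eliminates them. The data are hypothetical (a balanced factor with levels A, B, C); this illustrates only the "eliminating correlations between predictors" idea, not the paper's Bayesian model comparisons.

```python
import numpy as np

# Hypothetical balanced dataset: a three-level factor, 50 observations per level.
levels = np.repeat(["A", "B", "C"], 50)

# Treatment (dummy) coding yields two correlated predictor columns:
# the same column name can then pick out different effects in the fixed
# and random parts of a non-maximal model.
d1 = (levels == "B").astype(float)
d2 = (levels == "C").astype(float)
r_treatment = np.corrcoef(d1, d2)[0, 1]
print(f"treatment coding: r = {r_treatment:.2f}")  # -0.50 with balanced data

# Orthogonal (Helmert-style) contrasts give uncorrelated columns,
# removing the mismatch at its source.
c1 = np.select([levels == "A", levels == "B"], [-1.0, 1.0], default=0.0)
c2 = np.select([levels == "A", levels == "B"], [-1.0, -1.0], default=2.0)
r_helmert = np.corrcoef(c1, c2)[0, 1]
print(f"Helmert coding:   r = {r_helmert:.2f}")  # 0.00 with balanced data
```

With unbalanced data even the Helmert columns would correlate somewhat, which is why the paper's other two recommendations (maximal models, residualising the random effects) remain relevant.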