When fixed and random effects mismatch: Implications for model comparisons

2021 ◽  
Author(s):  
João Veríssimo

Mixed-effects models containing both fixed and random effects have become widely used in the cognitive sciences, as they are particularly appropriate for the analysis of clustered data. However, testing hypotheses in the presence of random effects is not completely straightforward, and a set of best practices for statistical inference in mixed-effects models is still lacking. Van Doorn et al. (2021) investigated how Bayesian hypothesis testing in mixed-effects models is impacted by particular model specifications. Here, we extend their work to the more complex case of models with three-level factorial predictors and, more generally, with multiple correlated predictors. We show how non-maximal models with correlated predictors contain 'mismatches' between fixed and random effects, in which the same predictor can refer to different effects in the fixed and random parts of a model. We then demonstrate through a series of Bayesian model comparisons that such mismatches can lead to inaccurate estimates of random variance and, in turn, to biases in the assessment of evidence for the effect of interest. We present specific recommendations for how researchers can resolve mismatches or avoid them altogether: by fitting maximal models, by eliminating correlations between predictors, or by residualising the random effects. Our results reinforce the observation that model comparisons with mixed-effects models can be surprisingly intricate, and highlight that researchers should carefully and explicitly consider which hypotheses are being tested by each model comparison. Data and code are publicly available in an OSF repository at https://osf.io/njaup.
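To make the mismatch concrete, here is a minimal sketch in lme4 syntax (the data frame `d`, response `rt`, correlated contrasts `c1`/`c2`, and grouping factor `subject` are all hypothetical, and the paper's own analyses are Bayesian model comparisons rather than lmer fits):

```r
library(lme4)

## Non-maximal model with correlated contrasts c1 and c2: in the fixed
## part the c1 coefficient is adjusted for c2, but the lone random c1
## slope is not, so "c1" names different effects in the two model parts.
m_mismatch <- lmer(rt ~ c1 + c2 + (1 + c1 | subject), data = d)

## Remedy 1: fit the maximal model, so both parts contain both contrasts.
m_maximal <- lmer(rt ~ c1 + c2 + (1 + c1 + c2 | subject), data = d)

## Remedy 2: residualise the predictor entering the random part on c2,
## so the random slope refers to the same adjusted effect as the fixed one.
d$c1_res <- resid(lm(c1 ~ c2, data = d))
m_resid  <- lmer(rt ~ c1 + c2 + (1 + c1_res | subject), data = d)
```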

Biometrics ◽  
2010 ◽  
Vol 67 (2) ◽  
pp. 495-503 ◽  
Author(s):  
Joseph G. Ibrahim ◽  
Hongtu Zhu ◽  
Ramon I. Garcia ◽  
Ruixin Guo

2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Maud Delattre ◽  
Marie-Anne Poursat

We consider the joint selection of fixed and random effects in general mixed-effects models. The interpretation of estimated mixed-effects models is challenging, since changing the structure of one set of effects can lead to different choices of important covariates in the model. We propose a stepwise selection algorithm that performs simultaneous selection of the fixed and random effects. It is based on Bayesian information criteria whose penalties are adapted to mixed-effects models. The proposed procedure performs model selection in both linear and nonlinear mixed-effects models. It should be used in the low-dimensional setting, where the number of covariates and the number of random effects are moderate with respect to the total number of observations. The performance of the algorithm is assessed via a simulation study, which also includes a comparison with alternative methods available in the literature. The use of the method is illustrated with a clinical study of the kinetics of an antibiotic agent.
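As an illustration of the general shape of such a procedure, the sketch below runs one crude selection step with lme4 and the standard BIC; the paper's algorithm uses penalties specifically adapted to mixed-effects models, and all names (`d`, `y`, `x1`, `x2`, `group`) are hypothetical:

```r
library(lme4)

## Candidate random-effects structures for a model with two covariates.
candidates <- list(
  y ~ x1 + x2 + (1 | group),
  y ~ x1 + x2 + (1 + x1 | group),
  y ~ x1 + x2 + (1 + x1 + x2 | group)
)

## Fit with maximum likelihood (REML = FALSE) so that criterion values
## remain comparable across later steps that change the fixed part too.
fits <- lapply(candidates, lmer, data = d, REML = FALSE)
best <- fits[[which.min(sapply(fits, BIC))]]

## A full stepwise pass would now drop fixed effects that raise the BIC,
## re-check the random part, and iterate until neither step changes.
```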


2018 ◽  
Author(s):  
Dale Barr ◽  
Roger Philip Levy ◽  
Christoph Scheepers ◽  
Harry Tily

Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond.
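In lme4 syntax, the contrast at issue looks as follows; this is a minimal sketch with hypothetical names (`d`, `rt`, `cond`, `subject`, `item`) for a within-subjects, within-items design:

```r
library(lme4)

## Random-intercepts-only model: assumes every subject and item shows
## the same effect of the manipulation; anti-conservative when they
## in fact vary in their sensitivity to it.
m_int <- lmer(rt ~ cond + (1 | subject) + (1 | item), data = d)

## Maximal model justified by the design: random slopes for the
## manipulation by both subjects and items.
m_max <- lmer(rt ~ cond + (1 + cond | subject) + (1 + cond | item),
              data = d)
```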


2017 ◽  
Author(s):  
Mirko Thalmann ◽  
Marcel Niklaus ◽  
Klaus Oberauer

Using mixed-effects models and Bayesian statistics has been advocated by statisticians in recent years. Mixed-effects models allow researchers to adequately account for the structure in the data. Bayesian statistics, in contrast to frequentist statistics, can state the evidence in favor of or against an effect of interest. For frequentist statistical methods, it is known that mixed models can lead to serious over-estimation of evidence in favor of an effect (i.e., an inflated Type-I error rate) when models fail to include individual differences in the effect sizes of predictors ("random slopes") that are actually present in the data. Here, we show through simulation that the same problem exists for Bayesian mixed models. Yet, at present there is no easy-to-use application that allows for the estimation of Bayes factors for mixed models with random slopes on continuous predictors. We close this gap by introducing a new R package called BayesRS. We tested its functionality in four simulation studies, which show that BayesRS offers a reliable and valid tool to compute Bayes factors. BayesRS also allows users to account for correlations between random effects. In a fifth simulation study we show, however, that doing so leads to a slight underestimation of the evidence in favor of an actually present effect. We only recommend modeling correlations between random effects when they are of primary interest and when the sample size is large enough. BayesRS is available at https://cran.r-project.org/web/packages/BayesRS/; R code for all simulations is available at https://osf.io/nse5x/?view_only=b9a7caccd26a4764a084de3b8d459388.
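For readers who want the flavour of such a comparison, here is a sketch written with brms and bridge sampling rather than BayesRS itself (whose interface we do not reproduce here); the names `d`, `y`, `x`, and `subject` are hypothetical:

```r
library(brms)

## Both models keep the random slope on the continuous predictor x, so
## the Bayes factor targets the fixed effect of x without the inflation
## of evidence described above. save_pars(all = TRUE) is required for
## bridge-sampling-based Bayes factors.
m1 <- brm(y ~ x + (1 + x | subject), data = d,
          save_pars = save_pars(all = TRUE))
m0 <- brm(y ~ 1 + (1 + x | subject), data = d,
          save_pars = save_pars(all = TRUE))

bayes_factor(m1, m0)  # BF10 for the fixed effect of x

## Per the fifth simulation study, consider (1 + x || subject) to drop
## the intercept-slope correlation when it is not of primary interest.
```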

