Mixed Effects Models: Hierarchical APC-Cross-Classified Random Effects Models (HAPC-CCREM), Part I: The Basics
2016, pp. 191-230
Author(s): Yang Yang, Kenneth C. Land

2017
Author(s): Mirko Thalmann, Marcel Niklaus, Klaus Oberauer

The use of mixed-effects models and Bayesian statistics has been advocated by statisticians in recent years. Mixed-effects models allow researchers to adequately account for the structure in the data. Bayesian statistics, in contrast to frequentist statistics, can quantify the evidence in favor of or against an effect of interest. For frequentist methods, it is known that mixed models can lead to serious over-estimation of the evidence in favor of an effect (i.e., an inflated Type-I error rate) when they fail to include individual differences in the effect sizes of predictors ("random slopes") that are actually present in the data. Here, we show through simulation that the same problem exists for Bayesian mixed models. Yet, at present there is no easy-to-use application that allows for the estimation of Bayes Factors for mixed models with random slopes on continuous predictors. Here, we close this gap by introducing a new R package called BayesRS. We tested its functionality in four simulation studies, which show that BayesRS offers a reliable and valid tool to compute Bayes Factors. BayesRS also allows users to account for correlations between random effects. In a fifth simulation study we show, however, that doing so leads to a slight underestimation of the evidence in favor of an actually present effect. We therefore recommend modeling correlations between random effects only when they are of primary interest and when the sample size is large enough. BayesRS is available at https://cran.r-project.org/web/packages/BayesRS/; R code for all simulations is available at https://osf.io/nse5x/?view_only=b9a7caccd26a4764a084de3b8d459388
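The core problem described above can be reproduced with a few lines of R. The following is a minimal sketch, not the BayesRS API itself: it simulates data in which subjects differ in their slopes for a continuous predictor whose population-level effect is zero, then fits a random-intercept-only model and a random-slope model with lme4. The variable names and simulation settings are illustrative assumptions.

```r
library(lme4)

set.seed(1)
n_subj <- 30; n_obs <- 20
d <- expand.grid(subj = 1:n_subj, obs = 1:n_obs)
d$subj <- factor(d$subj)
d$x <- rnorm(nrow(d))                           # continuous within-subject predictor
slope_i <- rnorm(n_subj, sd = 0.5)              # by-subject slopes; population effect of x is 0
d$y <- slope_i[as.integer(d$subj)] * d$x + rnorm(nrow(d))

m_no_slope <- lmer(y ~ x + (1 | subj), data = d, REML = FALSE)      # omits random slopes
m_slope    <- lmer(y ~ x + (1 + x | subj), data = d, REML = FALSE)  # includes random slopes

summary(m_no_slope)$coefficients   # test statistic for x is typically inflated here
summary(m_slope)$coefficients      # more conservative, appropriate test of x
```

Repeating this simulation many times, the intercept-only model flags an effect of x far more often than the nominal error rate, which is the inflation the abstract refers to; the BayesRS package targets the analogous problem for Bayes Factors on continuous predictors.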


Biometrics, 2010, Vol 67 (2), pp. 495-503
Author(s): Joseph G. Ibrahim, Hongtu Zhu, Ramon I. Garcia, Ruixin Guo

2015, Vol 26 (4), pp. 1838-1853
Author(s): Dongyuan Xing, Yangxin Huang, Henian Chen, Yiliang Zhu, Getachew A Dagne, ...

Semicontinuous data, featuring an excessive proportion of zeros together with right-skewed continuous positive values, arise frequently in practice. One example is substance abuse/dependence symptom data, for which a substantial proportion of the subjects investigated may report zero symptoms. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data within a Bayesian framework. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model for the occurrence of positive values, using a generalized logistic mixed-effects model (Part I); and (ii) a model for the intensity of positive values, using a linear mixed-effects model whose errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptom data from a longitudinal observational study, and the analytic results are reported by comparing candidate models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
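As a rough illustration of the two-part structure (not the authors' Bayesian skew-t implementation), the sketch below fits the two parts separately with lme4: a logistic mixed model for whether a positive value occurs, and a linear mixed model for the log of the positive values. Independent random effects and log-normal positive values are simplifying assumptions made here; the paper links the two parts through correlated random effects and uses skew-t or skew-normal errors. All variable names and settings are illustrative.

```r
library(lme4)

set.seed(2)
n_id <- 50; n_t <- 6
d <- expand.grid(id = 1:n_id, time = 0:(n_t - 1))
d$id <- factor(d$id)
b_i  <- rnorm(n_id, sd = 1)                                    # shared subject random effect
p_pos <- plogis(-0.5 + 0.2 * d$time + b_i[as.integer(d$id)])   # probability of a positive value
d$y  <- rbinom(nrow(d), 1, p_pos) *
        exp(rnorm(nrow(d), mean = 0.5 + 0.1 * d$time + b_i[as.integer(d$id)], sd = 0.8))

d$pos <- as.numeric(d$y > 0)

# Part I: occurrence of a positive value (logistic mixed-effects model)
part1 <- glmer(pos ~ time + (1 | id), data = d, family = binomial)

# Part II: intensity among positive values (linear mixed-effects model on the log scale)
part2 <- lmer(log(y) ~ time + (1 | id), data = subset(d, y > 0))

summary(part1)
summary(part2)
```

A joint fit of the kind proposed in the paper, with the two parts linked by correlated random effects and skewed error distributions, requires estimating both submodels simultaneously within a single Bayesian sampler rather than the two separate fits shown here.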


2021
Author(s): João Veríssimo

Mixed-effects models containing both fixed and random effects have become widely used in the cognitive sciences, as they are particularly appropriate for the analysis of clustered data. However, testing hypotheses in the presence of random effects is not completely straightforward, and a set of best practices for statistical inference in mixed-effects models is still lacking. Van Doorn et al. (2021) investigated how Bayesian hypothesis testing in mixed-effects models is affected by particular model specifications. Here, we extend their work to the more complex case of models with three-level factorial predictors and, more generally, with multiple correlated predictors. We show how non-maximal models with correlated predictors contain 'mismatches' between fixed and random effects, in which the same predictor can refer to different effects in the fixed and random parts of a model. We then demonstrate through a series of Bayesian model comparisons that such mismatches can lead to inaccurate estimates of the random-effects variance and, in turn, to biases in the assessment of evidence for the effect of interest. We present specific recommendations for how researchers can resolve mismatches or avoid them altogether: by fitting maximal models, by eliminating correlations between predictors, or by residualising the random effects. Our results reinforce the observation that model comparisons with mixed-effects models can be surprisingly intricate, and they highlight that researchers should carefully and explicitly consider which hypotheses are being tested by each model comparison. Data and code are publicly available in an OSF repository at https://osf.io/njaup.
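One of the recommendations above, fitting maximal models, can be written out in mixed-model formula syntax. The sketch below is illustrative only: it assumes a hypothetical three-level within-subject factor cond and a response rt, and uses frequentist lme4 fits, whereas the paper's own analyses are Bayesian model comparisons. In the non-maximal model the two contrasts of cond appear only in the fixed part; in the maximal model the same contrasts also appear in the random part, so fixed and random effects refer to the same predictors.

```r
library(lme4)

set.seed(3)
n_subj <- 40; n_rep <- 10
d <- expand.grid(subj = 1:n_subj, cond = c("a", "b", "c"), rep = 1:n_rep)
d$subj <- factor(d$subj)
d$cond <- factor(d$cond)
contrasts(d$cond) <- contr.sum(3)         # two sum-coded contrasts for the 3-level factor

subj_int  <- rnorm(n_subj, sd = 30)                            # by-subject intercepts
subj_cond <- matrix(rnorm(n_subj * 3, sd = 15), n_subj, 3)     # by-subject condition effects
d$rt <- 500 + subj_int[as.integer(d$subj)] +
        subj_cond[cbind(as.integer(d$subj), as.integer(d$cond))] +
        rnorm(nrow(d), sd = 50)

# Non-maximal: cond varies within subjects but receives no random slope, so
# by-subject condition variability is not represented in the random part.
m_nonmax <- lmer(rt ~ cond + (1 | subj), data = d)

# Maximal: the same two contrasts of cond enter both the fixed and random parts.
m_max <- lmer(rt ~ cond + (1 + cond | subj), data = d)
```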

