Model Selection and Model Averaging for Mixed-Effects Models with Crossed Random Effects for Subjects and Items

Author(s):  
José Á. Martínez-Huertas ◽  
Ricardo Olmos ◽  
Emilio Ferrer

2020 ◽  
Vol 12 (1) ◽  
pp. 114-137 ◽  
Author(s):  
Corrine Occhino ◽  
Benjamin Anible ◽  
Jill P. Morford

Iconicity has traditionally been considered an objective, fixed, unidimensional property of language forms, often operationalized as transparency for experimental purposes. Within a Cognitive Linguistics framework, iconicity is a mapping between an individual's construal of form and construal of meaning, such that iconicity is subjective, dynamic, and multidimensional. We test the latter alternative by asking signers who differed in ASL proficiency to complete a handshape monitoring task in which we manipulated the number of form–meaning construals in which target handshapes participated. We estimated the interaction of iconicity, proficiency, and construal density using mixed-effects models for response time and accuracy with crossed random effects for participants and items. Results show a significant three-way interaction between iconicity, proficiency, and construal density: less-proficient signers detected handshapes faster in more iconic signs than in less iconic signs regardless of the handshape they were monitoring, whereas highly proficient signers' performance was improved by iconicity only for handshapes that participate in many construals. Taken in conjunction with growing evidence of the subjectivity of iconicity, we interpret these results as support for the claim that construal is a core mechanism underlying iconicity, for both transparent and systematic language-internal form–meaning mappings.
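
The analysis described in this abstract corresponds to a standard crossed random-effects specification, for example in R with lme4. The sketch below is only illustrative: the data frame d, its column names, and the intercepts-only random structure are assumptions, since the abstract does not report the authors' exact model.

```r
library(lme4)

# Hypothetical long-format data `d`: one row per trial, with
#   rt           response time
#   acc          accuracy (0 = error, 1 = correct)
#   iconicity, proficiency, density   centered predictors
#   participant, item                 crossed grouping factors

# Response time: linear mixed model with crossed random intercepts
m_rt <- lmer(rt ~ iconicity * proficiency * density +
               (1 | participant) + (1 | item),
             data = d)

# Accuracy: logistic mixed model with the same crossed structure
m_acc <- glmer(acc ~ iconicity * proficiency * density +
                 (1 | participant) + (1 | item),
               data = d, family = binomial)

summary(m_rt)  # the three-way interaction term carries the key test
```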


2021 ◽  
Author(s):  
Daniel W. Heck ◽  
Florence Bockting

Bayes factors allow researchers to test the effects of experimental manipulations in within-subjects designs using mixed-effects models. van Doorn et al. (2021) showed that such hypothesis tests can be performed by comparing different pairs of models which vary in the specification of the fixed- and random-effect structure for the within-subjects factor. To discuss the question of which of these model comparisons is most appropriate, van Doorn et al. used a case study to compare the corresponding Bayes factors. We argue that researchers should not only focus on pairwise comparisons of two nested models but rather use the Bayes factor for performing model selection among a larger set of mixed models that represent different auxiliary assumptions. In a standard one-factorial, repeated-measures design, the comparison should include four mixed-effects models: fixed-effects H0, fixed-effects H1, random-effects H0, and random-effects H1. In this way, the Bayes factor enables testing both the average effect of condition and the heterogeneity of effect sizes across individuals. Bayesian model averaging provides an inclusion Bayes factor which quantifies the evidence for or against the presence of an effect of condition while taking model-selection uncertainty about the heterogeneity of individual effects into account. We present a simulation study showing that model selection among a larger set of mixed models performs well in recovering the true, data-generating model.
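
The four-model comparison can be carried out with the BayesFactor package, which expresses random slopes as a condition-by-subject interaction term. Below is a minimal sketch, not the authors' own code: the data frame d and its columns rt, cond, and subj are hypothetical, and priors are left at the package defaults.

```r
library(BayesFactor)

d$cond <- factor(d$cond)
d$subj <- factor(d$subj)

# Four mixed-effects models for a one-factorial repeated-measures design.
# lmBF() reports each Bayes factor against the same intercept-only model.
bf_f0 <- lmBF(rt ~ subj, data = d,
              whichRandom = "subj")                      # fixed-effects H0
bf_f1 <- lmBF(rt ~ cond + subj, data = d,
              whichRandom = "subj")                      # fixed-effects H1
bf_r0 <- lmBF(rt ~ subj + cond:subj, data = d,
              whichRandom = c("subj", "cond:subj"))      # random-effects H0
bf_r1 <- lmBF(rt ~ cond + subj + cond:subj, data = d,
              whichRandom = c("subj", "cond:subj"))      # random-effects H1

# With equal prior model probabilities, posterior probabilities are
# proportional to the Bayes factors against the shared denominator.
bfs  <- c(extractBF(bf_f0)$bf, extractBF(bf_f1)$bf,
          extractBF(bf_r0)$bf, extractBF(bf_r1)$bf)
post <- bfs / sum(bfs)

# Inclusion Bayes factor for the effect of condition, averaging over
# the assumption about heterogeneity of individual effects:
bf_incl <- (post[2] + post[4]) / (post[1] + post[3])
```

A value of bf_incl above 1 indicates evidence for an effect of condition while remaining agnostic about whether the effect size varies across individuals.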


2017 ◽  
Author(s):  
Mirko Thalmann ◽  
Marcel Niklaus ◽  
Klaus Oberauer

The use of mixed-effects models and Bayesian statistics has been advocated by statisticians in recent years. Mixed-effects models allow researchers to adequately account for the structure of their data. Bayesian statistics, in contrast to frequentist statistics, can quantify the evidence in favor of or against an effect of interest. For frequentist methods, it is known that mixed models can lead to serious overestimation of the evidence in favor of an effect (i.e., an inflated Type-I error rate) when the models fail to include individual differences in the effect sizes of predictors ("random slopes") that are actually present in the data. Here, we show through simulation that the same problem exists for Bayesian mixed models. Yet, at present there is no easy-to-use application that allows for the estimation of Bayes factors for mixed models with random slopes on continuous predictors. We close this gap by introducing a new R package called BayesRS. We tested its functionality in four simulation studies, which show that BayesRS offers a reliable and valid tool to compute Bayes factors. BayesRS also allows users to account for correlations between random effects. In a fifth simulation study we show, however, that doing so leads to a slight underestimation of the evidence in favor of an actually present effect. We therefore recommend modeling correlations between random effects only when they are of primary interest and when the sample size is large enough. BayesRS is available at https://cran.r-project.org/web/packages/BayesRS/; R code for all simulations is available at https://osf.io/nse5x/?view_only=b9a7caccd26a4764a084de3b8d459388
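
For readers who want to see the kind of comparison BayesRS automates, a Bayes factor for a fixed effect with random slopes on a continuous predictor can also be computed with brms and bridge sampling. This is a generic illustration, not the BayesRS interface: the data frame d, the outcome y, the continuous predictor x, and the grouping factor subj are hypothetical, and the prior on the fixed effect is a placeholder.

```r
library(brms)

# Both models keep the by-subject random slope on the continuous
# predictor x, which guards against the inflated evidence described
# above; they differ only in the fixed effect of x.
m1 <- brm(y ~ x + (1 + x | subj), data = d,
          prior = prior(normal(0, 1), class = "b"),
          save_pars = save_pars(all = TRUE))
m0 <- brm(y ~ 1 + (1 + x | subj), data = d,
          save_pars = save_pars(all = TRUE))

# Bayes factor for the fixed effect of x via bridge sampling
bayes_factor(m1, m0)
```

Because the resulting Bayes factor depends on the prior placed on the fixed effect, the normal(0, 1) prior above should be replaced by one scaled to the outcome at hand.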

