Bayes factor between Student t and Gaussian mixed models within an animal breeding context

2008
Vol 40 (4)
pp. 395
Author(s):
Joaquim Casellas
Noelia Ibáñez-Escriche
Luis García-Cortés
Luis Varona

2017
Author(s):
Julia M. Haaf
Jeffrey Rouder

Model comparison in Bayesian mixed models is becoming popular in psychological science. Here we develop a set of nested models that account for order restrictions across individuals in psychological tasks. An order-restricted model addresses a 'does everybody' question, as in 'Does everybody show the usual Stroop effect?' or 'Does everybody respond more quickly to intense noises than to subtle ones?' The crux of the modeling is the instantiation of tens or hundreds of order restrictions simultaneously, one for each participant. To our knowledge, the problem is intractable in frequentist contexts but relatively straightforward in Bayesian ones. We develop a Bayes factor model-comparison strategy using Zellner and colleagues' default g-priors appropriate for assessing whether effects obey equality and order restrictions. We apply the methodology to seven data sets from Stroop, Simon, and Eriksen interference tasks. Not too surprisingly, we find that everybody Stroops, that is, all people truly name congruent colors more quickly than incongruent ones. But, perhaps surprisingly, we find these order constraints are violated for some people in the Simon task; for these people, spatially incongruent responses truly occur more quickly than congruent ones! Implications of the modeling and conjectures about the task-related differences are discussed. This paper was written in R Markdown with the code for data analysis integrated into the text. The Markdown script is open and freely available at https://github.com/PerceptionAndCognitionLab/ctx-indiff. The data are also open and freely available at https://github.com/PerceptionCognitionLab/data0/tree/master/contexteffects.
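
A minimal sketch of the encompassing-prior idea behind such order-restricted Bayes factors, using simulated stand-in draws rather than the authors' actual models (their full analysis code is in the linked repository): the Bayes factor for "every individual effect is positive" against the unrestricted model is the posterior probability that all restrictions hold divided by the corresponding prior probability.

```r
## Encompassing-prior sketch for "does everybody?" order restrictions.
## All draws below are simulated stand-ins, not the authors' models.
set.seed(123)
I <- 20      # participants, each contributing one order restriction (theta_i > 0)
M <- 1e5     # Monte Carlo draws

## Hierarchical prior: person effects share a common mean, so the prior
## mass of "all theta_i > 0" is not simply 0.5^I.
mu_prior    <- rnorm(M, mean = 0, sd = 0.10)
theta_prior <- mu_prior + matrix(rnorm(M * I, sd = 0.05), M, I)

## Stand-in posterior draws (as if estimated from Stroop-like data).
mu_post    <- rnorm(M, mean = 0.06, sd = 0.01)
theta_post <- mu_post + matrix(rnorm(M * I, sd = 0.02), M, I)

prior_mass <- mean(rowSums(theta_prior > 0) == I)  # Pr(all theta_i > 0)
post_mass  <- mean(rowSums(theta_post  > 0) == I)  # Pr(all theta_i > 0 | data)

## Bayes factor: order-restricted ("everybody") vs. unrestricted model.
bf_everybody <- post_mass / prior_mass
```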


2018
Author(s):
Giovanny Covarrubias-Pazaran

In the last decade the use of mixed models has become a pivotal part of the implementation of genome-assisted prediction in plant and animal breeding programs. Exploiting genetic correlations among traits through multivariate prediction has been proposed in recent years as a way to boost prediction accuracy and to better understand pleiotropy and other genetic and ecological phenomena. Multiple mixed-model solvers able to use relationship matrices or handle marker-based incidence matrices have been released in recent years, but multivariate versions are scarce. Such solvers have become quite popular in plant and animal breeding thanks to user-friendly platforms such as R; among them, one of the most recent and popular is the sommer package. In this short communication we discuss an update of the package that can fit multivariate mixed models with multiple random effects and different covariance structures, both at the level of the random effects and for the trait-to-trait covariance, along with other functionality for genetic analysis and field trial analysis, to enhance the genome-assisted prediction capabilities of researchers.
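
To make the update concrete, here is a hypothetical two-trait call in the style of sommer's formula interface. The data, the identity relationship matrix, and the commented-out accessor are our stand-ins, not from the paper, and argument names (vs, Gu, Gtc, unsm) may differ across package versions.

```r
library(sommer)

## Hypothetical two-trait example; data and matrices are simulated stand-ins.
set.seed(7)
n   <- 100
g   <- MASS::mvrnorm(n, mu = c(0, 0),
                     Sigma = matrix(c(1, 0.5, 0.5, 1), 2, 2))  # correlated genetic values
dat <- data.frame(id = factor(1:n),
                  Y1 = g[, 1] + rnorm(n),
                  Y2 = g[, 2] + rnorm(n))
A   <- diag(n)                       # identity stands in for a pedigree/genomic matrix
rownames(A) <- colnames(A) <- levels(dat$id)

## Multivariate mixed model: one random genetic effect with an unstructured
## trait-to-trait covariance at both the genetic and residual levels.
fit <- mmer(cbind(Y1, Y2) ~ 1,
            random = ~ vs(id, Gu = A, Gtc = unsm(2)),
            rcov   = ~ vs(units, Gtc = unsm(2)),
            data   = dat)

## Genetic correlation between the traits from the estimated covariance
## components (component names vary across sommer versions):
## cov2cor(fit$sigma[["u:id"]])
```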


Author(s):
Johnny van Doorn
Frederik Aust
Julia M. Haaf
Angelika M. Stefan
Eric-Jan Wagenmakers

Although Bayesian linear mixed-effects models are increasingly popular for the analysis of within-subject designs in psychology and other fields, there remains considerable ambiguity about the most appropriate Bayes factor hypothesis test to quantify the degree to which the data support the presence or absence of an experimental effect. Specifically, different choices for both the null model and the alternative model are possible, and each choice constitutes a different definition of an effect, resulting in a different test outcome. We outline the common approaches and focus on the impact of aggregation, the effect of measurement error, the choice of prior distribution, and the detection of interactions. For concreteness, three example scenarios showcase how seemingly innocuous choices can lead to dramatic differences in statistical evidence. We hope this work will facilitate a more explicit discussion about best practices in Bayes factor hypothesis testing in mixed models.
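
As one illustration of how such choices play out in practice, the sketch below (our construction, using the BayesFactor package with invented data and variable names) contrasts a test on trial-level data that models individual effects as random slopes with a test on subject-aggregated data; the two can yield quite different Bayes factors.

```r
library(BayesFactor)

## Invented trial-level data: 20 subjects x 2 conditions x 25 trials.
set.seed(42)
dat <- expand.grid(subject = factor(1:20), condition = factor(c("a", "b")),
                   trial = 1:25)
slope  <- rnorm(20, mean = 0.05, sd = 0.05)    # per-subject condition effects
dat$rt <- 0.7 + slope[dat$subject] * (dat$condition == "b") +
          rnorm(nrow(dat), sd = 0.2)

## Choice 1: trial-level test, individual effects modeled as random slopes.
bf_full <- lmBF(rt ~ condition + subject + condition:subject, data = dat,
                whichRandom = c("subject", "condition:subject")) /
           lmBF(rt ~ subject + condition:subject, data = dat,
                whichRandom = c("subject", "condition:subject"))

## Choice 2: aggregate to one mean per subject and condition, then test.
agg <- aggregate(rt ~ subject + condition, data = dat, FUN = mean)
bf_agg <- lmBF(rt ~ condition + subject, data = agg, whichRandom = "subject") /
          lmBF(rt ~ subject, data = agg, whichRandom = "subject")
```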


2019
Author(s):
Donald Ray Williams
Jeffrey Rouder
Philippe Rast

Mixed-effects models are becoming common in psychological science. Although they have many desirable features, there is still untapped potential that has not yet been fully realized. It is customary to view homogeneous variance as an assumption to satisfy. We argue for moving beyond that perspective, and for viewing the modeling of within-person variance ("noise") as an opportunity to gain a richer understanding of psychological processes. This can provide important insights into behavioral (in)stability. The technique for doing so is the mixed-effects location scale model. The formulation simultaneously estimates mixed-effects sub-models for both the mean (location) and the within-person variance (scale) of clustered data common in psychology. We develop a framework that goes beyond assessing the sub-models in isolation from one another, and allows for testing structural relations between the mean and the within-person variance with the Bayes factor. We first present a motivating example, which makes clear how the model can characterize mean-variance relations. We then apply the method to reaction times gathered from two cognitive inference tasks. We find that there are more individual differences in the within-person variance than in the mean structure, as well as a complex web of structural mean-variance relations in the random effects. This stands in contrast to the dominant view of within-person variance as measurement "error" or "noise." The results also point towards paradoxical within-person, as opposed to between-person, effects: in both tasks, several people had slower and less variable incongruent responses. This contradicts the typical pattern, wherein larger means are expected to be more variable. We conclude with future directions, spanning methodological and theoretical inquiries that can be addressed with the presented methodology.
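
The location scale formulation can be written down compactly. The sketch below uses brms's distributional syntax as a stand-in for the authors' own Bayes-factor framework; the data and variable names are invented for illustration.

```r
library(brms)

## Invented data: reaction times for 20 people in two congruency conditions.
set.seed(1)
dat <- data.frame(person     = factor(rep(1:20, each = 40)),
                  congruency = rep(c("congruent", "incongruent"), 400),
                  rt         = rlnorm(800, meanlog = -0.5, sdlog = 0.3))

## Mixed-effects location scale model: sub-models for the mean (location)
## and the within-person SD (scale). The shared "p" label lets random
## location and scale effects correlate, which is what permits structural
## mean-variance relations in the random effects.
fit <- brm(
  bf(rt    ~ congruency + (congruency | p | person),
     sigma ~ congruency + (congruency | p | person)),
  data = dat, family = gaussian()
)
```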


2021
Author(s):
Daniel W. Heck
Florence Bockting

Bayes factors allow researchers to test the effects of experimental manipulations in within-subjects designs using mixed-effects models. van Doorn et al. (2021) showed that such hypothesis tests can be performed by comparing different pairs of models which vary in the specification of the fixed- and random-effect structure for the within-subjects factor. To discuss the question of which of these model comparisons is most appropriate, van Doorn et al. used a case study to compare the corresponding Bayes factors. We argue that researchers should not only focus on pairwise comparisons of two nested models but rather use the Bayes factor for performing model selection among a larger set of mixed models that represent different auxiliary assumptions. In a standard one-factorial, repeated-measures design, the comparison should include four mixed-effects models: fixed-effects H0, fixed-effects H1, random-effects H0, and random-effects H1. Thereby, the Bayes factor enables testing both the average effect of condition and the heterogeneity of effect sizes across individuals. Bayesian model averaging provides an inclusion Bayes factor which quantifies the evidence for or against the presence of an effect of condition while taking model-selection uncertainty about the heterogeneity of individual effects into account. We present a simulation study showing that model selection among a larger set of mixed models performs well in recovering the true, data-generating model.
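
A sketch of that four-model comparison using the BayesFactor package follows; this is our construction with invented data, assuming equal prior model probabilities, not the authors' code.

```r
library(BayesFactor)

## Invented trial-level data with rt, condition, and subject.
set.seed(1)
dat <- expand.grid(subject = factor(1:20), condition = factor(c("a", "b")),
                   trial = 1:25)
dat$rt <- 0.7 + 0.05 * (dat$condition == "b") + rnorm(nrow(dat), sd = 0.2)

## The four mixed models of the comparison. All Bayes factors are computed
## against the same intercept-only baseline, so they can be combined.
bf_f0 <- lmBF(rt ~ subject, data = dat, whichRandom = "subject")
bf_f1 <- lmBF(rt ~ condition + subject, data = dat, whichRandom = "subject")
bf_r0 <- lmBF(rt ~ subject + condition:subject, data = dat,
              whichRandom = c("subject", "condition:subject"))
bf_r1 <- lmBF(rt ~ condition + subject + condition:subject, data = dat,
              whichRandom = c("subject", "condition:subject"))

## Inclusion Bayes factor for the effect of condition, averaging over the
## model-selection uncertainty about heterogeneity of individual effects:
b <- sapply(list(bf_f0, bf_f1, bf_r0, bf_r1), function(x) extractBF(x)$bf)
incl_bf <- (b[2] + b[4]) / (b[1] + b[3])   # models with vs. without condition
```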


2021
Author(s):
Johnny van Doorn
Frederik Aust
Julia M. Haaf
Angelika Stefan
Eric-Jan Wagenmakers

Although Bayesian mixed models are increasingly popular for data analysis in psychology and other fields, there remains considerable ambiguity about the most appropriate Bayes factor hypothesis test to quantify the degree to which the data support the presence or absence of an experimental effect. Specifically, different choices for both the null model and the alternative model are possible, and each choice constitutes a different definition of an effect, resulting in a different test outcome. We outline the common approaches and focus on the impact of aggregation, the effect of measurement error, the choice of prior distribution, and the detection of interactions. For concreteness, three example scenarios showcase how seemingly innocuous choices can lead to dramatic differences in statistical evidence. We hope this work will facilitate a more explicit discussion about best practices in Bayes factor hypothesis testing in mixed models.


2014
Author(s):
Sarahanne Field
Eric-Jan Wagenmakers
Ben Newell
Don Van Ravenzwaaij
