Beyond Overall Effects: A Bayesian Approach to Finding Constraints Across A Collection Of Studies In Meta-Analysis

2017 ◽  
Author(s):  
Jeffrey Rouder ◽  
Julia M. Haaf ◽  
Clintin Stober ◽  
Joseph Hilgard

Most meta-analyses focus on meta-analytic means, testing whether they are significantly different from zero and how they depend on covariates. This mean is difficult to defend as a construct because the underlying distribution of studies reflects many factors, such as how we choose to run experiments. We argue that the fundamental questions of meta-analysis should not be about the aggregated mean; instead, one should ask which relations are stable across all the studies. In a typical meta-analysis, there is a preferred or hypothesized direction (e.g., that violent video games increase, rather than decrease, aggressive behavior). We ask whether all studies in a meta-analysis have true effects in a common direction. If so, this is an example of a stable relation across all the studies. We propose four models: (i) all studies are truly null; (ii) all studies share a single true nonzero effect; (iii) studies differ, but all true effects are in the same direction; and (iv) some study effects are truly positive while others are truly negative. We develop Bayes factor model comparison for these models and apply it to four extant meta-analyses to show its usefulness.
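
The comparison at the heart of this approach can be illustrated with a small computation. The R sketch below computes the Bayes factor between the first two of the four models, all studies truly null versus all studies sharing a single true nonzero effect, by numerical integration of the marginal likelihood under a normal prior on the common effect. The effect sizes, standard errors, and prior scale are hypothetical placeholders, and the authors' full hierarchical treatment of models (iii) and (iv) is not reproduced here.

```r
# Minimal sketch: Bayes factor for "one common true effect" (model ii)
# against "all studies truly null" (model i). Values are hypothetical.
y <- c(0.20, 0.35, 0.10, 0.28)   # observed study effects (placeholder)
s <- c(0.10, 0.12, 0.09, 0.11)   # their standard errors (placeholder)
sigma0 <- 0.4                    # prior scale on the common effect (assumption)

loglik <- function(theta) sum(dnorm(y, mean = theta, sd = s, log = TRUE))

# Marginal likelihood of model (ii): integrate the likelihood over the prior
integrand <- function(theta)
  sapply(theta, function(t) exp(loglik(t)) * dnorm(t, 0, sigma0))
m1 <- integrate(integrand, lower = -3, upper = 3)$value

# Marginal likelihood of model (i): the common effect fixed at zero
m0 <- exp(loglik(0))

bf10 <- m1 / m0   # values above 1 favor a shared nonzero effect
```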

2019 ◽  
Vol 14 (4) ◽  
pp. 705-708 ◽  
Author(s):  
Maya B. Mathur ◽  
Tyler J. VanderWeele

Independent meta-analyses on the same topic can sometimes yield seemingly conflicting results. For example, prominent meta-analyses assessing the effects of violent video games on aggressive behavior have reached apparently different conclusions, provoking ongoing debate. We suggest that such conflicts are sometimes partly an artifact of reporting practices for meta-analyses that focus only on the pooled point estimate and its statistical significance. Considering statistics that focus on the distributions of effect sizes and that adequately characterize effect heterogeneity can sometimes indicate reasonable consensus between “warring” meta-analyses. Using novel analyses, we show that this seems to be the case in the video-game literature. Despite seemingly conflicting results for the statistical significance of the pooled estimates in different meta-analyses of video-game studies, all of the meta-analyses do in fact point to the conclusion that, in the vast majority of settings, violent video games do increase aggressive behavior but that these effects are almost always quite small.
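
The shift in emphasis from the pooled point estimate to the distribution of true effects is easy to express numerically. Assuming normally distributed true effects, the estimated share of settings in which the true effect exceeds zero, or exceeds a minimally meaningful threshold, follows directly from the pooled mean and the between-study standard deviation. The values in the sketch below are hypothetical; the authors' MetaUtility R package provides a fuller version of this calculation with confidence intervals.

```r
# Minimal sketch: proportion of true effects above a threshold, assuming
# true effects ~ Normal(mu, tau^2). Numbers are hypothetical placeholders.
mu  <- 0.15   # pooled mean effect
tau <- 0.10   # between-study standard deviation
1 - pnorm((0    - mu) / tau)   # estimated P(true effect > 0)
1 - pnorm((0.20 - mu) / tau)   # estimated P(true effect > 0.20)
```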


2020 ◽  
Author(s):  
Martin Schnuerch ◽  
Lena Nadarevic ◽  
Jeffrey Rouder

The repetition-induced truth effect refers to the phenomenon that people rate repeated statements as more likely true than novel statements. In this paper, we document qualitative individual differences in the effect. While the overwhelming majority of participants display the usual positive truth effect, a minority show the opposite pattern: they reliably discount the validity of repeated statements, which we refer to as a negative truth effect. We examine eight truth-effect data sets where individual-level data are curated. These sets are composed of 1,105 individuals performing 38,904 judgments. Through Bayes factor model comparison, we show that reliable negative truth effects occur in five of the eight data sets. The negative truth effect is informative because it seems unreasonable that the mechanisms mediating the positive truth effect are the same as those that lead to a discounting of repeated statements' validity. Moreover, the presence of qualitative differences motivates a different type of analysis of individual differences based on ordinal (i.e., which sign does the effect have?) rather than metric measures. To our knowledge, this paper reports the first such reliable qualitative differences in a cognitive task.
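
As a first, purely descriptive step toward the ordinal question, one can compute each participant's observed effect and tabulate its sign. The simulation below is a hypothetical stand-in for the curated data sets; it does not reproduce the hierarchical Bayes factor model used to separate reliable negative effects from sampling noise.

```r
# Minimal sketch: per-participant truth effects and their observed signs.
# Simulated ratings stand in for the curated individual-level data.
set.seed(1)
n_sub <- 20
dat <- data.frame(
  id       = rep(seq_len(n_sub), each = 40),
  repeated = rep(c(0, 1), times = 20 * n_sub)
)
true_effect <- rnorm(n_sub, 0.3, 0.25)   # a few participants negative by design
dat$rating  <- rnorm(nrow(dat), mean = true_effect[dat$id] * dat$repeated)

# Mean rating for repeated minus novel statements, per participant
m   <- with(dat, tapply(rating, list(id, repeated), mean))
eff <- m[, "1"] - m[, "0"]
table(sign(eff))   # observed signs, before any shrinkage or model comparison
```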


2019 ◽  
Vol 20 (2) ◽  
pp. 106-115 ◽  
Author(s):  
D. Hu ◽  
A. M. O'Connor ◽  
C. B. Winder ◽  
J. M. Sargeant ◽  
C. Wang

In this manuscript, we use realistic data to conduct a network meta-analysis using a Bayesian approach. The purpose of this manuscript is to explain, in lay terms, how to interpret the output of such an analysis. Many readers are familiar with the forest plot as an approach to presenting the results of a pairwise meta-analysis. However, when presented with the results of a network meta-analysis, which often does not include a forest plot, the output can be difficult to understand. Further, one of the advantages of Bayesian network meta-analysis lies in its novel outputs, such as treatment rankings and the associated probability distributions, which are more commonly presented for network meta-analyses. Our goal here is to provide a tutorial on how to read the output of a network meta-analysis rather than on how to conduct one or assess its risk of bias.
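
One of the novel outputs mentioned above, treatment rankings with their probability distributions, can be illustrated directly from posterior draws. In the sketch below the draws are simulated stand-ins rather than output from a fitted network meta-analysis model; the treatment labels and effect scales are hypothetical.

```r
# Minimal sketch: turning posterior draws of treatment effects into
# ranking probabilities. Draws are simulated, not from a fitted model.
set.seed(2)
draws <- cbind(
  A = rnorm(4000, 0.00, 0.05),
  B = rnorm(4000, 0.10, 0.06),
  C = rnorm(4000, 0.08, 0.07)
)
ranks <- t(apply(draws, 1, function(x) rank(-x)))   # rank 1 = largest effect
# Probability that treatment B is ranked 1st, 2nd, or 3rd
prop.table(table(factor(ranks[, "B"], levels = 1:3)))
```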


2017 ◽  
Author(s):  
Julia M. Haaf ◽  
Jeffrey Rouder

Model comparison in Bayesian mixed models is becoming popular in psychological science. Here we develop a set of nested models that account for order restrictions across individuals in psychological tasks. An order-restricted model addresses a "does everybody" question, as in "Does everybody show the usual Stroop effect?" or "Does everybody respond more quickly to intense noises than to subtle ones?" The crux of the modeling is the instantiation of tens or hundreds of order restrictions simultaneously, one for each participant. To our knowledge, the problem is intractable in frequentist contexts but relatively straightforward in Bayesian ones. We develop a Bayes factor model-comparison strategy using Zellner and colleagues' default g-priors appropriate for assessing whether effects obey equality and order restrictions. We apply the methodology to seven data sets from Stroop, Simon, and Eriksen interference tasks. Not too surprisingly, we find that everybody Stroops; that is, for all people, congruent colors are truly named more quickly than incongruent ones. But, perhaps surprisingly, we find these order constraints are violated for some people in the Simon task; that is, for these people, spatially incongruent responses truly occur more quickly than congruent ones! Implications of the modeling and conjectures about the task-related differences are discussed. This paper was written in R Markdown with code for data analysis integrated into the text. The Markdown script is open and freely available at https://github.com/PerceptionAndCognitionLab/ctx-indiff. The data are also open and freely available at https://github.com/PerceptionCognitionLab/data0/tree/master/contexteffects.
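
The logic of the order-restricted comparison can be conveyed with the encompassing-prior identity: under the unconstrained model, the Bayes factor in favor of the "everybody" model is the posterior probability that every individual effect is positive, divided by the prior probability of that same event. The sketch below uses simulated stand-ins for MCMC output with hypothetical hyperparameters; it is not the authors' g-prior development.

```r
# Minimal sketch of the encompassing-prior computation. Rows are (stand-in)
# MCMC iterations, columns are participants; the hyperparameter values are
# hypothetical and the draws are not from a real fitted model.
set.seed(3)
n_sub <- 30; n_draws <- 10000

draw_thetas <- function(mu, sd) {
  # One row per iteration: individual effects drawn around that
  # iteration's group mean mu[m] with spread sd[m]
  matrix(rnorm(n_draws * n_sub, mean = rep(mu, n_sub), sd = rep(sd, n_sub)),
         n_draws, n_sub)
}

# Stand-in posterior draws of the individual effects
post_theta  <- draw_thetas(rnorm(n_draws, 0.06, 0.01),
                           abs(rnorm(n_draws, 0.015, 0.003)))
# Matching draws from the unconstrained hierarchical prior
prior_theta <- draw_thetas(rnorm(n_draws, 0, 0.10),
                           abs(rnorm(n_draws, 0, 0.05)))

post_all_pos  <- mean(rowSums(post_theta  > 0) == n_sub)
prior_all_pos <- mean(rowSums(prior_theta > 0) == n_sub)
post_all_pos / prior_all_pos   # BF: "everybody positive" vs. unconstrained
```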


2021 ◽  
Vol 12 ◽  
Author(s):  
Rune Strømme ◽  
Karine Holthe Børstad ◽  
Andrea Eftang Rø ◽  
Eilin Kristine Erevik ◽  
Dominic Sagoe ◽  
...  

Objectives: The aim of the present meta-analysis was to synthesize results on the association between problem gambling (PG) and the dimensions of the five-factor model of personality, and to identify potential moderators (gambling diagnosis: yes/no; comorbidity: yes/no; trait assessment: four or fewer items vs. five items or more) of these associations in meta-regressions.
Methods: Searches were conducted in six databases: Medline, Web of Science, PsychInfo, Google Scholar, OpenGrey, and Cochrane Library (conducted on February 22, 2021). Included studies (1) reported a relationship between PG and at least one of the personality traits in the five-factor model, (2) contained information on zero-order correlations or sufficient data for such calculations, and (3) were original articles published in any European language. Case studies, qualitative studies, and reviews were excluded. All articles were independently screened by two authors. Final agreement was reached through discussion or by consulting a third author. Risk of bias of the included studies was assessed with the Newcastle-Ottawa Scale. Data were synthesized using a random-effects model.
Results: In total, 28 studies comprising 20,587 participants were included. The correlations between PG and the traits were as follows: neuroticism 0.273 (95% CI = 0.182, 0.358), conscientiousness −0.296 (95% CI = −0.400, −0.185), agreeableness −0.163 (95% CI = −0.223, −0.101), openness −0.219 (95% CI = −0.308, −0.127), and extroversion −0.083 (95% CI = −0.120, −0.046). For all meta-analyses, the between-study heterogeneity was significant. Presence of a gambling diagnosis was the only moderator that significantly explained between-study variance, showing a more negative correlation with extroversion when participants had a gambling diagnosis than when they did not.
Discussion: The results indicated some publication bias. Correcting for this with a trim-and-fill procedure showed, however, that the findings were consistent. Clinicians and researchers should be aware of the associations between personality traits and PG. Previous studies have, for example, shown neuroticism to be related to treatment relapse, low scores on conscientiousness to predict treatment drop-out, and agreeableness to reduce the risk of treatment drop-out.
Systematic Review Registration: PROSPERO (CRD42021237225).
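
A compressed version of the pipeline described above, from raw correlations to a bias-adjusted pooled estimate, might look as follows with the metafor package. The choice of package is an assumption (the abstract does not name the software), and the correlations and sample sizes are hypothetical placeholders.

```r
# Minimal sketch: random-effects meta-analysis of correlations with a
# trim-and-fill check. Data values are hypothetical placeholders.
library(metafor)
ri <- c(0.31, 0.22, 0.35, 0.18, 0.27)   # PG-neuroticism correlations
ni <- c(250, 410, 180, 520, 300)        # sample sizes

dat <- escalc(measure = "ZCOR", ri = ri, ni = ni)   # Fisher z and variances
res <- rma(yi, vi, data = dat, method = "REML")     # random-effects model
predict(res, transf = transf.ztor)   # pooled r with CI, back-transformed
trimfill(res)                        # publication-bias-adjusted refit
```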


2019 ◽  
Vol 27 (3) ◽  
pp. 453 ◽  
Author(s):  
Rong SHAO ◽  
Zhaojun TENG ◽  
Yanling LIU

2021 ◽  
Author(s):  
Maximilian Linde ◽  
Don van Ravenzwaaij

Nested data structures, in which conditions include multiple trials, are often analyzed using repeated-measures analysis of variance or mixed-effects models. Typically, researchers are interested in determining whether there is an effect of the experimental manipulation. Unfortunately, the null and alternative models in such analyses can be specified in several defensible ways, and a discussion of which specification is to be preferred, and when, is sorely lacking. van Doorn et al. (2021) performed three types of Bayes factor model comparisons on a simulated data set in order to examine which model comparison is most suitable for quantifying evidence for or against the presence of an effect of the experimental manipulation. Here we extend their results by simulating multiple data sets for various scenarios and by using different prior specifications. We demonstrate how the three Bayes factor model comparison types behave under changes in different parameters, and we make concrete recommendations on which model comparison is most appropriate for different scenarios.
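
One of the comparison types under discussion, a Bayes factor for the manipulation effect that retains the trial-level nesting, can be sketched with the BayesFactor package. This is a plausible tool for such designs rather than a reproduction of the paper's own simulations, and the data below are simulated placeholders.

```r
# Minimal sketch: Bayes factor for a condition effect in nested data,
# with participant as a random effect. Simulated placeholder data.
library(BayesFactor)
set.seed(4)
n_sub <- 25; n_trial <- 30
d <- expand.grid(trial = seq_len(n_trial),
                 cond  = factor(c("a", "b")),
                 id    = factor(seq_len(n_sub)))
d$y <- rnorm(nrow(d), mean = 0.2 * (d$cond == "b") + rnorm(n_sub)[d$id])

full <- lmBF(y ~ cond + id, data = d, whichRandom = "id")
null <- lmBF(y ~ id,        data = d, whichRandom = "id")
full / null   # evidence for including the condition effect
```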

