Supplemental Material for A Parsimonious Weight Function for Modeling Publication Bias

2017, Vol 22 (1), pp. 28-41
Author(s): Martyna Citkowicz, Jack L. Vevea

1997, Vol 22 (2), pp. 141-154
Author(s): Richard J. Cleary, George Casella

There is a widespread concern that published results in most disciplines are highly biased in favor of statistically significant outcomes. We propose a model that explicitly accounts for publication bias using a weight function describing the probability of publication for a particular study in terms of a selection parameter. A Bayesian analysis of this model, with flat priors on both the parameter of interest and the selection parameter, is carried out using Gibbs sampling to compute the posterior distributions of interest. The model is studied in detail for the case of a single observed result and then extended to provide a method for interpreting meta-analyses. We also consider models in which the probability of publication may depend on other characteristics of the study, in particular its size. Finally, we apply our model to a published meta-analysis that examined the effect of coaching on SAT scores.
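To make the selection-model idea concrete, here is a minimal sketch of a Bayesian analysis with flat priors, using a grid approximation rather than the authors' Gibbs sampler and assuming a simple step weight function in which significant results are always published and nonsignificant results are published with probability gamma. All data values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def step_weight(y, se, gamma, z_crit=1.96):
    """Publication probability: 1 if the result is significant, else gamma."""
    return np.where(np.abs(y / se) > z_crit, 1.0, gamma)

def log_lik(y, se, theta, gamma, z_crit=1.96):
    """Log density of a *published* estimate under the selection model:
    w(y) N(y; theta, se^2) divided by the expected weight E[w(Y)]."""
    num = np.log(step_weight(y, se, gamma, z_crit)) + norm.logpdf(y, theta, se)
    p_nonsig = norm.cdf(z_crit * se, theta, se) - norm.cdf(-z_crit * se, theta, se)
    return num - np.log((1 - p_nonsig) + gamma * p_nonsig)

# Grid posterior with flat priors on theta (effect) and gamma (selection)
y = np.array([0.42, 0.35, 0.51])   # invented published estimates
se = np.array([0.20, 0.15, 0.25])  # their standard errors

thetas = np.linspace(-1.0, 1.0, 201)
gammas = np.linspace(0.01, 1.0, 100)
lp = np.array([[log_lik(y, se, th, g).sum() for g in gammas] for th in thetas])
post = np.exp(lp - lp.max())
post /= post.sum()
print("Posterior mean of theta:", thetas @ post.sum(axis=1))
```

Ignoring selection (setting gamma = 1) recovers an ordinary normal likelihood, so the sketch also shows how the correction shrinks the estimate when gamma is small.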


2019, Vol 227 (4), pp. 261-279
Author(s): Frank Renkewitz, Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
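The sketch below illustrates the kind of simulation involved: it generates one biased literature and applies a single detection tool, Egger's regression test (one of several such methods). The selection rule and all parameter values are assumptions for illustration, not the authors' design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_literature(k, mu, tau, p_pub_nonsig, rng):
    """Draw k published studies under one-sided publication bias: significant
    results always appear; nonsignificant ones only with probability
    p_pub_nonsig. Parameter values and selection rule are illustrative."""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(20, 200)      # per-group sample size
        theta_i = rng.normal(mu, tau)  # study-level true effect (heterogeneity tau)
        se = np.sqrt(2.0 / n)          # rough SE of a standardized mean difference
        d = rng.normal(theta_i, se)    # observed effect size
        if d / se > 1.96 or rng.random() < p_pub_nonsig:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_p(effects, ses):
    """Egger's regression test: regress d/se on 1/se; a nonzero intercept
    signals small-study effects consistent with publication bias."""
    res = stats.linregress(1.0 / ses, effects / ses)
    t = res.intercept / res.intercept_stderr
    return 2.0 * stats.t.sf(abs(t), effects.size - 2)

d, s = simulate_literature(k=30, mu=0.2, tau=0.2, p_pub_nonsig=0.2, rng=rng)
print(f"Egger intercept test p value: {egger_p(d, s):.4f}")
```

Repeating this over many simulated literatures, with and without bias, yields the power and Type I error estimates the study reports.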


2002
Author(s): Shyhnan Liou, Chung-Ping Cheng

2019
Author(s): Amanda Kvarven, Eirik Strømland, Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2017
Author(s): Nicholas Alvaro Coles, Jeff T. Larsen, Heather Lench

The facial feedback hypothesis suggests that an individual's experience of emotion is influenced by feedback from their facial movements. To evaluate the cumulative evidence for this hypothesis, we conducted a meta-analysis on 286 effect sizes derived from 138 studies that manipulated facial feedback and collected emotion self-reports. Using random-effects meta-regression with robust variance estimates, we found that the overall effect of facial feedback was significant, but small. Results also indicated that feedback effects are stronger in some circumstances than others. We examined 12 potential moderators, and three were associated with differences in effect sizes:

1. Type of emotional outcome: Facial feedback influenced emotional experience (e.g., reported amusement) and, to a greater degree, affective judgments of a stimulus (e.g., the objective funniness of a cartoon). Three publication bias detection methods did not reveal evidence of publication bias in studies examining the effects of facial feedback on emotional experience, but all three methods revealed evidence of publication bias in studies examining affective judgments.
2. Presence of emotional stimuli: Facial feedback effects on emotional experience were larger in the absence of emotionally evocative stimuli (e.g., cartoons).
3. Type of stimuli: When participants were presented with emotionally evocative stimuli, facial feedback effects were larger in the presence of some types of stimuli (e.g., emotional sentences) than others (e.g., pictures).

The available evidence supports the facial feedback hypothesis' central claim that facial feedback influences emotional experience, although these effects tend to be small and heterogeneous.
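For readers unfamiliar with the pooling step, here is a compact random-effects meta-analysis sketch using the DerSimonian-Laird estimator, a simplified stand-in for the robust-variance meta-regression the authors used; the effect sizes and variances are invented.

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2
    estimator; d = effect sizes, v = their sampling variances."""
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    d_re = np.sum(w_star * d) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    return d_re, se_re, tau2

# Invented effect sizes (Hedges' g) and variances, not the study's data
d = np.array([0.35, 0.10, 0.22, 0.05, 0.40])
v = np.array([0.02, 0.03, 0.015, 0.05, 0.04])
est, se, tau2 = dersimonian_laird(d, v)
print(f"Pooled g = {est:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```

A moderator analysis like the authors' would extend this to a meta-regression of d on study characteristics, with weights 1/(v + tau^2).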


2020, Vol 132 (2), pp. 662-670
Author(s): Minh-Son To, Alistair Jukes

OBJECTIVE: The objective of this study was to evaluate the trends in reporting of p values in the neurosurgical literature from 1990 through 2017.

METHODS: All abstracts from the Journal of Neurology, Neurosurgery, and Psychiatry (JNNP), the Journal of Neurosurgery (JNS) collection (including Journal of Neurosurgery: Spine and Journal of Neurosurgery: Pediatrics), Neurosurgery (NS), and the Journal of Neurotrauma (JNT) available on PubMed from 1990 through 2017 were retrieved. Automated text mining was performed to extract p values from relevant abstracts. Extracted p values were analyzed for temporal trends and characteristics.

RESULTS: The search yielded 47,889 relevant abstracts. A total of 34,324 p values were detected in 11,171 abstracts. Since 1990 there has been a steady, proportionate increase in the number of abstracts containing p values. There were average absolute year-on-year increases of 1.2% (95% CI 1.1%–1.3%; p < 0.001), 0.93% (95% CI 0.75%–1.1%; p < 0.001), 0.70% (95% CI 0.57%–0.83%; p < 0.001), and 0.35% (95% CI 0.095%–0.60%; p = 0.0091) of abstracts reporting p values in JNNP, JNS, NS, and JNT, respectively. There were also average year-on-year increases of 0.045 (95% CI 0.031–0.059; p < 0.001), 0.052 (95% CI 0.037–0.066; p < 0.001), 0.042 (95% CI 0.030–0.054; p < 0.001), and 0.041 (95% CI 0.026–0.056; p < 0.001) p values reported per abstract for these respective journals. The distribution of p values showed a positive skew and strong clustering of values at rounded decimals (i.e., 0.01, 0.02, etc.). Between 83.2% and 89.8% of all reported p values were at or below the "significance" threshold of 0.05 (i.e., p ≤ 0.05).

CONCLUSIONS: Trends in reporting of p values and the distribution of p values suggest publication bias remains in the neurosurgical literature.
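A minimal sketch of the text-mining step, assuming a simple regular expression over abstract text; the pattern and the summary statistics are illustrative assumptions, not the authors' actual extraction pipeline.

```python
import re

# Match forms like "p < 0.001", "P = .049", "p <= 0.05", including the
# Unicode less-than-or-equal sign.
P_VALUE = re.compile(r"[pP]\s*(<=|>=|=|<|>|\u2264|\u2265)\s*(\d?\.\d+)")

def extract_p_values(text):
    """Return (comparator, value) pairs for every p value found in text."""
    return [(m.group(1), float(m.group(2))) for m in P_VALUE.finditer(text)]

abstract = ("The difference was significant (p < 0.001); a secondary "
            "endpoint showed p = 0.049 and another p = 0.20.")
pairs = extract_p_values(abstract)
print(pairs)  # [('<', 0.001), ('=', 0.049), ('=', 0.2)]

# The kind of summary the study reports: the share of extracted p values
# at or below the 0.05 threshold.
vals = [v for _, v in pairs]
print("Share <= 0.05:", sum(v <= 0.05 for v in vals) / len(vals))
```

Run over tens of thousands of abstracts, counts like these give the per-journal trends and the clustering at rounded values described above.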

