A defense of meta-analytic thinking in behavioral science

2018 ◽  
Author(s):  
Caleb Z. Marshall

We discuss how questionable research practices in behavioral science (such as p-hacking) affect meta-analyses. Moreover, we argue that abandoning meta-analytic techniques is an overreaction to concerns about Type I errors.
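
A toy simulation (not from the paper; the function name and all parameter values are illustrative) makes the concern concrete: if each null study measures several outcomes and reports only the most favorable one, the naive meta-analytic average is inflated even though every true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def study_effect(n=30, k=5, hacked=True):
    """One study with true effect d = 0 that measures k outcomes.
    A p-hacker reports the most favorable outcome; an honest,
    pre-registered researcher reports the first one."""
    # crude sampling error of an estimated standardized effect
    outcomes = rng.normal(0.0, 1.0 / np.sqrt(n), size=k)
    return outcomes.max() if hacked else outcomes[0]

honest_ds = np.array([study_effect(hacked=False) for _ in range(5000)])
hacked_ds = np.array([study_effect(hacked=True) for _ in range(5000)])
print(f"meta-analytic mean, honest studies:   {honest_ds.mean():+.3f}")  # close to 0
print(f"meta-analytic mean, p-hacked studies: {hacked_ds.mean():+.3f}")  # clearly > 0
```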

2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
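
To make the setup concrete, here is a minimal sketch in the spirit of the simulations described above (it is not the authors' code; simulate_literature, egger_test, and all parameter values are illustrative assumptions): a literature is generated under publication bias and then probed with an Egger-type regression test, one standard tool for detecting small-study effects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_literature(k=30, true_d=0.2, tau=0.1, p_publish_null=0.3):
    """Generate k *published* studies: significant results always
    appear; non-significant ones only with prob. p_publish_null."""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(20, 200)             # per-group sample size
        theta = true_d + rng.normal(0, tau)   # between-study heterogeneity
        se = np.sqrt(2.0 / n)                 # rough SE of Cohen's d
        d = rng.normal(theta, se)
        if abs(d / se) > 1.96 or rng.random() < p_publish_null:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_test(d, se):
    """Egger-type regression of the standardized effect on precision;
    an intercept far from zero signals funnel-plot asymmetry."""
    res = stats.linregress(1.0 / se, d / se)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), len(d) - 2)
    return res.intercept, p

d, se = simulate_literature()
intercept, p = egger_test(d, se)
print(f"Egger intercept = {intercept:.2f}, p = {p:.4f}")
```

Varying true_d, tau, and p_publish_null across many simulated literatures, and recording how often the test flags bias, yields exactly the kind of power and Type I error comparison the abstract describes.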


2021 ◽  
Author(s):  
Sean Grant ◽  
Evan Mayo-Wilson ◽  
Lauren Supplee

The credibility of Prevention Services Clearinghouse designations of programs and services as “promising,” “supported,” and “well supported” is threatened by the prevalence of questionable research practices (e.g., selective non-reporting of results) in the bodies of evidence that the Clearinghouse reviews. Internationally accepted standards for reporting and interpreting the results of systematic reviews of evidence—including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Grading of Recommendations Assessment, Development and Evaluation (GRADE)—recommend that reviews take steps to mitigate bias associated with these questionable research practices. Moreover, Department of Health and Human Services (HHS) policies require that contractors and grantees engage in transparent, open, and reproducible research. We propose that the Clearinghouse adopt standards to mitigate the effects of these questionable research practices, which would be consistent with international guidelines and with complementary HHS policies and procedures.


2021 ◽  
Author(s):  
Matt Tincani ◽  
Jason C Travers

Questionable research practices (QRPs) are a variety of research choices that introduce bias into the body of scientific literature. Researchers have documented the widespread presence of QRPs across disciplines and promoted practices aimed at preventing them. More recently, single-case experimental design (SCED) researchers have explored how QRPs could manifest in SCED research. In this chapter, we describe QRPs in participant selection, independent variable selection, procedural fidelity documentation, graphical depictions of behavior, and effect size measures and statistics. We also discuss QRPs in relation to the file drawer effect, publication bias, and meta-analyses of SCED research. We provide recommendations for researchers and the research community to promote practices for preventing QRPs in SCED.


2009 ◽  
Vol 62 (8) ◽  
pp. 825-830.e10 ◽  
Author(s):  
George F. Borm ◽  
A. Rogier T. Donders

2021 ◽  
Author(s):  
Sean Grant ◽  
Evan Mayo-Wilson ◽  
Lauren Supplee

Questionable research practices threaten the credibility of HomVEE designations for “evidence-based early childhood home visiting service delivery models”. Internationally accepted standards for reporting and interpreting the results of reviews—including the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Grading of Recommendations Assessment, Development and Evaluation (GRADE)—recommend that reviews take steps to mitigate bias associated with selective non-reporting of results and other questionable research practices. Moreover, Department of Health and Human Services (HHS) policies require that contractors and grantees engage in transparent and open science practices. We propose that HomVEE adopt standards to mitigate the effects of questionable research practices, which would be consistent with international guidelines and with complementary HHS policies and procedures.


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 962 ◽  
Author(s):  
Judith ter Schure ◽  
Peter Grünwald

Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between the analysis time, at which a series of studies is up for meta-analysis, and results within the series. Dependencies introduce bias (Accumulation Bias) and invalidate the sampling distribution assumed for p-value tests, thus inflating Type I errors. But dependencies are also inevitable, since for science to accumulate efficiently, new research needs to be informed by past results. Here, we investigate various ways in which time influences error control in meta-analysis testing. We introduce an Accumulation Bias Framework that allows us to model a wide variety of practically occurring dependencies, including study series accumulation, meta-analysis timing, and approaches to multiple testing in living systematic reviews. The strength of this framework is that it shows how all dependencies affect p-value-based tests in a similar manner. This leads to two main conclusions. First, Accumulation Bias is inevitable, and even if it can be approximated and accounted for, no valid p-value tests can be constructed. Second, tests based on likelihood ratios withstand Accumulation Bias: they provide bounds on error probabilities that remain valid despite the bias. We leave the reader with a choice between two proposals to consider time in error control: either treat individual (primary) studies and meta-analyses as two separate worlds, each with its own timing, or integrate individual studies in the meta-analysis world. Taking up likelihood ratios in either approach allows for valid tests that relate well to the accumulating nature of scientific knowledge. Likelihood ratios can be interpreted as betting profits, earned in previous studies and invested in new ones, while the meta-analyst is allowed to cash out at any time and advise against future studies.
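
The error bound the authors exploit can be demonstrated in a few lines. The sketch below is illustrative, not the paper's code: per-study likelihood ratios are multiplied while a data-dependent rule decides whether the series continues, which is exactly the kind of dependence that breaks p-value tests. Because the running product is a nonnegative martingale under H0, Ville's inequality gives P0(product ever >= 1/alpha) <= alpha, so rejecting when it crosses 1/alpha controls the Type I error at any analysis time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ALPHA = 0.05

def study_lr(x, d1=0.3):
    """Likelihood ratio of H1: delta = d1 versus H0: delta = 0
    for one study's observations x ~ N(delta, 1)."""
    return np.exp(stats.norm.logpdf(x, d1, 1).sum()
                  - stats.norm.logpdf(x, 0, 1).sum())

def accumulating_series(true_d=0.0, max_studies=20, n=30):
    """A new study is run only while results look promising,
    a data-dependent rule of the Accumulation Bias kind. The
    running LR product still obeys the martingale error bound."""
    lr = 1.0
    for _ in range(max_studies):
        x = rng.normal(true_d, 1, size=n)
        lr *= study_lr(x)
        if lr >= 1 / ALPHA:   # the meta-analyst may "cash out" here
            return True       # reject H0
        if x.mean() < 0:      # the series dies after a disappointing study
            return False
    return False

rejections = sum(accumulating_series() for _ in range(2000))
print(f"empirical Type I error: {rejections / 2000:.3f} (bound: {ALPHA})")
```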


Methodology ◽  
2015 ◽  
Vol 11 (3) ◽  
pp. 110-115 ◽  
Author(s):  
Rand R. Wilcox ◽  
Jinxia Ma

Abstract. The paper compares methods that allow both within-group and between-group heteroscedasticity when performing all pairwise comparisons of the least squares lines associated with J independent groups. The methods are based on a simple extension of results derived by Johansen (1980) and Welch (1938) in conjunction with the HC3 and HC4 estimators. The probability of one or more Type I errors is controlled using the improvement on the Bonferroni method derived by Hochberg (1988). Results are illustrated using data from the Well Elderly 2 study, which motivated this paper.
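
As background, the sketch below shows the two building blocks in simplified form. It is not the authors' procedure: a large-sample normal approximation stands in for the Johansen/Welch-style statistic, and ols_hc3 and hochberg are illustrative helpers. Each group's line gets HC3 sandwich standard errors, and Hochberg's step-up adjustment is applied to all pairwise slope comparisons.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def ols_hc3(x, y):
    """OLS fit of y = b0 + b1*x with the HC3 heteroscedasticity-
    consistent (sandwich) covariance estimator."""
    X = np.column_stack([np.ones_like(x), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)  # leverage values h_ii
    omega = (resid / (1 - h)) ** 2               # HC3 rescaled squared residuals
    cov = XtX_inv @ (X.T * omega) @ X @ XtX_inv
    return beta, cov

def hochberg(pvals, alpha=0.05):
    """Hochberg's step-up procedure: scan p-values from largest to
    smallest; once p <= alpha/(rank+1), reject it and all smaller."""
    order = np.argsort(pvals)[::-1]
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (rank + 1):
            reject[order[rank:]] = True
            break
    return reject

# three groups with heteroscedastic errors and unequal true slopes
rng = np.random.default_rng(4)
fits = []
for b1 in (0.5, 0.5, 1.0):
    x = rng.uniform(0, 10, 40)
    y = 1 + b1 * x + rng.normal(0, 0.5 + 0.2 * x)  # variance grows with x
    fits.append(ols_hc3(x, y))

pvals = []
for (bi, ci), (bj, cj) in combinations(fits, 2):
    z = (bi[1] - bj[1]) / np.sqrt(ci[1, 1] + cj[1, 1])
    pvals.append(2 * stats.norm.sf(abs(z)))  # normal approximation
print(np.round(pvals, 4), hochberg(np.array(pvals)))
```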

