THE FILE DRAWER EFFECT

PEDIATRICS ◽  
1996 ◽  
Vol 97 (1) ◽  
p. 70

Statistics can tell us when published numbers truly point to the probability of a negative result, even though we, in our hopes, have mistakenly conferred a positive interpretation. But statistics cannot rescue us . . . when we publish positive results and consign our probable negativities to nonscrutiny in our file drawers.

1987 ◽  
Vol 11 (2) ◽  
pp. 233-242 ◽  
Author(s):  
Barbara Sommer

The file drawer problem refers to a publication bias for positive results, which leads to studies that support the null hypothesis being relegated to the file drawer. The assumption is that researchers are unable to publish studies with nonsignificant findings. A survey of investigators studying the menstrual cycle showed this assumption to be unwarranted. Much of the research did not lend itself to a hypothesis-testing model. A more important contribution to the likelihood of publication was research productivity: researchers whose first study was published were more likely to have continued their work.


Author(s):  
Gary Smith ◽  
Jay Cordes

Researchers seeking fame and funding may be tempted to go on fishing expeditions (p-hacking) or to torture the data to find novel, provocative results that will be picked up by the popular media. Provocative findings are provocative because they are novel and unexpected, and they are often novel and unexpected because they are simply not true. The publication effect (or the file drawer effect) keeps the failures hidden and has helped create a replication crisis. Research that gets reported in the popular media is often wrong, which fools people and undermines the credibility of scientific research.
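The "fishing expedition" described above can be simulated directly. The following is a minimal sketch with purely hypothetical numbers: when no outcome has a real effect, testing 20 independent outcomes and reporting only the smallest p-value produces a "significant" finding far more often than the nominal 5% level suggests.

```python
# Hypothetical simulation of a "fishing expedition": no outcome has a
# real effect, yet reporting only the best of 20 tests is "significant"
# far more often than the nominal alpha = 0.05 suggests.
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def fished_study(n_outcomes=20, n=30, rng=random):
    """One null study: test n_outcomes unrelated outcomes on n subjects,
    keep only the smallest p-value (the 'provocative' finding)."""
    best_p = 1.0
    for _ in range(n_outcomes):
        # The null is true, so the z-statistic is standard normal.
        z = sum(rng.gauss(0, 1) for _ in range(n)) / math.sqrt(n)
        best_p = min(best_p, two_sided_p(z))
    return best_p

rng = random.Random(1)
rate = sum(fished_study(rng=rng) < 0.05 for _ in range(2000)) / 2000
# Theoretical rate: 1 - 0.95**20 ≈ 0.64, not 0.05.
print(f"False-positive rate after fishing: {rate:.2f}")
```

With 20 independent tests the chance of at least one p < 0.05 under the null is 1 − 0.95²⁰ ≈ 0.64, which is what the simulation recovers.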


2020 ◽  
Author(s):  
František Bartoš ◽  
Ulrich Schimmack

This article introduces z-curve.2.0 as a method that estimates the expected replication rate (ERR) and the expected discovery rate (EDR) based on the test statistics of studies selected for significance. Z-curve.2.0 extends the work by Brunner and Schimmack (2019) in several ways. First, we show that a new estimation method using expectation-maximization outperforms the kernel-density approach of z-curve.1.0. Second, we examine the coverage of bootstrapped confidence intervals to provide information about the uncertainty in z-curve estimates. Third, we extend z-curve to estimate, solely on the basis of significant results, the total number of studies that were conducted, including studies with non-significant results that may not have been reported. This allows us to estimate the EDR: the percentage of all conducted studies that produced a significant result. The EDR can be used to assess the size of the file drawer, to estimate the maximum number of false-positive results, and, because exact replications are impossible, it may provide a better estimate of the success rate in actual replication studies than the ERR.
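The arithmetic linking an EDR estimate to the file drawer can be sketched directly. This is not the z-curve estimation algorithm itself (which fits an expectation-maximization model to the significant test statistics); it only shows what a given EDR estimate implies, using illustrative numbers. The false-discovery bound is the one attributed to Sorić (1989).

```python
# Sketch of what an EDR estimate implies (illustrative numbers, not the
# z-curve fitting procedure itself). EDR = K / N, where K is the number
# of significant results and N the number of studies actually conducted.

def file_drawer_size(k_significant, edr):
    """Unpublished studies implied by K significant results and an EDR."""
    n_conducted = k_significant / edr
    return n_conducted - k_significant

def max_false_discovery_rate(edr, alpha=0.05):
    """Soric's bound: maximum share of significant results that are
    false positives, given the discovery rate and the alpha level."""
    return (1 / edr - 1) * alpha / (1 - alpha)

# 100 published significant results with an estimated EDR of 25%:
print(file_drawer_size(100, edr=0.25))            # 300.0 studies in drawers
print(round(max_false_discovery_rate(0.25), 3))   # 0.158
```

The lower the estimated EDR, the larger the implied file drawer and the higher the share of published discoveries that could be false positives.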


2021 ◽  
Author(s):  
Bastien Lemaire ◽  
Raphaël Lenoble ◽  
Mirko Zanon ◽  
Thibaud Jacquel ◽  
...  

Most of the scientific outputs produced by researchers are inaccessible because they are not published in scientific journals: they remain in the researchers' drawers, forming what we call the Dark Science. This is a long-standing issue in research, creating a misleading view of the scientific facts. Contrary to the current literature overfed with positive findings, the Dark Science is nurtured with null findings, replications, flawed experimental designs and other research outputs. Publishers, researchers, institutions and funders all play an important role in the accumulation of those unpublished works, but it is only once we understand the reasons and the benefits of publishing all the scientific findings that we can collectively act to solve the Dark Science problem. In this article, we discuss the causes and consequences of the Dark Science expansion, arguing that science and scientists would benefit from bringing all their findings into the light of publication.


2021 ◽  
Author(s):  
Matt Tincani ◽  
Jason C Travers

Questionable research practices (QRPs) are a variety of research choices that introduce bias into the body of scientific literature. Researchers have documented the widespread presence of QRPs across disciplines and promoted practices aimed at preventing them. More recently, Single-Case Experimental Design (SCED) researchers have explored how QRPs could manifest in SCED research. In this chapter, we describe QRPs in participant selection, independent variable selection, procedural fidelity documentation, graphical depictions of behavior, and effect size measures and statistics. We also discuss QRPs in relation to the file drawer effect, publication bias, and meta-analyses of SCED research. We provide recommendations for researchers and the research community to promote practices for preventing QRPs in SCED.


2017 ◽  
Author(s):  
Freya Acar ◽  
Ruth Seurinck ◽  
Simon B. Eickhoff ◽  
Beatrijs Moerkerke

The importance of integrating research findings is incontrovertible, and coordinate-based meta-analyses have become a popular approach to combining results of fMRI studies when only peaks of activation are reported. Similar to classical meta-analyses, coordinate-based meta-analyses may be subject to different forms of publication bias, which impacts results and possibly invalidates findings. We develop a tool that assesses the robustness to potential publication bias at the cluster level. We investigate the possible influence of the file-drawer effect, where studies that do not report certain results fail to get published, by determining the number of noise studies that can be added to an existing fMRI meta-analysis before the results are no longer statistically significant. In this paper we illustrate this tool through an example and test the effect of several parameters through extensive simulations. We provide an algorithm, for which code is freely available, that generates noise studies and enables users to determine the robustness of meta-analytic results.
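The robustness check the abstract describes is, at its core, a fail-safe-N computation. A minimal sketch of that idea using Stouffer's method on study-level z-scores follows; the z-values are hypothetical, and the authors' tool works on fMRI activation clusters rather than on toy numbers like these.

```python
# Rosenthal-style fail-safe N sketch (hypothetical z-values): how many
# zero-effect "noise" studies can be added to a meta-analysis before the
# combined result is no longer significant?
import math

def stouffer_z(zs):
    """Combine study-level z-scores with Stouffer's method."""
    return sum(zs) / math.sqrt(len(zs))

def fail_safe_n(zs, z_crit=1.645):
    """Number of zero-effect studies that can be added before the
    combined one-sided z drops below z_crit."""
    s = sum(zs)
    if s <= 0:
        return 0
    # Significant while s / sqrt(k + N) >= z_crit.
    return max(0, math.floor((s / z_crit) ** 2 - len(zs)))

z_observed = [2.1, 1.8, 2.5, 1.9, 2.2]   # hypothetical study z-scores
n_noise = fail_safe_n(z_observed)
print(n_noise)                            # 35
# Check: still significant with 35 noise studies, not with 36.
print(stouffer_z(z_observed + [0.0] * 35) >= 1.645)   # True
print(stouffer_z(z_observed + [0.0] * 36) >= 1.645)   # False
```

A large fail-safe N suggests the pooled result would survive a sizable file drawer; a small one means a handful of unpublished null studies could overturn it.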


2017 ◽  
Vol 61 (6) ◽  
pp. 516 ◽  
Author(s):  
Priscilla Joys Nagarajan ◽  
Bharath Kumar Garla ◽  
M Taranath ◽  
I Nagarajan

2021 ◽  
Author(s):  
Eli Talbert

Using COVID Pulse Data collected by the U.S. Census Bureau, I establish that there are weak to no correlational relationships between a household reporting a child attending virtual or in-person school and various outcomes, including expectations of loss of employment, child hunger, and anxiety. Due to the coarseness of the data, it is unclear if this is an artifact of the data or a reflection of the lack of underlying causal relationships between mode of schooling and the outcomes. Therefore, these results should not be used to make policy decisions or draw substantive conclusions about the decision to reopen schools and are reported only to avoid the file-drawer effect.


2017 ◽  
Author(s):  
Jeffrey Robert Spies

There currently exists a gap between scientific values and scientific practices. This gap is strongly tied to the current incentive structure that rewards publication over accurate science. Other problems associated with this gap include reconstructing exploratory narratives as confirmatory, the file drawer effect, an overall lack of archiving and sharing, and a singular contribution model - publication - through which credit is obtained. A solution to these problems is increased disclosure, transparency, and openness. The Open Science Framework (http://openscienceframework.org) is an infrastructure for managing the scientific workflow across the entirety of the scientific process, thus allowing the facilitation and incentivization of openness in a comprehensive manner. The current version of the OSF includes tools for documentation, collaboration, sharing, archiving, registration, and exploration.

