The file drawer effect – a long-lasting issue in science

2021 ◽  
Author(s):  
Bastien Lemaire ◽  
Raphaël Lenoble ◽  
Mirko Zanon ◽  
Thibaud Jacquel ◽  
...  

Most of the scientific outputs produced by researchers are inaccessible because they are never published in scientific journals: they remain in the researchers' drawers, forming what we call the Dark Science. This is a long-standing issue in research, creating a misleading view of scientific facts. In contrast to the current literature, overfed with positive findings, the Dark Science is nurtured by null findings, replications, flawed experimental designs, and other research outputs. Publishers, researchers, institutions and funders all play an important role in the accumulation of those unpublished works, but it is only once we understand the reasons for, and the benefits of, publishing all scientific findings that we can collectively act to solve the Dark Science problem. In this article, we discuss the causes and consequences of the Dark Science expansion, arguing that science and scientists would benefit from bringing all their findings into the light of publication.

Author(s):  
Gary Smith ◽  
Jay Cordes

Researchers seeking fame and funding may be tempted to go on fishing expeditions (p-hacking) or to torture the data to find novel, provocative results that will be picked up by the popular media. Provocative findings are provocative because they are novel and unexpected, and they are often novel and unexpected because they are simply not true. The publication effect (or the file drawer effect) keeps the failures hidden and has created a replication crisis. Research that gets reported in the popular media is often wrong, which fools people and undermines the credibility of scientific research.
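The fishing expedition described above can be made concrete with a small simulation (an illustrative sketch, not from the book; the helper names and the choice of 20 outcomes per study are assumptions). Measuring many pure-noise outcomes and reporting only the smallest p-value inflates the false-positive rate far beyond the nominal 5%:

```python
import math
import random

def p_value(sample):
    """Two-sided p-value for H0: true mean is 0, via a normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def fishing_expedition(rng, n_outcomes=20, n_subjects=30):
    """Measure n_outcomes independent pure-noise variables in one 'study'
    and keep only the smallest p-value, as a p-hacker would."""
    return min(
        p_value([rng.gauss(0.0, 1.0) for _ in range(n_subjects)])
        for _ in range(n_outcomes)
    )

rng = random.Random(1)
trials = 2000
false_positives = sum(fishing_expedition(rng) < 0.05 for _ in range(trials))
# Every effect is null, yet most "studies" find something significant:
# the family-wise rate is near 1 - 0.95**20, i.e. far above 0.05.
print(false_positives / trials)
```

Only the one significant minimum-p result would reach a journal; the other nineteen nonsignificant tests land in the file drawer.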


2021 ◽  
Author(s):  
Matt Tincani ◽  
Jason C Travers

Questionable research practices (QRPs) are a variety of research choices that introduce bias into the body of scientific literature. Researchers have documented widespread presence of QRPs across disciplines and promoted practices aimed at preventing them. More recently, Single-Case Experimental Design (SCED) researchers have explored how QRPs could manifest in SCED research. In the chapter, we describe QRPs in participant selection, independent variable selection, procedural fidelity documentation, graphical depictions of behavior, and effect size measures and statistics. We also discuss QRPs in relation to the file drawer effect, publication bias, and meta-analyses of SCED research. We provide recommendations for researchers and the research community to promote practices for preventing QRPs in SCED.


PEDIATRICS ◽  
1996 ◽  
Vol 97 (1) ◽  
pp. 70-70

Statistics can tell us when published numbers truly point to the probability of a negative result, even though we, in our hopes, have mistakenly conferred a positive interpretation. But statistics cannot rescue us . . . when we publish positive results and consign our probable negativities to nonscrutiny in our file drawers.


2015 ◽  
Vol 112 (36) ◽  
pp. 11335-11340 ◽  
Author(s):  
Anthony Bowen ◽  
Arturo Casadevall

Society makes substantial investments in biomedical research, searching for ways to better human health. The product of this research is principally information published in scientific journals. Continued investment in science relies on society’s confidence in the accuracy, honesty, and utility of research results. A recent focus on productivity has dominated the competitive evaluation of scientists, creating incentives to maximize publication numbers, citation counts, and publications in high-impact journals. Some studies have also suggested a decreasing quality in the published literature. The efficiency of society’s investments in biomedical research, in terms of improved health outcomes, has not been studied. We show that biomedical research outcomes over the last five decades, as estimated by both life expectancy and New Molecular Entities approved by the Food and Drug Administration, have remained relatively constant despite rising resource inputs and scientific knowledge. Research investments by the National Institutes of Health over this time correlate with publication and author numbers but not with the numerical development of novel therapeutics. We consider several possibilities for the growing input-outcome disparity including the prior elimination of easier research questions, increasing specialization, overreliance on reductionism, a disproportionate emphasis on scientific outputs, and other negative pressures on the scientific enterprise. Monitoring the efficiency of research investments in producing positive societal outcomes may be a useful mechanism for weighing the efficacy of reforms to the scientific enterprise. Understanding the causes of the increasing input-outcome disparity in biomedical research may improve society’s confidence in science and provide support for growing future research investments.


2017 ◽  
Author(s):  
Freya Acar ◽  
Ruth Seurinck ◽  
Simon B. Eickhoff ◽  
Beatrijs Moerkerke

The importance of integrating research findings is incontrovertible, and coordinate-based meta-analyses have become a popular approach to combining results of fMRI studies when only peaks of activation are reported. Like classical meta-analyses, coordinate-based meta-analyses may be subject to different forms of publication bias, which impacts results and possibly invalidates findings. We develop a tool that assesses robustness to potential publication bias at the cluster level. We investigate the possible influence of the file-drawer effect, where studies that do not report certain results fail to get published, by determining the number of noise studies that can be added to an existing fMRI meta-analysis before the results are no longer statistically significant. In this paper we illustrate this tool through an example and test the effect of several parameters through extensive simulations. We provide an algorithm, with freely available code, that generates noise studies and enables users to determine the robustness of meta-analytic results.
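The noise-study idea is in the spirit of Rosenthal's classical fail-safe N, which asks how many unpublished null results it would take to wash out a significant combined effect. A minimal sketch (this is the classical formula for Stouffer-combined z-scores, not the authors' cluster-level fMRI algorithm; the example z-scores are invented):

```python
import math

Z_ALPHA = 1.6449  # one-tailed critical z for alpha = 0.05

def fail_safe_n(z_scores):
    """Rosenthal's fail-safe N: the number of hidden null (z = 0) studies
    that would pull the Stouffer-combined z below the significance cutoff.

    Stouffer: z_combined = sum(z) / sqrt(k + n); solving for the largest n
    with z_combined still >= Z_ALPHA gives n = (sum(z) / Z_ALPHA)**2 - k.
    """
    k = len(z_scores)
    z_sum = sum(z_scores)
    if z_sum <= 0:
        return 0  # already nonsignificant in the favored direction
    return max(0, math.floor((z_sum / Z_ALPHA) ** 2 - k))

# Five hypothetical published studies, each individually significant:
print(fail_safe_n([2.1, 1.8, 2.5, 1.7, 2.0]))  # → 32 tolerable null studies
```

A fail-safe N that is small relative to the number of published studies k suggests a plausible file drawer could overturn the result; Rosenthal's rule of thumb treats N greater than 5k + 10 as robust.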


2017 ◽  
Vol 61 (6) ◽  
pp. 516 ◽  
Author(s):  
Priscilla Joys Nagarajan ◽  
Bharath Kumar Garla ◽  
M Taranath ◽  
I Nagarajan

2021 ◽  
Author(s):  
Eli Talbert

Using COVID Pulse Data collected by the U.S. Census Bureau, I establish that there are weak to no correlational relationships between a household reporting a child attending virtual or in-person school and various outcomes, including expectations of loss of employment, child hunger, and anxiety. Due to the coarseness of the data, it is unclear whether this is an artifact of the data or a reflection of the lack of underlying causal relationships between mode of schooling and the outcomes. Therefore, these results should not be used to make policy decisions or draw substantive conclusions about the decision to reopen schools and are reported only to avoid the file-drawer effect.


Author(s):  
Lawrence A. Zeidman

Neuropsychiatric patients were also subjected to various experiments during the euthanasia programs. The unethical experiments ranged from photography under restraint, to inducing hypoxic or hypercarbic states in epileptics to monitor electroencephalographic changes, to exposing children to hypoxia to induce seizures, to inoculating spinal fluid with monkey fluids to transmit a potential multiple sclerosis-causing viral agent, to tuberculosis experiments. These "expendable" patients were loaned from local mental institutions, and some died during the experiments. Results were published in scientific journals and continued to be cited as legitimate experiments long after the war. The individual scientists did not face any sanctions for the unethical experimental designs. Justifications given were that the patients were demented and could not experience pain, or that the experimenter was in the vacuum chamber with the children. But the experiments violated the 1931 Weimar guidelines and the Hippocratic Oath, even if formal ethical restrictions were not standardized at the time.


1987 ◽  
Vol 11 (2) ◽  
pp. 233-242 ◽  
Author(s):  
Barbara Sommer

The file drawer problem refers to a publication bias for positive results, leading to studies that support the null hypothesis being relegated to the file drawer. The assumption is that researchers are unable to publish studies with nonsignificant findings. A survey of investigators studying the menstrual cycle showed this assumption to be unwarranted. Much of the research did not lend itself to a hypothesis-testing model. A more important contribution to the likelihood of publication was research productivity, and researchers whose first study was published were more likely to have continued their work.

