A Historical Review of Publication Bias

2020 ◽  
Author(s):  
Arielle Marks-Anglin ◽  
Yong Chen

Publication bias is a well-known threat to the validity of meta-analyses and, more broadly, to the reproducibility of scientific findings. When policies and recommendations are predicated on an incomplete evidence base, the goals of evidence-based decision-making are undermined. Great strides have been made in the last fifty years to understand and address this problem, including calls for mandatory trial registration and the development of statistical methods to detect and correct for publication bias. We offer a historical account of seminal contributions by the evidence synthesis community, with an emphasis on the parallel development of graph-based and selection model approaches. We also draw attention to current innovations and opportunities for future methodological work.
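A minimal sketch may help make the "graph-based" family of methods concrete. The best-known example is Egger's regression test for funnel-plot asymmetry: standardized effects are regressed on precision, and an intercept far from zero suggests asymmetry consistent with publication bias. The function name and the ordinary-least-squares shortcut below are illustrative choices, not code from any of the reviewed papers:

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger-style asymmetry check: regress standardized effects on
    precision; an intercept far from zero suggests funnel asymmetry."""
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    y = effects / ses                  # standardized effects (z-scores)
    x = 1.0 / ses                      # precision of each study
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[0])              # intercept = asymmetry estimate
```

For a perfectly symmetric evidence base (identical effects at every precision) the intercept is zero; a full test would also attach a standard error and p-value to the intercept.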

2021 ◽  
Author(s):  
Trina Rytwinski ◽  
Steven J Cooke ◽  
Jessica J Taylor ◽  
Dominique Roche ◽  
Paul A Smith ◽  
...  

Evidence-based decision-making often depends on some form of synthesis of previous findings. There is growing recognition that systematic reviews, which incorporate a critical appraisal of evidence, are the gold-standard synthesis method in applied environmental science. Yet, on a daily basis, environmental practitioners and decision-makers are forced to act even if the evidence base to guide them is insufficient. For example, it is not uncommon for a systematic review to conclude that an evidence base is large but of low reliability. There are also instances where the evidence base is sparse (e.g., one or two empirical studies on a particular taxon or intervention), and no additional evidence arises from a systematic review. In some cases, the systematic review highlights considerable variability in the outcomes of primary studies, which in turn generates ambiguity (e.g., outcomes may be context specific). When the environmental evidence base is ambiguous, biased, or lacking new information, practitioners must still make management decisions. Waiting for new, higher-validity research to be conducted is often unrealistic, as many decisions are urgent. Here, we identify the circumstances that can lead to ambiguity, bias, and the absence of additional evidence arising from systematic reviews, and we provide practical guidance for resolving or handling these scenarios when they are encountered. Our perspective highlights that, with evidence synthesis, there may be a need to balance the spirit of evidence-based decision-making with the practical reality that management and conservation decisions and actions are often time-sensitive.


Author(s):  
Derick W. Brinkerhoff ◽  
Sarah Frazer ◽  
Lisa McGregor-Mirghani

Adaptive programming and management principles focused on learning, experimentation, and evidence-based decision-making are gaining traction with donor agencies and implementing partners in international development. Adaptation calls for using learning to inform adjustments during project implementation. This requires information-gathering methods that promote reflection, learning, and adaptation, beyond reporting on pre-specified data. A focus on adaptation changes traditional thinking about the program cycle. It both erases the boundaries between design, implementation, and evaluation and reframes thinking to consider the complexity of development problems and nonlinear change pathways. Supportive management structures and processes are crucial for fostering adaptive management. Implementers and donors are experimenting with how procurement, contracting, work planning, and reporting can be modified to foster adaptive programming. Well-designed monitoring, evaluation, and learning systems can go beyond meeting accountability and reporting requirements to produce data and learning for evidence-based decision-making and adaptive management. It is important to continue experimenting and learning in order to integrate adaptive programming and management into the operational policies and practices of donor agencies, country partners, and implementers. We need to devote ongoing effort to building the evidence base for the contributions of adaptive management to achieving international development results.


BMC Biology ◽  
2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Gorm E. Shackelford ◽  
Philip A. Martin ◽  
Amelia S. C. Hood ◽  
Alec P. Christie ◽  
Elena Kulinskaya ◽  
...  

Background: Meta-analysis is often used to make generalisations across all available evidence at the global scale. But how can these global generalisations be used for evidence-based decision making at the local scale, if the global evidence is not perceived to be relevant to local decisions? We show how an interactive method of meta-analysis—dynamic meta-analysis—can be used to assess the local relevance of global evidence.

Results: We developed Metadataset (www.metadataset.com) as a proof-of-concept for dynamic meta-analysis. Using Metadataset, we show how evidence can be filtered and weighted, and results can be recalculated, using dynamic methods of subgroup analysis, meta-regression, and recalibration. With an example from agroecology, we show how dynamic meta-analysis could lead to different conclusions for different subsets of the global evidence. Dynamic meta-analysis could also lead to a rebalancing of power and responsibility in evidence synthesis, since evidence users would be able to make decisions that are typically made by systematic reviewers—decisions about which studies to include (e.g. critical appraisal) and how to handle missing or poorly reported data (e.g. sensitivity analysis).

Conclusions: In this study, we show how dynamic meta-analysis can meet an important challenge in evidence-based decision making—the challenge of using global evidence for local decisions. We suggest that dynamic meta-analysis can be used for subject-wide evidence synthesis in several scientific disciplines, including agroecology and conservation biology. Future studies should develop standardised classification systems for the metadata that are used to filter and weight the evidence. Future studies should also develop standardised software packages, so that researchers can efficiently publish dynamic versions of their meta-analyses and keep them up-to-date as living systematic reviews. Metadataset is a proof-of-concept for this type of software, and it is open source. Future studies should improve the user experience, scale the software architecture, agree on standards for data and metadata storage and processing, and develop protocols for responsible evidence use.
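The filter-then-recalculate step at the heart of dynamic meta-analysis can be sketched in a few lines. The study dictionaries, metadata keys, and numbers below are hypothetical illustrations (they are not Metadataset's actual schema or data); the pooling uses a standard fixed-effect, inverse-variance weighted estimate:

```python
import numpy as np

def pooled_effect(studies):
    """Fixed-effect (inverse-variance) pooled estimate."""
    effects = np.array([s["effect"] for s in studies])
    weights = 1.0 / np.array([s["se"] for s in studies]) ** 2
    return float((weights * effects).sum() / weights.sum())

def dynamic_pool(studies, **filters):
    """Filter studies by metadata, then recompute the pooled effect:
    the filter-and-recalculate step of dynamic meta-analysis."""
    subset = [s for s in studies
              if all(s.get(key) == value for key, value in filters.items())]
    return pooled_effect(subset)

# Hypothetical evidence base: intervention effects tagged by crop.
studies = [
    {"effect": 0.2, "se": 0.1, "crop": "wheat"},
    {"effect": 0.6, "se": 0.2, "crop": "wheat"},
    {"effect": -0.1, "se": 0.1, "crop": "maize"},
]

global_estimate = pooled_effect(studies)              # all evidence
local_estimate = dynamic_pool(studies, crop="wheat")  # locally relevant subset
```

In this toy example the locally relevant subset gives a noticeably larger pooled effect than the global evidence base, which is exactly the kind of divergence dynamic meta-analysis lets evidence users discover for themselves.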


Author(s):  
Andrew W. Brown ◽  
Tapan S. Mehta ◽  
David B. Allison

When we rely on science to inform decisions about matters such as the environment, teaching strategies, economics, government, and medicine, evidence-based decision-making can only be as reliable as the totality of the science itself. We must avoid distortions of the scientific literature such as publication bias: an expected systematic difference between estimates of associations, causal effects, or other quantities of interest and the actual values of those quantities, caused by differences between the research that is published and the totality of research conducted. Publication bias occurs when the probability that a study's result is published is influenced by the result obtained. It appears to be common and can produce misleading conclusions about interventions, make effects appear greater than they are, lead to irreproducible research, and ultimately undermine the credibility of science in general. Methods to detect publication bias and steps to reduce it are discussed.
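The inflation mechanism described above is easy to demonstrate with a minimal simulation, assuming an arbitrary small true effect and a selection rule in which only statistically significant positive results are published (the specific numbers and the seed are illustrative choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many studies estimating the same small true effect.
true_effect, se, n_studies = 0.1, 0.25, 2000
observed = rng.normal(true_effect, se, n_studies)

# Selective publication: only significant positive results (z > 1.96)
# reach the literature.
published = observed[observed / se > 1.96]

unbiased_mean = observed.mean()    # close to the true effect
published_mean = published.mean()  # inflated by the selection step
```

The mean of the published subset is several times larger than the true effect, while the mean over all studies stays close to it, illustrating how result-dependent publication alone makes effects appear greater than they are.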


2009 ◽  
Vol 6 (1) ◽  
pp. 78-81 ◽  
Author(s):  
Gavin Stewart

This overview examines research synthesis in applied ecology and conservation. Vote counting and pooling unweighted averages are widespread despite the superiority of syntheses based on weighted combination of effects. Such analyses allow exploration of methodological uncertainty in addition to consistency of effects across species, space and time, but exploring heterogeneity remains controversial. Meta-analyses are required to generalize in ecology, and to inform evidence-based decision-making, but the more sophisticated statistical techniques and registers of research used in other disciplines must be employed in ecology to fully realize their benefits.
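The contrast the overview draws between vote counting and weighted combination of effects can be shown in a short sketch (the effect sizes and standard errors are invented for illustration):

```python
import numpy as np

def vote_count(effects):
    """Vote counting: the fraction of studies reporting a positive
    effect, ignoring study precision entirely."""
    return float((np.asarray(effects, float) > 0).mean())

def inverse_variance_pool(effects, ses):
    """Weighted synthesis: fixed-effect pooled estimate with each
    study weighted by the inverse of its variance."""
    effects = np.asarray(effects, float)
    weights = 1.0 / np.asarray(ses, float) ** 2
    return float((weights * effects).sum() / weights.sum())

# Two small, imprecise negative studies and one large, precise positive one.
effects = [-0.1, -0.1, 0.4]
ses = [0.5, 0.5, 0.05]

votes = vote_count(effects)                    # 1/3: "mostly negative"
pooled = inverse_variance_pool(effects, ses)   # positive, driven by precision
```

Vote counting concludes the intervention mostly fails, while the precision-weighted estimate is clearly positive, which is why weighted combination is the preferred basis for generalisation.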


2013 ◽  
Vol 16 (7) ◽  
pp. A386-A387
Author(s):  
K. Tolley ◽  
A. Miners ◽  
J. Brazier ◽  
L.M. Pericleous ◽  
T. Sharma ◽  
...  

Author(s):  
John Hunsley ◽  
Eric J. Mash

Evidence-based assessment relies on research and theory to inform the selection of constructs to be assessed for a specific assessment purpose, the methods and measures to be used in the assessment, and the manner in which the assessment process unfolds. An evidence-based approach to clinical assessment necessitates the recognition that, even when evidence-based instruments are used, the assessment process is a decision-making task in which hypotheses must be iteratively formulated and tested. In this chapter, we review (a) the progress that has been made in developing an evidence-based approach to clinical assessment in the past decade and (b) the many challenges that lie ahead if clinical assessment is to be truly evidence-based.


2014 ◽  
Vol 67 (5) ◽  
pp. 790-794 ◽  
Author(s):  
Iván Arribas ◽  
Irene Comeig ◽  
Amparo Urbano ◽  
José Vila
