Acting in the Face of Evidentiary Ambiguity, Bias, and Absence Arising from Systematic Reviews in Applied Environmental Science

2021 ◽  
Author(s):  
Trina Rytwinski ◽  
Steven J Cooke ◽  
Jessica J Taylor ◽  
Dominique Roche ◽  
Paul A Smith ◽  
...  

Evidence-based decision-making often depends on some form of synthesis of previous findings. There is growing recognition that systematic reviews, which incorporate a critical appraisal of evidence, are the gold-standard synthesis method in applied environmental science. Yet, on a daily basis, environmental practitioners and decision-makers are forced to act even when the evidence base to guide them is insufficient. For example, it is not uncommon for a systematic review to conclude that an evidence base is large but of low reliability. There are also instances where the evidence base is sparse (e.g., one or two empirical studies on a particular taxon or intervention) and no additional evidence arises from a systematic review. In some cases, the systematic review highlights considerable variability in the outcomes of primary studies, which in turn generates ambiguity (e.g., outcomes may be context specific). When the environmental evidence base is ambiguous, biased, or lacking new information, practitioners must still make management decisions. Waiting for new, higher-validity research to be conducted is often unrealistic, as many decisions are urgent. Here, we identify the circumstances that can lead to ambiguity, bias, and the absence of additional evidence arising from systematic reviews, and we provide practical guidance for resolving or handling these scenarios when they are encountered. Our perspective highlights that, with evidence synthesis, there may be a need to balance the spirit of evidence-based decision-making against the practical reality that management and conservation decisions and actions are often time sensitive.

2020 ◽  
Author(s):  
Arielle Marks-Anglin ◽  
Yong Chen

Publication bias is a well-known threat to the validity of meta-analyses and, more broadly, to the reproducibility of scientific findings. When policies and recommendations are predicated on an incomplete evidence base, the goals of evidence-based decision-making are undermined. Great strides have been made in the last fifty years to understand and address this problem, including calls for mandatory trial registration and the development of statistical methods to detect and correct for publication bias. We offer a historical account of seminal contributions by the evidence synthesis community, with an emphasis on the parallel development of graph-based and selection model approaches. We also draw attention to current innovations and opportunities for future methodological work.
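The graph-based approaches mentioned in this abstract descend from the funnel plot. As a minimal illustration (not the authors' own method), the sketch below implements one widely used member of that family, Egger's regression test for funnel-plot asymmetry; the simulated effect sizes and standard errors are invented for demonstration, not drawn from any real meta-analysis:

```python
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero indicates small-study effects that are
    consistent with (though not proof of) publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    result = stats.linregress(1.0 / std_errors, effects / std_errors)
    # The slope estimates the underlying pooled effect;
    # the intercept captures funnel-plot asymmetry.
    return result.intercept, result.intercept_stderr

# Illustrative data: small studies (large SE) report inflated effects,
# the classic asymmetric-funnel pattern associated with publication bias.
rng = np.random.default_rng(42)
ses = rng.uniform(0.05, 0.5, size=20)
effects = 0.1 + 2.0 * ses + rng.normal(0.0, 0.02, size=20)
intercept, intercept_se = egger_test(effects, ses)
```

In practice the statistic `intercept / intercept_se` would be compared against a t distribution with n − 2 degrees of freedom, and dedicated meta-analysis packages implement this test and the related selection models far more carefully.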


2010 ◽  
Vol 33 (1) ◽  
pp. 9-23 ◽  
Author(s):  
Janet Harris ◽  
Karen Kearley ◽  
Carl Heneghan ◽  
Emma Meats ◽  
Nia Roberts ◽  
...  

2022 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuelun Zhang ◽  
Siyu Liang ◽  
Yunying Feng ◽  
Qing Wang ◽  
Feng Sun ◽  
...  

Abstract Background Systematic reviews are an indispensable tool for optimal evidence collection and evaluation in evidence-based medicine. However, the explosive growth of the original literature makes critical appraisal and regular updating difficult to accomplish. Artificial intelligence (AI) algorithms have been applied to automate the literature screening procedure in medical systematic reviews. These studies used different algorithms and reported results with great variance. It is therefore imperative to systematically review and analyse the automatic methods developed for literature screening and their effectiveness as reported in current studies. Methods An electronic search for automatic methods for literature screening in systematic reviews will be conducted using the PubMed, Embase, ACM Digital Library, and IEEE Xplore Digital Library databases, as well as literature found through a supplementary search in Google Scholar. Two reviewers will independently conduct the primary screening of articles and data extraction; disagreements will be resolved by discussion with a methodologist. Data will be extracted from eligible studies, including the basic characteristics of each study, information on the training and validation sets, and the function and performance of the AI algorithms, and will be summarised in a table. The risk of bias and applicability of the eligible studies will be assessed independently by the two reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Quantitative analyses, if appropriate, will also be performed. Discussion Automating the systematic review process can greatly reduce the workload of evidence-based practice. Results from this systematic review will provide an essential summary of the current development of AI algorithms for automatic literature screening in medical evidence synthesis and help to inspire further studies in this field.
Systematic review registration PROSPERO CRD42020170815 (28 April 2020).
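As a rough illustration of what the simplest form of automated literature screening looks like, the sketch below trains a tiny word-level naive Bayes classifier on hypothetical reviewer-labelled titles and scores new titles by their log-odds of inclusion. The titles and labels are invented for demonstration; the tools surveyed by reviews like the one above use far richer features, larger corpora, and techniques such as active learning:

```python
from collections import Counter
import math

# Hypothetical toy data: (title, label) pairs a reviewer already screened,
# where label 1 = include, 0 = exclude.
labelled = [
    ("deep learning for citation screening", 1),
    ("automated abstract triage with machine learning", 1),
    ("neural text classification of trial reports", 1),
    ("surgical outcomes in knee replacement", 0),
    ("dietary sodium and blood pressure", 0),
    ("hospital staffing and patient falls", 0),
]

def train_nb(data):
    """Fit a word-level naive Bayes model with add-one smoothing."""
    word_counts = {0: Counter(), 1: Counter()}
    doc_counts = Counter()
    for title, label in data:
        doc_counts[label] += 1
        word_counts[label].update(title.split())
    vocab = set(word_counts[0]) | set(word_counts[1])
    return word_counts, doc_counts, vocab

def score(model, title):
    """Return the log-odds that a title should be screened in."""
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    log_odds = math.log(doc_counts[1] / total_docs) - math.log(doc_counts[0] / total_docs)
    for word in title.split():
        if word not in vocab:
            continue  # ignore words never seen in training
        p_include = (word_counts[1][word] + 1) / (sum(word_counts[1].values()) + len(vocab))
        p_exclude = (word_counts[0][word] + 1) / (sum(word_counts[0].values()) + len(vocab))
        log_odds += math.log(p_include / p_exclude)
    return log_odds

model = train_nb(labelled)
# Titles sharing vocabulary with included examples score higher,
# so a review team could prioritise high-scoring records for human screening.
```

A key design point this toy version shares with real screening tools is that the classifier ranks records rather than making final decisions, so the human reviewers remain the arbiters of inclusion.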


Author(s):  
Daniela Filipa Batista Cardoso ◽  
Diana Gabriela Simões Marques Santos ◽  
Joana Filipa Cunha Rodrigues ◽  
Nichole Bento ◽  
Rogério Manuel Clemente Rodrigues ◽  
...  

ABSTRACT Objective: To report the experience of the Portugal Centre For Evidence Based Practice (PCEBP), a JBI Centre of Excellence, in training health professionals, researchers, and professors through the Comprehensive Systematic Review Training Program, a course on evidence synthesis, specifically on systematic literature reviews. Method: An experience report on the PCEBP's implementation of the Comprehensive Systematic Review Training Program, which trains health professionals, researchers, and professors to develop systematic reviews according to the JBI approach. Results: By the end of 2020, 11 editions of the course had been delivered to 136 participants from different educational and health institutions in different countries. As a result of this training, participants published 13 systematic reviews in JBI Evidence Synthesis and 10 reviews in other journals. Conclusion: The reported results and the students’ satisfaction evaluations underscore the relevance of the course for training health professionals in evidence synthesis.


2020 ◽  
Author(s):  
Ingeborg Jäger-Dengler-Harles ◽  
Tamara Heck ◽  
Marc Rittberger ◽  
...  

Introduction. Systematic reviews are a method for synthesising research results to support evidence-based decision-making on a specific question. Information seeking processes and behaviour play a crucial role and may strongly influence the outcomes of a review. This paper proposes an approach to understanding the relevance assessments and decision-making of researchers who conduct systematic reviews. Method. A systematic review was conducted to build a database for text-based qualitative analyses of researchers’ decision-making in review processes. Analysis. The analysis focuses on the selection process for retrieved articles and introduces the method for investigating researchers’ relevance assessment processes. Results. There are different methods for conducting reviews in research, and the relevance assessment of documents within those processes is neither one-directional nor standardised. Research on the information behaviour of researchers involved in such reviews has not examined relevance assessment steps and their influence on a review’s outcomes. Conclusions. One reason for the variety and inconsistency of review types may be that information seeking and relevance assessment are far more complex than assumed, and researchers may be unable to articulate their concrete decisions. This paper proposes a research study to investigate researcher behaviour while synthesising research results for evidence-based decision-making.


Author(s):  
Derick W. Brinkerhoff ◽  
Sarah Frazer ◽  
Lisa McGregor-Mirghani

Adaptive programming and management principles focused on learning, experimentation, and evidence-based decision-making are gaining traction with donor agencies and implementing partners in international development. Adaptation calls for using learning to inform adjustments during project implementation. This requires information-gathering methods that promote reflection, learning, and adaptation, beyond reporting on pre-specified data. A focus on adaptation changes traditional thinking about the program cycle: it both erases the boundaries between design, implementation, and evaluation and reframes thinking to consider the complexity of development problems and nonlinear change pathways. Supportive management structures and processes are crucial for fostering adaptive management. Implementers and donors are experimenting with how procurement, contracting, work planning, and reporting can be modified to foster adaptive programming. Well-designed monitoring, evaluation, and learning systems can go beyond meeting accountability and reporting requirements to produce data and learning for evidence-based decision-making and adaptive management. It is important to continue experimenting and learning in order to integrate adaptive programming and management into the operational policies and practices of donor agencies, country partners, and implementers. Ongoing effort is needed to build the evidence base for the contributions of adaptive management to achieving international development results.


Author(s):  
Aminu Bello ◽  
Ben Vandermeer ◽  
Natasha Wiebe ◽  
Amit X. Garg ◽  
Marcello Tonelli
