Dynamic meta-analysis: a method of using global evidence for local decision making

BMC Biology, 2021, Vol. 19 (1)
Author(s): Gorm E. Shackelford, Philip A. Martin, Amelia S. C. Hood, Alec P. Christie, Elena Kulinskaya, ...

Abstract

Background: Meta-analysis is often used to make generalisations across all available evidence at the global scale. But how can these global generalisations be used for evidence-based decision making at the local scale, if the global evidence is not perceived to be relevant to local decisions? We show how an interactive method of meta-analysis, dynamic meta-analysis, can be used to assess the local relevance of global evidence.

Results: We developed Metadataset (www.metadataset.com) as a proof-of-concept for dynamic meta-analysis. Using Metadataset, we show how evidence can be filtered and weighted, and results can be recalculated, using dynamic methods of subgroup analysis, meta-regression, and recalibration. With an example from agroecology, we show how dynamic meta-analysis could lead to different conclusions for different subsets of the global evidence. Dynamic meta-analysis could also lead to a rebalancing of power and responsibility in evidence synthesis, since evidence users would be able to make decisions that are typically made by systematic reviewers: decisions about which studies to include (e.g. critical appraisal) and how to handle missing or poorly reported data (e.g. sensitivity analysis).

Conclusions: In this study, we show how dynamic meta-analysis can meet an important challenge in evidence-based decision making: the challenge of using global evidence for local decisions. We suggest that dynamic meta-analysis can be used for subject-wide evidence synthesis in several scientific disciplines, including agroecology and conservation biology. Future studies should develop standardised classification systems for the metadata that are used to filter and weight the evidence. Future studies should also develop standardised software packages, so that researchers can efficiently publish dynamic versions of their meta-analyses and keep them up to date as living systematic reviews. Metadataset is a proof-of-concept for this type of software, and it is open source. Future studies should improve the user experience, scale the software architecture, agree on standards for data and metadata storage and processing, and develop protocols for responsible evidence use.
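The filter-and-recalculate workflow described in this abstract can be sketched in a few lines. This is an illustrative sketch only: the study records and metadata fields (`crop`, `region`) are hypothetical, not drawn from Metadataset, and a simple fixed-effect inverse-variance pool stands in for the fuller subgroup-analysis, meta-regression, and recalibration machinery the paper describes.

```python
import math

# Hypothetical study records (effect size, sampling variance, metadata);
# these are NOT data from Metadataset, just an illustration.
studies = [
    {"effect": 0.42, "var": 0.04, "crop": "wheat", "region": "Europe"},
    {"effect": 0.10, "var": 0.02, "crop": "maize", "region": "Africa"},
    {"effect": 0.55, "var": 0.09, "crop": "wheat", "region": "Asia"},
    {"effect": -0.05, "var": 0.03, "crop": "maize", "region": "Europe"},
]

def pooled_effect(subset):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    weights = [1.0 / s["var"] for s in subset]
    total = sum(weights)
    est = sum(w * s["effect"] for w, s in zip(weights, subset)) / total
    se = math.sqrt(1.0 / total)
    return est, (est - 1.96 * se, est + 1.96 * se)

def dynamic_meta_analysis(studies, **filters):
    """Recalculate the pooled effect for the subset matching the metadata filters."""
    subset = [s for s in studies
              if all(s.get(key) == value for key, value in filters.items())]
    return pooled_effect(subset) if subset else None

global_est, global_ci = pooled_effect(studies)                      # all global evidence
wheat_est, wheat_ci = dynamic_meta_analysis(studies, crop="wheat")  # locally relevant subset
```

In this toy dataset, filtering to `crop="wheat"` yields a pooled estimate markedly larger than the global pool, which is the kind of subset-dependent conclusion the abstract describes.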


Author(s): Aminu Bello, Ben Vandermeer, Natasha Wiebe, Amit X. Garg, Marcello Tonelli

2021
Author(s): Trina Rytwinski, Steven J Cooke, Jessica J Taylor, Dominique Roche, Paul A Smith, ...

Evidence-based decision-making often depends on some form of synthesis of previous findings. There is growing recognition that systematic reviews, which incorporate a critical appraisal of evidence, are the gold-standard synthesis method in applied environmental science. Yet, on a daily basis, environmental practitioners and decision-makers are forced to act even if the evidence base to guide them is insufficient. For example, it is not uncommon for a systematic review to conclude that an evidence base is large but of low reliability. There are also instances where the evidence base is sparse (e.g., one or two empirical studies on a particular taxon or intervention) and a systematic review yields no additional evidence. In some cases, the systematic review highlights considerable variability in the outcomes of primary studies, which in turn generates ambiguity (e.g., outcomes may be context-specific). When the environmental evidence base is ambiguous, biased, or lacking new information, practitioners must still make management decisions. Waiting for new, higher-validity research to be conducted is often unrealistic, as many decisions are urgent. Here, we identify the circumstances that can lead to ambiguity, bias, and the absence of additional evidence arising from systematic reviews, and we provide practical guidance for resolving or handling these scenarios when they are encountered. Our perspective highlights that, with evidence synthesis, there may be a need to balance the spirit of evidence-based decision-making with the practical reality that management and conservation decisions and actions are often time-sensitive.


2020
Author(s): Arielle Marks-Anglin, Yong Chen

Publication bias is a well-known threat to the validity of meta-analyses and, more broadly, to the reproducibility of scientific findings. When policies and recommendations are predicated on an incomplete evidence base, the goals of evidence-based decision-making are undermined. Great strides have been made in the last fifty years to understand and address this problem, including calls for mandatory trial registration and the development of statistical methods to detect and correct for publication bias. We offer a historical account of seminal contributions by the evidence synthesis community, with an emphasis on the parallel development of graph-based and selection-model approaches. We also draw attention to current innovations and opportunities for future methodological work.
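One of the graph-based approaches this abstract alludes to, funnel-plot asymmetry testing, can be sketched with Egger's regression: regress each study's standardized effect on its precision, and read a clearly non-zero intercept as a warning sign of small-study effects. The effect sizes and standard errors below are hypothetical, constructed so that small studies report inflated effects; a real analysis would also assess the intercept's statistical significance.

```python
# Hypothetical effects and standard errors: small studies (large SE) report
# inflated effects, mimicking selective publication of positive results.
effects = [0.80, 0.60, 0.50, 0.35, 0.30, 0.25]
ses = [0.40, 0.30, 0.25, 0.15, 0.12, 0.10]

def egger_test(effects, ses):
    """Egger's regression: standardized effect vs. precision.

    With no small-study effects the intercept is near zero; a clearly
    non-zero intercept indicates funnel-plot asymmetry, one signature
    of publication bias.
    """
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precisions
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

intercept, slope = egger_test(effects, ses)  # intercept well above zero here
```

Selection-model approaches, the other family mentioned in the abstract, instead model the probability that a study is published as a function of its results, which requires distributional assumptions beyond this sketch.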

