A practical conservation tool to combine diverse types of evidence for transparent evidence-based decision-making

2021 ◽  
Author(s):  
Alec Philip Christie ◽  
Harriet Downey ◽  
Winifred F Frick ◽  
Matthew Grainger ◽  
David O'Brien ◽  
...  

Making the reasoning and evidence behind conservation decisions clear and transparent is a key challenge for the conservation community. Combining evidence from diverse sources (e.g., scientific and non-scientific information) in decision-making is similarly difficult. Our group of conservation researchers and practitioners has co-produced an intuitive tool and template (Evidence-to-Decision (E2D) tool: www.evidence2decisiontool.com) to guide practitioners through a structured process to transparently document and report the evidence and reasoning behind decisions. The tool has three major steps: 1. Define the Decision Context; 2. Gather Evidence; and 3. Make an Evidence-Based Decision. In each step, practitioners enter information (e.g., from the scientific literature, practitioner knowledge and experience, and costs) to inform their decision-making and document their reasoning. The tool packages this information into a customised downloadable report (or, if the offline template is used, the information is documented there), which we hope can stimulate the exchange of information on decisions within and between organisations. By enabling practitioners to revisit how and why past decisions were made, and to integrate diverse forms of evidence, we believe our open-access tool and template can help increase the transparency and quality of decision-making in conservation.
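For illustration only, the sketch below shows one way the three steps described above could be captured in code and packaged into a shareable report. It is not the E2D tool's actual implementation; all class and field names are assumptions.

```python
# Minimal sketch of an Evidence-to-Decision record (illustrative only, not the
# E2D tool's implementation). Names and fields are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceItem:
    source_type: str       # e.g. "scientific literature", "practitioner experience", "costs"
    summary: str           # what the evidence says
    reliability_note: str  # the practitioner's own assessment of its reliability

@dataclass
class EvidenceToDecisionRecord:
    decision_context: str                                        # Step 1: Define the Decision Context
    evidence: List[EvidenceItem] = field(default_factory=list)   # Step 2: Gather Evidence
    decision: str = ""                                           # Step 3: Make an Evidence-Based Decision
    reasoning: str = ""

    def report(self) -> str:
        """Package the entries into a plain-text report that can be shared and revisited."""
        lines = [f"Decision context: {self.decision_context}", "Evidence:"]
        lines += [f"  - [{e.source_type}] {e.summary} ({e.reliability_note})" for e in self.evidence]
        lines += [f"Decision: {self.decision}", f"Reasoning: {self.reasoning}"]
        return "\n".join(lines)
```

A record built this way could be archived alongside the decision it documents, so that colleagues can later see which evidence was considered and why a given action was chosen.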

Author(s):  
Andrew W. Brown ◽  
Tapan S. Mehta ◽  
David B. Allison

When we rely on science to inform decisions about matters such as the environment, teaching strategies, economics, government, and medicine, evidence-based decision-making can only be as reliable as the totality of the science itself. We must avoid distortions of the scientific literature such as publication bias: an expected systematic difference between estimates of associations, causal effects, or other quantities of interest and the actual values of those quantities, caused by differences between the research that is published and the totality of research conducted. Publication bias occurs when the probability that a study's result is published depends on the result obtained. It appears to be common and can produce misleading conclusions about interventions, make effects appear larger than they are, lead to irreproducible research, and ultimately undermine the credibility of science in general. Methods to detect publication bias and steps to reduce it are discussed.
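As a minimal illustration of the mechanism described above (not an analysis from the article), the following simulation assumes a small true effect and "publishes" only studies whose results reach p < 0.05; the mean effect among published studies then overstates the true effect.

```python
# Illustrative simulation: selective publication of significant results
# inflates the apparent effect size. Assumed values: true standardized
# effect of 0.2, 30 participants per arm, 2000 simulated studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect, n_per_arm, n_studies = 0.2, 30, 2000

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    # Cohen's d for this study (pooled standard deviation)
    d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2
    )
    p = stats.ttest_ind(treatment, control).pvalue
    all_effects.append(d)
    if p < 0.05:  # the "journal" publishes only significant results
        published_effects.append(d)

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"Mean effect, published only: {np.mean(published_effects):.2f}")
```

Comparing the last two lines of output shows the expected systematic difference between the published literature and the totality of research conducted.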


2014 ◽  
Vol 5 (3) ◽  
pp. 303-308 ◽  
Author(s):  
Marie–Valentine Florin

Collection and provision of scientific information for policy and decision-making is particularly important during emergencies or when uncertainty and ambiguity create situations of fear and anxiety. This article offers two suggestions for addressing natural or technological risks, drawing on research by the International Risk Governance Council (IRGC) and project contributors. The first is that concepts and instruments for risk governance be recognised and used as intermediation between evidence and policy. The second is that the role of the Chief Scientific Adviser in public sector organisations include that of the Chief Risk Officer. These suggestions could help policymakers deal with uncertainty and emergency when little or contradictory evidence is available.


2014 ◽  
Vol 67 (5) ◽  
pp. 790-794 ◽  
Author(s):  
Iván Arribas ◽  
Irene Comeig ◽  
Amparo Urbano ◽  
José Vila

2020 ◽  
pp. 204138662098341
Author(s):  
Marvin Neumann ◽  
A. Susan M. Niessen ◽  
Rob R. Meijer

In personnel and educational selection, a substantial gap exists between research and practice: evidence-based assessment instruments and decision-making procedures are underutilized. We provide an overview of studies that investigated interventions to encourage the use of evidence-based assessment methods, or factors related to their use. The most promising studies were grounded in self-determination theory. Training and autonomy in the design of evidence-based assessment methods were positively related to their use, while negative stakeholder perceptions decreased practitioners' intentions to use them. Use of evidence-based decision-making procedures was positively related to access to such procedures, information on how to use them, and autonomy over the procedure, but negatively related to receiving outcome feedback. A review of the professional selection literature showed that the implementation of evidence-based assessment was hardly discussed. We conclude with an agenda for future research on encouraging evidence-based assessment practice.


2009 ◽  
Vol 12 ◽  
pp. S12-S17 ◽  
Author(s):  
Gordon G. Liu ◽  
Takashi Fukuda ◽  
Chien Earn Lee ◽  
Vivian Chen ◽  
Qiang Zheng ◽  
...  

2021 ◽  
Vol 24 ◽  
pp. S186
Author(s):  
R. Kumar ◽  
C. Suharlim ◽  
A. Amaris Caruso ◽  
C. Gilmartin ◽  
M. Mehra ◽  
...  
