Systematic reviews as object to study relevance assessment processes

2020
Author(s): Ingeborg Jäger-Dengler-Harles, Tamara Heck, Marc Rittberger

Introduction. Systematic reviews are a method to synthesise research results for evidence-based decision-making on a specific question. Processes of information seeking and behaviour play a crucial role and may strongly influence the outcomes of a review. This paper proposes an approach to understanding the relevance assessment and decision-making of researchers who conduct systematic reviews.
Method. A systematic review was conducted to build up a database for text-based qualitative analyses of researchers' decision-making in review processes.
Analysis. The analysis focuses on the selection process for retrieved articles and introduces the method used to investigate researchers' relevance assessment processes.
Results. There are different methods for conducting reviews in research, and relevance assessment of documents within those processes is neither one-directional nor standardised. Research on the information behaviour of researchers involved in such reviews has not examined relevance assessment steps and their influence on a review's outcomes.
Conclusions. One reason for the variety and inconsistency of review types might be that information seeking and relevance assessment are far more complex than assumed, and researchers might not be able to reflect on their concrete decisions. This paper proposes a research study to investigate researcher behaviour while synthesising research results for evidence-based decision-making.

Author(s): Aminu Bello, Ben Vandermeer, Natasha Wiebe, Amit X. Garg, Marcello Tonelli

2021
Author(s): Trina Rytwinski, Steven J Cooke, Jessica J Taylor, Dominique Roche, Paul A Smith, et al.

Evidence-based decision-making often depends on some form of synthesis of previous findings. There is growing recognition that systematic reviews, which incorporate a critical appraisal of evidence, are the gold-standard synthesis method in applied environmental science. Yet, on a daily basis, environmental practitioners and decision-makers are forced to act even if the evidence base to guide them is insufficient. For example, it is not uncommon for a systematic review to conclude that an evidence base is large but of low reliability. There are also instances where the evidence base is sparse (e.g., one or two empirical studies on a particular taxon or intervention), and no additional evidence arises from a systematic review. In some cases, the systematic review highlights considerable variability in the outcomes of primary studies, which in turn generates ambiguity (e.g., findings may be context-specific). When the environmental evidence base is ambiguous, biased, or lacking new information, practitioners must still make management decisions. Waiting for new, higher-validity research to be conducted is often unrealistic, as many decisions are urgent. Here, we identify the circumstances that can lead to ambiguity, bias, and the absence of additional evidence arising from systematic reviews, and provide practical guidance for resolving or handling these scenarios when encountered. Our perspective highlights that, with evidence synthesis, there may be a need to balance the spirit of evidence-based decision-making with the practical reality that management and conservation decisions and actions are often time-sensitive.


Evaluation
2005, Vol 11 (1), pp. 95-109
Author(s): William R. Shadish, Salvador Chacón-Moscoso, Julio Sánchez-Meca

2014, Vol 67 (5), pp. 790-794
Author(s): Iván Arribas, Irene Comeig, Amparo Urbano, José Vila

2020
pp. 204138662098341
Author(s): Marvin Neumann, A. Susan M. Niessen, Rob R. Meijer

In personnel and educational selection, a substantial gap exists between research and practice, since evidence-based assessment instruments and decision-making procedures are underutilized. We provide an overview of studies that investigated interventions to encourage the use of evidence-based assessment methods, or factors related to their use. The most promising studies were grounded in self-determination theory. Training and autonomy in the design of evidence-based assessment methods were positively related to their use, while negative stakeholder perceptions decreased practitioners' intentions to use evidence-based assessment methods. Use of evidence-based decision-making procedures was positively related to access to such procedures, information on how to use them, and autonomy over the procedure, but negatively related to receiving outcome feedback. A review of the professional selection literature showed that the implementation of evidence-based assessment was hardly discussed. We conclude with an agenda for future research on encouraging evidence-based assessment practice.
