Clearinghouse Standards of Evidence on the Transparency, Openness, and Reproducibility of Intervention Evaluations

2020 ◽  
Author(s):  
Evan Mayo-Wilson ◽  
Sean Grant ◽  
Lauren Supplee

Clearinghouses are influential repositories of information on the effectiveness of social interventions. To identify which interventions are “evidence-based”, clearinghouses evaluate empirical research using published standards of evidence that focus on study design features. Study designs that support causal inferences are necessary but insufficient for intervention evaluations to produce true results. The use of open science practices can improve the probability that evaluations produce true results and increase trust in research. In this study, we examined the degree to which the policies, procedures, and practices of 10 federal evidence clearinghouses consider the transparency, openness, and reproducibility of intervention evaluations. We found that seven clearinghouses consider at least one open science practice: replication (6 of 10 clearinghouses), public availability of results (6), investigator conflicts of interest (3), design and analysis transparency (3), study registration (2), and protocol sharing (1). We did not identify any policies, procedures, or practices related to analysis plan registration, data sharing, code sharing, materials sharing, and citation standards. Clearinghouse processes and standards could be updated to promote research transparency and reproducibility by reporting whether evaluations used open science practices, incorporating open science practices in their standards for receiving “evidence-based” designations, and verifying that evaluations used open science practices. Doing so could improve research quality, increase trustworthiness of evidence used for policy making, and support the evidence ecosystem to adopt open science practices.

Author(s):  
Evan Mayo-Wilson ◽  
Sean Grant ◽  
Lauren H. Supplee

Abstract: Clearinghouses are influential repositories of information on the effectiveness of social interventions. To identify which interventions are “evidence-based,” clearinghouses review intervention evaluations using published standards of evidence that focus primarily on internal validity and causal inferences. Open science practices can improve trust in evidence from evaluations on the effectiveness of social interventions. Including open science practices in clearinghouse standards of evidence is one of many efforts that could increase confidence in designations of interventions as “evidence-based.” In this study, we examined the policies, procedures, and practices of 10 federal evidence clearinghouses that review preventive interventions—an important and influential subset of all evidence clearinghouses. We found that seven consider at least one open science practice when evaluating interventions: replication (6 of 10 clearinghouses), public availability of results (6), investigator conflicts of interest (3), design and analysis transparency (3), study registration (2), and protocol sharing (1). We did not identify any policies, procedures, or practices related to analysis plan registration, data sharing, code sharing, material sharing, and citation standards. We provide a framework with specific recommendations to help federal and other evidence clearinghouses implement the Transparency and Openness Promotion (TOP) Guidelines. Our proposed “TOP Guidelines for Clearinghouses” includes reporting whether evaluations used open science practices, incorporating open science practices in their standards for receiving “evidence-based” designations, and verifying that evaluations used open science practices. Doing so could increase the trustworthiness of evidence used for policy making and support improvements throughout the evidence ecosystem.


Author(s):  
Lauren H. Supplee ◽  
Robert T. Ammerman ◽  
Anne K. Duggan ◽  
John A. List ◽  
Dana Suskind

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Hollen N. Reischer ◽  
Henry R. Cowan

A robust dialogue about the (un)reliability of psychological science findings has emerged in recent years. In response, metascience researchers have developed innovative tools to increase rigor, transparency, and reproducibility, stimulating rapid improvement and adoption of open science practices. However, existing reproducibility guidelines are geared toward purely quantitative study designs. This leaves some ambiguity as to how such guidelines should be implemented in mixed methods (MM) studies, which combine quantitative and qualitative research. Drawing on extant literature, our own experiences, and feedback from 79 self-identified MM researchers, the current paper addresses two main questions: (a) how and to what extent do existing reproducibility guidelines apply to MM study designs; and (b) can existing reproducibility guidelines be improved by incorporating best practices from qualitative research and epistemology? In answer, we offer 10 key recommendations for use within and outside of MM research. Finally, we argue that good science and good ethical practice are mutually reinforcing and lead to meaningful, credible science.


2019 ◽  
Vol 3 (4) ◽  
Author(s):  
Derek M Isaacowitz ◽  
Majse Lind

Abstract In response to concerns about the replicability of published research, some disciplines have used open science practices to try to enhance the credibility of published findings. Gerontology has been slow to embrace these changes. We argue that open science is important for aging research, both to reduce questionable research practices that may also be prevalent in the field (such as too many reported significant age differences in the literature, underpowered studies, hypothesizing after the results are known, and lack of belief updating when findings do not support theories), as well as to make research in the field more transparent overall. To ensure the credibility of gerontology research moving forward, we suggest concrete ways to incorporate open science into gerontology research: for example, by using available preregistration templates adaptable to a variety of study designs typical for aging research (even secondary analyses of existing data). Larger sample sizes may be achieved by many-lab collaborations. Though using open science practices may make some aspects of gerontology research more challenging, we believe that gerontology needs open science to ensure credibility now and in the future.


2020 ◽  
Author(s):  
Anthony J. Roberson ◽  
Ryan L. Farmer ◽  
Steven Shaw ◽  
Shelley Upton ◽  
Imad Zaheer

Trustworthy scientific evidence is essential if school psychologists are to use evidence-based practices to solve the big problems students, teachers, and schools face. Open science practices promote transparency, accessibility, and robustness of research findings, which increases the trustworthiness of scientific claims. Simply put, when researchers, trainers, and practitioners can ‘look under the hood’ of a study, (a) the researchers who conducted the study are likely to be more cautious, (b) reviewers are better able to engage the self-correcting mechanisms of science, and (c) readers have more reason to trust the research findings. We discuss questionable research practices that reduce the trustworthiness of evidence; specific open science practices; applications specific to researchers, trainers, and practitioners in school psychology; and next steps in moving the field toward openness and transparency.


2020 ◽  
Author(s):  
Sean Grant ◽  
Kathleen Wendt ◽  
Bonnie J. Leadbeater ◽  
Lauren H. Supplee ◽  
Evan Mayo-Wilson ◽  
...  

The field of prevention science aims to understand societal problems, identify effective interventions, and translate scientific evidence into policy and practice. There is growing interest among prevention scientists in transparency, openness, and reproducibility. Open science provides opportunities to align scientific practice with scientific ideals, accelerate scientific discovery, and broaden access to scientific knowledge. Open science also addresses key challenges to the credibility of prevention science, such as irreproducibility of results, selective non-reporting (publication bias, outcome reporting bias), and other detrimental research practices. The overarching goal of this paper is to provide an overview of open science practices for prevention science researchers, and to identify key stakeholders and resources to support implementation of these practices. We consider various aspects of applying open science practices in prevention science, such as identifying evidence-based interventions. In addition, we call for the adoption of prevention science practices in the open science movement, such as the use of program planning principles to develop, implement, and evaluate open science efforts. We also identify some challenges that need to be considered in the transition to a transparent, open, and reproducible prevention science. Throughout, we identify activities that will strengthen the reliability and efficiency of prevention science, facilitate access to its products and outputs, and promote collaborative and inclusive participation in research activities. We conclude with the notion that prevention scientists are well-positioned to engage with the open science movement, especially given their expertise in examining and addressing complex social and behavioral issues. By embracing transparency, openness, and reproducibility, prevention science can better achieve its mission to advance evidence-based solutions to promote well-being.


2021 ◽  
Author(s):  
Eric R. Louderback ◽  
Sally M Gainsbury ◽  
Robert Heirene ◽  
Karen Amichia ◽  
Alessandra Grossman ◽  
...  

The replication crisis has stimulated researchers around the world to adopt open science research practices intended to reduce publication bias and improve research quality. Open science practices include study pre-registration, open data, open publication, and avoiding methods that can lead to publication bias and low replication rates. Although gambling studies uses similar research methods to behavioral research fields that have struggled with replication, we know little about the uptake of open science research practices in gambling-focused research. We conducted a scoping review of 500 recent (1/1/2016 – 12/1/2019) studies focused on gambling and problem gambling to examine the use of open science and transparent research practices. Our results showed that although 54.6% (95% CI: [50.2, 58.9]) of studies used at least one of nine open science practices, the prevalence of each individual practice was low: 1.6% for pre-registration (95% CI: [0.8, 3.1]), 3.2% for open data (95% CI: [2.0, 5.1]), 0% for open notebook, 35.2% for open access (95% CI: [31.1, 39.5]), 7.8% for open materials (95% CI: [5.8, 10.5]), 1.4% for open code (95% CI: [0.7, 2.9]), and 15.0% for preprint posting (95% CI: [12.1, 18.4]). In all, 6.4% (95% CI: [4.6, 8.9]) used a power analysis, and 2.4% (95% CI: [1.4, 4.2]) of the studies were replication studies. Exploratory analyses showed that studies that used any open science practice, and open access in particular, had higher citation counts. We suggest several practical ways to enhance the uptake of open science principles and practices both within gambling studies and in science more broadly.
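The bracketed intervals in the abstract above are consistent with Wilson score confidence intervals for proportions. As an illustrative check (assuming the headline figure corresponds to 273 of 500 studies, i.e., 54.6%; the function name and count are ours, not the authors'), a minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

# 54.6% of 500 studies ≈ 273 studies using at least one open science practice
lo, hi = wilson_ci(273, 500)
print(round(lo * 100, 1), round(hi * 100, 1))  # → 50.2 58.9
```

The result reproduces the reported interval [50.2, 58.9]; unlike the simpler Wald interval, the Wilson interval remains well behaved for the very small proportions reported here (e.g., 1.6% pre-registration).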


2020 ◽  
Author(s):  
Yorgo Hoebeke ◽  
Olivier Desmedt ◽  
Betül Özçimen ◽  
Alexandre Heeren

Introduction. Broadly considered a transdiagnostic feature of psychological disorders, rumination is associated with lower treatment response, slower recovery rates, and higher relapse rates. Accordingly, research has focused on the development of interventions to alleviate rumination. Recently, transcranial direct current stimulation (tDCS) has emerged as a promising tool to do so. Methods. We performed a systematic review of sham-controlled tDCS studies targeting rumination among healthy participants or patients with psychiatric disorders, investigating the effectiveness of tDCS in reducing rumination and assessing the research quality of this nascent field. Results. We identified nine studies, five of which reported a significant impact of tDCS on rumination. We also outlined a few tDCS parameters (e.g., stimulation duration, electrode size) and research-design features (e.g., within- versus between-subjects designs) characterizing those positive-finding studies. However, these studies were marked by substantial heterogeneity (e.g., methodological flaws, lack of open science practices), precluding any definite statement about the best way to target rumination via tDCS. Moreover, several serious methodological limitations were present across these studies. Discussion. Although our systematic review identifies the strengths and weaknesses of the available research on the impact of tDCS on rumination, it calls for strong efforts to address this nascent field's current methodological shortcomings. We discuss how open science practices can help usher this field forward.


2021 ◽  
Author(s):  
Katherine S. Corker

Part of what distinguishes science from other ways of knowing is that scientists show their work. Yet when probed, it turns out that much of the process of research is hidden away: in personal files, in undocumented conversations, in point-and-click menus, and so on. In recent years, a movement towards more open science has arisen in psychology. Open science practices capture a broad swath of activities designed to take parts of the research process that were previously known only to a research team and make them more broadly accessible (e.g., open data, open analysis code, pre-registration, open research materials). Such practices increase the value of research by increasing transparency, which may in turn facilitate higher research quality. Plus, open science practices are now required at many journals. This chapter will introduce open science practices and provide plentiful resources for researchers seeking to integrate these practices into their workflow.

