Considerations for Evidence Frameworks in Education Research

2021, Vol 45 (1), pp. 101-128
Author(s):  
Joseph A. Taylor ◽  
Elisabeth Davis ◽  
Laura E. Michaelson

In this chapter, we describe and compare the standards for evidence used by three entities that review studies of education interventions: Blueprints for Healthy Youth Development, Social Programs that Work, and the What Works Clearinghouse. Based on direct comparisons of the evidence frameworks, we identify key differences in the level at which effectiveness ratings are granted (i.e., intervention vs. outcome domain), as well as in how each entity prioritizes intervention documentation, researcher independence, and sustained versus immediate effects. Because such differences in priorities may result in contradictory intervention ratings between entities, we offer a number of recommendations for a common set of standards that would harmonize effectiveness ratings across the three entities while preserving differences that allow for variation in user priorities. These include disentangling study rigor from intervention effectiveness, ceasing vote counting procedures, adding replication criteria, adding fidelity criteria, assessing baseline equivalence for randomized studies, making quasi-experiments eligible for review, adding criteria for researcher independence, and providing effectiveness ratings at the level of the outcome domain rather than the intervention.

2016, Vol 45 (8), pp. 454-459
Author(s):  
David B. Malouf ◽  
Juliana M. Taymans

An analysis was conducted of the What Works Clearinghouse (WWC) research evidence base on the effectiveness of replicable education interventions. Most interventions were found to have little or no support from technically adequate research studies, and intervention effect sizes were of questionable magnitude to meet education policy goals. These findings painted a dim picture of the evidence base on education interventions and indicated a need for new approaches, including a reexamination of federal reliance on experimental impact research as the basis for gauging intervention effectiveness.


Author(s):  
Abigail A. Fagan ◽  
J. David Hawkins ◽  
Richard F. Catalano ◽  
David P. Farrington

This chapter discusses the challenges of identifying preventive interventions as effective and of assisting community coalitions to learn about and select evidence-based interventions (EBIs) that are a good fit for their community. The scientific standards used to determine intervention effectiveness by various lists of “what works” are compared, and Blueprints for Healthy Youth Development is highlighted as the database used by coalitions implementing the Communities That Care (CTC) system. The importance of community resource assessments is also discussed, including the steps taken by CTC coalitions to evaluate their current resources, identify gaps in the delivery of preventive interventions, and determine whether current services need to be expanded or new EBIs should be implemented.


2021, Vol 102 (7), p. 4
Author(s):  
Rafael Heller

Since the field of education research emerged, complaints have proliferated about its quality and researchers’ failure to share findings with practitioners. Federal initiatives such as the What Works Clearinghouse have sought to increase the availability of research, but many researchers have continued to be disconnected from practicing K-12 educators. Rafael Heller explains that the research-practice partnerships described in the April 2021 Kappan show promise for bridging the divide.


2020
Author(s):  
Corey Peltier ◽  
Tiffany K. Peltier ◽  
Taylor Werthen ◽  
Andy Heuer

Access to high-quality resources is integral for educators to provide research-aligned mathematics instruction. Identifying the supplemental resources educators use to plan mathematics instruction can inform the ways researchers and organizations disseminate research-based practices. The goal of this study was to identify the frequency with which early childhood educators (i.e., pre-Kindergarten through third grade) reported using various resources to plan for mathematics instruction. Furthermore, we investigated whether differences were observed based on teacher factors (i.e., general or special education, route to certification, years of experience) and locale (i.e., rural, urban, suburban). We retained data from 917 teachers for data analysis. The three resources educators reported using most frequently were colleagues, Teachers Pay Teachers, and Google/Yahoo. The three least frequently reported resources were the typical outlets researchers use to reach teachers: What Works Clearinghouse, Teaching Exceptional Children, and Teaching Children Mathematics. General and special education teachers differed in their self-reported usage of five resources: colleagues, Google/Yahoo, teaching blogs, Teaching Exceptional Children, and the What Works Clearinghouse. Rural educators self-reported that they were less likely than suburban educators to use colleagues or specialists at the district to plan instruction. Implications for future research and practice are discussed.


2013, Vol 79 (2), pp. 163-180
Author(s):  
Bryan G. Cook ◽  
Lysandra Cook ◽  
Timothy J. Landrum

Although researchers in special education have made significant advances in defining and identifying evidence-based practices, scholars often constitute an insular group that disseminates research findings primarily through outlets and venues targeting like-minded researchers using traditional approaches. Thus, despite tangible results in determining what works, using dissemination approaches that fail to resonate with or influence practitioners represents an important but often overlooked contributor to the ongoing research-to-practice gap in special education. The authors argue that empirical and theoretical literature outside of special education may offer insight into how ideas take hold, which may be especially relevant to the effective dissemination of evidence-based practices. Drawing on Heath and Heath's (2008) model, the authors describe 6 characteristics of messages that are likely to “stick”: (a) simple, (b) unexpected, (c) concrete, (d) credible, (e) emotional, and (f) stories. The authors consider each in terms of implications for dissemination of special education research findings, and urge special education researchers to consider researching, refining, and applying dissemination strategies that can make special education research matter on a broader scale.


2020, Article 104420732093404
Author(s):  
Collin Shepley ◽  
Kathleen N. Zimmerman ◽  
Kevin M. Ayres

The implementation of research-based practices by teachers in public school classrooms is required under federal law as expressed in the Individuals With Disabilities Education Improvement Act of 2004. To aid teachers in identifying such practices, researchers conduct systematic reviews of the educational literature. Although recent attention has been given to changes in the quality of these reviews, there has been minimal discussion about changes in the quality of the studies that comprise them. Specifically, to what extent have educational policies leading to the creation of experimental design standards resulted in a change in the rigor of educational research? Using a subset of the single-case literature commonly published in special education journals, we estimate the impact of What Works Clearinghouse single-case design standards on the trend in the rigor of single-case studies using a comparative interrupted time series framework. Within this subset of single-case studies, our estimation strategy did not detect a change in the trend of the rigor of single-case research following the establishment of What Works Clearinghouse single-case design standards. Implications are discussed for practitioners and researchers. Study data, syntax, and supplemental materials are available for public use at https://osf.io/xp7wv/.


2018, Vol 49 (1), pp. 104-110
Author(s):  
Kathleen Melhuish ◽  
Eva Thanheiser

As mathematics education researchers, our goal in publishing papers is to advance the field. To contribute in this manner, we must value not just novelty but also rigorous science that tests the generalizability of work in our field. This is especially important in education research, where it is impossible to have the clear, delineated, randomized studies that may exist in the hard sciences. Each study is situated in any number of contextual variables, from the particular group of students and teachers to the nature of any particular school setting. In this issue, we present two sets of replication studies (Melhuish, 2018, and Thanheiser, 2018) aiming to confirm, refute, or expand prior work. In the same issue, Schoenfeld (2018) and Star (2018) comment on these studies by raising broader questions about when replication studies are warranted in mathematics education, which studies should be published, and what exactly is meant by replication studies. We respond to the challenges posed by Schoenfeld and Star by making two points.

