Summary of Feedback on the What Works Clearinghouse Draft Standards for Scientific Evidence on Educational Effectiveness

2008 ◽  
Vol 32 (1) ◽  
pp. 101-107
Author(s):  
Genevieve McArthur

The What Works Clearinghouse (WWC) provides online reports to the public about the scientific evidence for educational interventions. The quality of these reports is important because they effectively tell the non-scientific community which programmes do and do not work. The aim of this brief review is to assess WWC's report on a clinically popular, yet theoretically controversial, intervention called Fast ForWord® (FFW). Some of the methods used by WWC to assess FFW were problematic: the literature review included studies that had not passed peer review; it failed to include a key study that had passed peer review; alphabetic skills were assessed with phonological awareness outcomes; effectiveness ratings were based on statistical significance; terms peculiar to WWC were not clearly defined; and existing quality control procedures failed to detect an error in the WWC report. These problems could be addressed by making minor adjustments to WWC's existing methods and by subjecting WWC reports to the scientific peer-review process before they are released to the public.


2020 ◽  
Author(s):  
Corey Peltier ◽  
Tiffany K Peltier ◽  
Taylor Werthen ◽  
Andy Heuer

Access to high-quality resources is integral for educators to provide research-aligned mathematics instruction. Identifying the supplemental resources educators use to plan mathematics instruction can inform the ways researchers and organizations disseminate research-based practices. The goal of this study was to identify the frequency with which early childhood educators (i.e., pre-Kindergarten through third grade) reported using various resources to plan for mathematics instruction. Furthermore, we investigated whether differences were observed based on teacher factors (i.e., general or special education, route to certification, years of experience) and locale (i.e., rural, urban, suburban). We retained data from 917 teachers for data analysis. The three most frequently reported resources by educators were colleagues, Teachers Pay Teachers, and Google/Yahoo. The three least frequently reported resources were the typical outlets researchers use to reach teachers: What Works Clearinghouse, Teaching Exceptional Children, and Teaching Children Mathematics. General and special education teachers differed in their self-reported usage of five resources: colleagues, Google/Yahoo, teaching blogs, Teaching Exceptional Children, and the What Works Clearinghouse. Rural educators self-reported that they were less likely than suburban educators to use colleagues or specialists at the district to plan instruction. Implications for future research and practice are discussed.


2016 ◽  
Vol 45 (8) ◽  
pp. 454-459 ◽  
Author(s):  
David B. Malouf ◽  
Juliana M. Taymans

An analysis was conducted of the What Works Clearinghouse (WWC) research evidence base on the effectiveness of replicable education interventions. Most interventions were found to have little or no support from technically adequate research studies, and intervention effect sizes were of questionable magnitude to meet education policy goals. These findings painted a dim picture of the evidence base on education interventions and indicated a need for new approaches, including a reexamination of federal reliance on experimental impact research as the basis for gauging intervention effectiveness.


2020 ◽  
pp. 104420732093404 ◽  
Author(s):  
Collin Shepley ◽  
Kathleen N. Zimmerman ◽  
Kevin M. Ayres

The implementation of research-based practices by teachers in public school classrooms is required under federal law as expressed in the Individuals With Disabilities Education Improvement Act of 2004. To aid teachers in identifying such practices, researchers conduct systematic reviews of the educational literature. Although recent attention has been given to changes in the quality of these reviews, there has been minimal discussion about changes in the quality of the studies that comprise them. Specifically, to what extent have educational policies leading to the creation of experimental design standards resulted in a change in the rigor of educational research? Using a subset of the single-case literature commonly published in special education journals, we estimate the impact of What Works Clearinghouse single-case design standards on the trend in the rigor of single-case studies using a comparative interrupted time series framework. Within this subset of single-case studies, our estimation strategy did not detect a change in the trend of the rigor of single-case research following the establishment of What Works Clearinghouse single-case design standards. Implications are discussed for practitioners and researchers. Study data, syntax, and supplemental materials are available for public use at https://osf.io/xp7wv/.


2017 ◽  
Vol 38 (4) ◽  
pp. 233-245 ◽  
Author(s):  
Min Kyung Kim ◽  
John William McKenna ◽  
Yujeong Park

The purpose of this study was to investigate the evidence base for using computer-assisted instruction (CAI) to improve the reading comprehension of students with learning disabilities (LD). Twelve peer-reviewed studies (seven comparison group studies, five single-case studies) met selection criteria and were evaluated according to the relevant What Works Clearinghouse (WWC) procedures and standards. Results showed that seven studies (five comparison group and two single-case studies) met WWC standards with or without reservations. Key instructional features employed in CAI studies meeting the WWC standards without reservations included practice opportunities, self-correction and immediate corrective feedback, teacher-directed instruction, and contingencies for enhancing student motivation and engagement. Implications for future research and suggestions for using quality indicators to improve the rigor of future CAI investigations are discussed.


2014 ◽  
pp. 44-52
Author(s):  
Juan C. Ripoll Salceda

This study examines which programmes for improving the reading comprehension of Spanish-speaking students meet the What Works Clearinghouse criteria to be considered evidence-based interventions. To that end, the studies included in a systematic review were evaluated. After calculating the effect sizes found in each investigation and adjusting the significance of their results, none of the interventions could be shown to merit the rating of "positive effects." Four were rated as having "potentially positive effects," two others as having "no discernible effects," and one more as having potentially negative effects. A serious gap is thus identified in the research on methods for improving the reading comprehension of Spanish-speaking students.

