A Review of the Evidence for Real-Time Performance Feedback to Improve Instructional Practice

2019, Vol. 54(2), pp. 90-100
Author(s): Anne C. Sinclair, Samantha A. Gesel, Lauren M. LeJeune, Christopher J. Lemons

In this comprehensive review, 32 studies were identified in which researchers investigated the effect of real-time performance feedback delivered via technology on interventionist implementation of instructional practices. Studies were evaluated for methodological rigor using quality indicators from the Council for Exceptional Children. Twenty-two single-case designs and one group design met all quality indicators. The single-case designs were evaluated via visual analysis and assigned success estimates, calculated as the ratio of the number of demonstrated effects to the number of potential demonstrations of effect. Methodologically sound evidence indicates that real-time performance feedback is an evidence-based practice for changing interventionist behavior during intervention sessions. Implications for research and practice are discussed.
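To make the success-estimate metric concrete, here is a minimal sketch in Python; the function name and the example counts are illustrative, not drawn from the review itself.

```python
def success_estimate(demonstrated_effects: int, potential_demonstrations: int) -> float:
    """Ratio of demonstrated effects to potential demonstrations of effect."""
    if potential_demonstrations <= 0:
        raise ValueError("potential_demonstrations must be positive")
    return demonstrated_effects / potential_demonstrations

# Hypothetical example: a multiple-baseline study offers 4 potential
# demonstrations of effect, 3 of which are evident under visual analysis.
print(success_estimate(3, 4))  # 0.75
```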

2018, Vol. 40(3), pp. 131-149
Author(s): Elizabeth A. Stevens, Sunyoung Park, Sharon Vaughn

This systematic review examines the effects of summarizing and main idea interventions on the reading comprehension outcomes of struggling readers in Grades 3 through 12. A comprehensive search identified 30 studies published in peer-reviewed journals between 1978 and 2016. Studies included struggling readers in Grades 3 through 12; targeted summarizing or main idea instruction; used an experimental, quasi-experimental, or single-case design; and included a reading comprehension outcome. A meta-analysis of 23 group-design studies yielded a statistically significant mean effect of 0.97. Group size, number of sessions, grade level, and publication year did not moderate the treatment effect. Visual analysis of six single-case design studies yielded strong evidence for retell measures and mixed evidence for short-answer comprehension measures. Findings suggest that main idea and summarizing instruction may improve struggling readers' main idea identification and reading comprehension. Limitations include the lack of standardized measures and inconsistent or missing descriptions of the counterfactual.
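As an illustration of how study-level effects can be pooled into a mean effect like the 0.97 reported above, here is a minimal fixed-effect (inverse-variance) sketch in Python. The review's actual meta-analytic model may differ (e.g., random effects), and the effect sizes and variances below are hypothetical.

```python
import numpy as np

def inverse_variance_mean(effects, variances):
    """Fixed-effect pooled mean: weight each effect by the inverse of its variance."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))  # standard error of the pooled estimate
    return mean, se

# Hypothetical study-level effect sizes and sampling variances.
g = [0.80, 1.10, 0.90, 1.20]
v = [0.05, 0.08, 0.04, 0.10]
mean, se = inverse_variance_mean(g, v)
print(f"pooled effect = {mean:.2f}, SE = {se:.2f}")
```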


Author(s): Sunyoung Kim, Min-Chi Yan, Jing Wang, Jenna Lequia

Poverty as a cultural factor affects students' school success and outcomes. In this literature review, we aimed to provide a comprehensive analysis of intervention research designed to support the school outcomes of students aged 3 to 21 years with disabilities, or at risk for developing disabilities, in high-poverty contexts. Eighteen studies were included in this review (16 group designs, 1 single-case design, and 1 group design with an embedded single case), with a total of 1,782 student participants. Results indicated that most of the studies designed for students in poverty targeted language skills (e.g., reading, vocabulary, literacy) through a variety of interventions. Most of the group-design studies met the quality indicators (Gersten et al., 2009) only at a low level, whereas all of the single-case studies met more than 80% of the quality indicators (Kratochwill et al., 2013). In the analysis of cultural responsiveness, we found that most studies provided limited information reflecting culturally responsive research (Trainor & Bal, 2014). Discussion and implications for practice and research are provided.


2020
Author(s): Marc J. Lanovaz, Jordan D. Bailey

Since the start of the 21st century, few advances have had consequences as far-reaching in science as the widespread adoption of artificial neural networks in fields as diverse as fundamental physics, clinical medicine, and social networking. In behavior analysis, one promising application of neural networks is the analysis of single-case graphs. However, few behavior analysts have any training in these methods, which may limit progress in this area. The purpose of our tutorial is to address this issue by providing a step-by-step description of how to use artificial neural networks to improve the analysis of single-case graphs. To this end, we trained a new model on simulated data to analyze multiple-baseline graphs and compared its outcomes to those of visual analysis on a previously published dataset. In addition to showing that artificial neural networks may outperform visual analysis, the tutorial provides information to facilitate the replication and extension of this line of work to other datasets and designs.
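The tutorial's own code and model are not reproduced here, but the general idea, training a classifier on simulated single-case data to detect a treatment effect, can be sketched with scikit-learn's MLPClassifier. The simulation parameters (phase lengths, effect size, network architecture) are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_ab(n_a=5, n_b=10, effect=0.0):
    """Simulate one AB series: baseline noise plus an optional level shift in phase B."""
    a = rng.normal(0.0, 1.0, n_a)
    b = rng.normal(effect, 1.0, n_b)
    return np.concatenate([a, b])

# Half the simulated graphs have a level change in the treatment phase, half do not.
X = np.array([simulate_ab(effect=e) for e in ([0.0] * 500 + [2.0] * 500)])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```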


Methodology, 2010, Vol. 6(2), pp. 49-58
Author(s): Rumen Manolov, Antonio Solanas, David Leiva

Effect size indices are indispensable for carrying out meta-analyses and can also serve as an alternative basis for deciding whether a treatment was effective in an individual applied study. Desirable features of procedures for quantifying the magnitude of an intervention effect include educational or clinical meaningfulness, ease of computation, insensitivity to autocorrelation, and low false-alarm and miss rates. Three effect size indices related to visual analysis are compared on these criteria, using data sets with known parameters: degree of serial dependence, presence or absence of general trend, and changes in level and/or slope. The percentage of nonoverlapping data (PND) discriminated best between data sets with and without an intervention effect. When autocorrelation or trend is present, the percentage of data points exceeding the median (PEM) may be a better option for quantifying the effectiveness of a psychological treatment.
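Both indices have simple definitions, so a short Python sketch may help; the phase data below are hypothetical, and the paper's simulation-based comparison is not reproduced.

```python
import numpy as np

def pnd(baseline, treatment, increase_expected=True):
    """Percentage of nonoverlapping data: share of treatment points beyond
    the most extreme baseline point, in the expected direction of change."""
    baseline, treatment = np.asarray(baseline), np.asarray(treatment)
    if increase_expected:
        return 100.0 * np.mean(treatment > baseline.max())
    return 100.0 * np.mean(treatment < baseline.min())

def pem(baseline, treatment, increase_expected=True):
    """Percentage of treatment data points exceeding the baseline median."""
    baseline, treatment = np.asarray(baseline), np.asarray(treatment)
    med = np.median(baseline)
    if increase_expected:
        return 100.0 * np.mean(treatment > med)
    return 100.0 * np.mean(treatment < med)

a = [3, 4, 3, 5, 4]          # baseline phase
b = [6, 7, 5, 8, 7, 9]       # treatment phase
print(pnd(a, b), pem(a, b))  # 83.33..., 100.0
```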


2014, Vol. 38(6), pp. 878-913
Author(s): Rumen Manolov, Vicenta Sierra, Antonio Solanas, Juan Botella

In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. To that end, we propose augmenting visual analysis so that decisions about the existence of a functional relation are made more consistently, without losing sight of the need to evaluate methodologically which stimuli and reinforcement or punishment contingencies are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications compare the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications comprise nine data patterns varying in the presence and type of effect, in both ABAB and multiple-baseline designs. Although none of the techniques is flawless at detecting a functional relation only when one is present, an option based on projecting the split-middle trend and accounting for data variability, as in exploratory data analysis, proves to be the best performer for most data patterns. We suggest that meta-analyses should report whether a functional relation has been demonstrated. It is also possible to weight studies by the inverse of the data variability measure used in the quantification when assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
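As a rough sketch of the core step of the best-performing option, the following Python code fits a split-middle trend to the baseline and projects it across the treatment phase. The paper additionally incorporates a variability band in the spirit of exploratory data analysis, which is omitted here, and the data below are hypothetical.

```python
import numpy as np

def split_middle_projection(baseline, n_treatment):
    """Fit a split-middle trend line to the baseline and project it
    across the treatment phase."""
    y = np.asarray(baseline, dtype=float)
    t = np.arange(len(y))
    half = len(y) // 2
    # Median time point and median value within each half of the baseline.
    x1, y1 = np.median(t[:half]), np.median(y[:half])
    x2, y2 = np.median(t[half:]), np.median(y[half:])
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    t_treat = np.arange(len(y), len(y) + n_treatment)
    return intercept + slope * t_treat

baseline = [2, 3, 3, 4, 4, 5]
treatment = [8, 9, 9, 10, 11, 12]
projected = split_middle_projection(baseline, len(treatment))
# Treatment points above the projection suggest a change beyond the baseline trend.
print(np.mean(np.asarray(treatment) > projected))  # 1.0
```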


2016, Vol. 38(3), pp. 131-144
Author(s): Matthew E. Brock, Erik W. Carter

Teachers and paraprofessionals need effective training to improve their implementation of interventions for students with disabilities. Reviews of the single-case design literature have identified features associated with effective training for these educators, but the group-design literature has received little attention. This meta-analysis systematically reviews group-design studies testing the efficacy of training to improve implementation of interventions for students with disabilities. The mean effect size of educator training on implementation fidelity was g = 1.08, and meta-regression results suggest that training combining two specific strategies (i.e., modeling and performance feedback) was associated with improved implementation fidelity. Increased training duration was not associated with larger effects. Considered alongside findings from the single-case design literature, these results suggest that how educators are trained matters more than the number of hours they spend in training.
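For readers unfamiliar with the effect size reported above, here is a minimal Python sketch of Hedges' g (a standardized mean difference with a small-sample correction). The fidelity scores are hypothetical, and the study's own computation may differ in its details.

```python
import numpy as np

def hedges_g(treatment, control):
    """Hedges' g: standardized mean difference with a small-sample correction."""
    t, c = np.asarray(treatment, dtype=float), np.asarray(control, dtype=float)
    n_t, n_c = len(t), len(c)
    # Pooled standard deviation across the two groups.
    sp = np.sqrt(((n_t - 1) * t.var(ddof=1) + (n_c - 1) * c.var(ddof=1))
                 / (n_t + n_c - 2))
    d = (t.mean() - c.mean()) / sp
    # Small-sample correction factor J = 1 - 3 / (4 * df - 1), df = n_t + n_c - 2.
    j = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)
    return j * d

# Hypothetical implementation-fidelity scores (% of intervention steps delivered).
trained = [85, 90, 78, 92, 88]
untrained = [60, 72, 65, 70, 68]
print(f"g = {hedges_g(trained, untrained):.2f}")
```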

