How accurate are interpretations of curriculum-based measurement progress monitoring data? Visual analysis versus decision rules

2016 ◽  
Vol 58 ◽  
pp. 41-55 ◽  
Author(s):  
Ethan R. Van Norman ◽  
Theodore J. Christ


2017 ◽  
Vol 36 (1) ◽  
pp. 74-81 ◽  
Author(s):  
John M. Hintze ◽  
Craig S. Wells ◽  
Amanda M. Marcotte ◽  
Benjamin G. Solomon

This study examined the diagnostic accuracy of decision making as typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard error of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading growth of two words per week across 15 consecutive weeks. Results indicated that an unacceptably high proportion of cases were falsely identified as nonresponsive to intervention when a common 4-point decision rule was applied under typical levels of probe reliability. As reliability and the stringency of the decision rule increased, such errors decreased. The findings are particularly relevant to those who use a multi-tiered response-to-intervention model to evaluate formative changes associated with instructional intervention and to evaluate responsiveness to intervention across multiple tiers.
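The simulation logic the abstract describes can be sketched as a short Monte Carlo loop: generate noisy weekly scores around a known growth line and count how often a k-point rule misclassifies a truly responsive student. This is a minimal sketch, not the study's exact design; the baseline score, SEE value, and rule operationalization are illustrative assumptions.

```python
import random

def simulate_false_nonresponse_rate(n_sims=20_000, weeks=15, slope=2.0,
                                    see=8.0, rule_points=4, seed=1):
    """Monte Carlo sketch: a truly responsive student grows `slope`
    words per week, but each weekly CBM probe carries measurement
    error with standard error of estimate `see`.  The k-point rule
    flags the student as nonresponsive when the last `rule_points`
    observations all fall below the goal line (here, the true growth
    line), so every flag is a false identification."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_sims):
        start = 40.0  # hypothetical baseline words-correct-per-minute
        scores = [start + slope * w + rng.gauss(0, see) for w in range(weeks)]
        goal = [start + slope * w for w in range(weeks)]
        tail = range(weeks - rule_points, weeks)
        if all(scores[w] < goal[w] for w in tail):
            flagged += 1  # false "nonresponsive" decision
    return flagged / n_sims
```

Because each observation falls below the goal line with probability one half, a run of four consecutive low points occurs in roughly 6% of truly responsive cases under this simplified independence assumption; lower probe reliability (larger SEE) does not change that rate here, but it does in designs where the goal line is itself estimated from noisy data.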


2017 ◽  
Vol 43 (2) ◽  
pp. 110-120 ◽  
Author(s):  
Ethan R. Van Norman ◽  
David C. Parker

Recent simulations suggest that trend line decision rules applied to curriculum-based measurement of reading progress monitoring data may lead to inaccurate interpretations unless data are collected for upward of 3 months. The authors of those studies did not manipulate goal line slope or account for a student’s level of initial performance when evaluating the accuracy of progress monitoring decisions. We explored how long progress needs to be monitored before ineffective interventions can be accurately identified using actual data. We calculated classification accuracy statistics to evaluate the extent to which recommendations from three common and two novel decision rules correctly predicted spring performance across six levels of duration (8, 10, …, 18 weeks) and two goal types (normative and default spring benchmark). Comparing the median of the last three observations as well as current trend with expected performance at a given week consistently yielded higher positive agreement rates than data point or prediction-based decision rules. Decision rule performance improved as duration increased, but a point of diminishing returns was observed. Decisions based on normative goals yielded consistently higher chance-corrected agreement outcomes.
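Two of the decision rules compared above can be sketched as follows. This is an illustrative simplification with hypothetical data; the authors' exact operationalizations of the rules and goal lines may differ.

```python
import statistics

def median_rule(scores, goal_at_week):
    """Flag the intervention as ineffective when the median of the
    last three observations falls below the expected (goal-line)
    score for the current week."""
    return statistics.median(scores[-3:]) < goal_at_week

def trend_rule(scores, goal_at_week):
    """Fit an OLS trend to all observations so far and flag the
    intervention as ineffective when the trend's fitted value at the
    current week falls below the goal-line score."""
    n = len(scores)
    weeks = range(n)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    slope = (sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
             / sum((w - mean_w) ** 2 for w in weeks))
    intercept = mean_s - slope * mean_w
    fitted_now = intercept + slope * (n - 1)
    return fitted_now < goal_at_week

# Hypothetical series: a student gaining 2 words per week,
# measured against a goal of 18 at the current week.
scores = [10, 12, 14, 16, 18]
median_says_ineffective = median_rule(scores, 18)  # median of last 3 is 16
trend_says_ineffective = trend_rule(scores, 18)    # fitted value is 18
```

The example shows why the two rules can disagree on identical data: the median of the last three observations lags behind a steadily rising series, while the fitted trend keeps pace with it.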


Author(s):  
Evan H. Dart ◽  
Ethan R. Van Norman ◽  
David A. Klingbeil ◽  
Keith C. Radley

1992 ◽  
Vol 21 (2) ◽  
pp. 300-312
Author(s):  
Richard Parker ◽  
Gerald Tindal ◽  
Stephanie Stein

2017 ◽  
Vol 32 (1) ◽  
pp. 22-31 ◽  
Author(s):  
Dana L. Wagner ◽  
Stephanie M. Hammerschmidt-Snidarich ◽  
Christine A. Espin ◽  
Kathleen Seifert ◽  
Kristen L. McMaster

2017 ◽  
Vol 36 (1) ◽  
pp. 55-73 ◽  
Author(s):  
Theodore J. Christ ◽  
Christopher David Desjardins

Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR’s lack of validity and reliability and the imprecision of its ROI estimates, especially after brief durations of monitoring (6-10 weeks). This study illustrates and examines the use of Bayesian methods to estimate ROI. Conditions included four progress monitoring durations (6, 8, 10, and 30 weeks), two schedules of data collection (weekly, biweekly), and two ROI growth distributions that broadly corresponded with ROIs for general and special education populations. A Bayesian approach with alternate prior distributions for the ROIs is presented and explored. Results demonstrate that Bayesian estimates of ROI were more precise than OLSR estimates with comparable reliabilities, and Bayesian estimates were consistently within the plausible range of ROIs, in contrast to OLSR, which often produced unrealistic estimates. Results also showcase the influence the priors had on estimated ROIs and the potential dangers of prior distribution misspecification.
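The contrast between OLSR and a Bayesian slope estimate can be illustrated with a normal-normal conjugate update, which shrinks a noisy OLS slope toward a prior growth distribution. This is an illustrative simplification of the Bayesian approach, not the authors' full model; all numbers are hypothetical.

```python
def ols_slope(scores):
    """OLS rate of improvement (words per week) from weekly CBM-R scores."""
    n = len(scores)
    weeks = range(n)
    mean_w = (n - 1) / 2
    mean_s = sum(scores) / n
    sxx = sum((w - mean_w) ** 2 for w in weeks)
    sxy = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    return sxy / sxx

def bayes_roi(slope_hat, se_slope, prior_mean, prior_sd):
    """Normal-normal conjugate update: the posterior mean ROI is a
    precision-weighted average of the noisy OLS slope and the prior
    mean, pulling implausible slopes back toward the prior."""
    w_data = 1 / se_slope ** 2
    w_prior = 1 / prior_sd ** 2
    return (w_data * slope_hat + w_prior * prior_mean) / (w_data + w_prior)

# Hypothetical example: a short series yields a noisy OLS slope of
# 5.0 words/week (SE = 1.0); a prior of ROI ~ N(1.0, 1.0) for the
# general-education population shrinks it to a plausible 3.0.
shrunk = bayes_roi(5.0, 1.0, prior_mean=1.0, prior_sd=1.0)
```

With equal precision on the data and the prior, the posterior mean sits halfway between the two, which is the mechanism behind both findings in the abstract: estimates stay within the plausible range, but a misspecified prior pulls them toward the wrong value.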

