Concurrent validity and classification accuracy of curriculum-based measurement for written expression.

2016, Vol 31 (3), pp. 369-382. Author(s): William M. Furey, Amanda M. Marcotte, John M. Hintze, Caroline M. Shackett

Diagnostique, 1997, Vol 23 (1), pp. 203-211. Author(s): Cynthia A. Riccio, Candace H. Boan, Deborah Staniszewski, George W. Hynd

2017, Vol 43 (3), pp. 131-143. Author(s): Pyung-Gang Jung, Kristen L. McMaster

We examined the classification accuracy of Curriculum-Based Measurement in writing (CBM-W) Picture Word prompts scored for words written (WW), words spelled correctly (WSC), and correct word sequences (CWS). First graders (n = 133) were administered CBM-W prompts and the Test of Written Language–Third Edition (TOWL-3; Hammill & Larsen, 1996). Prompts scored for WSC showed acceptable levels of sensitivity (.947) and specificity (.587) against the TOWL-3 Contextual Language criterion. Positive predictive values were low (approximately .20 to .30), and negative predictive values were high (mostly above .95). Overall classification accuracy, represented by the area under the curve (AUC), ranged from .727 to .831. Further research regarding ways to improve the classification accuracy of CBM-W and preliminary implications for practice are discussed.
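
To make the reported indices concrete, the sketch below shows how sensitivity, specificity, and positive and negative predictive values follow from a 2 × 2 table of screening decisions against a criterion measure. The confusion-matrix counts are hypothetical values chosen only to roughly echo the magnitudes reported above; they are not data from the study.

```python
# Minimal sketch of the classification-accuracy indices named above.
# The 2 x 2 counts are hypothetical, chosen only to echo the reported magnitudes.

def classification_indices(tp, fp, tn, fn):
    """Return sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # proportion of truly at-risk students flagged
    specificity = tn / (tn + fp)   # proportion of not-at-risk students cleared
    ppv = tp / (tp + fp)           # flagged students who are truly at risk
    npv = tn / (tn + fn)           # cleared students who are truly not at risk
    return sensitivity, specificity, ppv, npv

# Hypothetical screening outcomes for 133 students against a criterion measure.
sens, spec, ppv, npv = classification_indices(tp=18, fp=47, tn=67, fn=1)
print(f"sensitivity={sens:.3f}  specificity={spec:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

AUC, by contrast, is computed from the full distribution of scores rather than a single cut point, which is why it is reported separately from the cut-score-dependent indices above.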


2021. Author(s): Milena A. Keller-Margulis, Sterett Mercer, Michael Matta

Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use in screening. This study examined the validity and diagnostic accuracy of several automated essay scoring approaches, as well as written expression curriculum-based measurement (WE-CBM), to determine whether an automated approach improves technical adequacy. A sample of 140 fourth-grade students generated writing samples that were scored using traditional and automated approaches and examined in relation to the statewide measure of writing performance. Results indicated that validity and diagnostic accuracy for the best-performing WE-CBM metric, correct minus incorrect word sequences (CIWS), and for the automated scoring approaches were comparable, with the automated approaches offering potentially improved feasibility for screening. Averaging scores across three time points was necessary, however, to achieve improved validity and adequate diagnostic accuracy across the scoring approaches. Limitations, implications, and directions for future research regarding the use of automated scoring approaches for screening are discussed.
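
As a rough illustration of the hand-scored metric named above, the following sketch computes correct minus incorrect word sequences (CIWS) from per-sample counts and averages one student's scores across three screening occasions, mirroring the averaging the study found necessary. The counts and the simplified definition (every adjacent pair of written units counts as one word sequence) are assumptions for illustration, not the study's scoring procedure.

```python
from statistics import mean

# Illustrative sketch only: CIWS is correct word sequences (CWS) minus incorrect
# word sequences (IWS), where every pair of adjacent written units counts as one
# word sequence. The counts below are hypothetical, not data from the study.

def ciws(correct_sequences, total_sequences):
    """Correct minus incorrect word sequences for one writing sample."""
    incorrect_sequences = total_sequences - correct_sequences
    return correct_sequences - incorrect_sequences

# One student's hypothetical scores at three screening occasions.
occasion_scores = [ciws(28, 40), ciws(35, 44), ciws(41, 48)]

# Averaging across occasions, as the study found necessary for adequate
# validity and diagnostic accuracy.
print(occasion_scores, round(mean(occasion_scores), 1))
```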


2018, Vol 45 (1), pp. 51-64. Author(s): Anita M. Payan, Milena Keller-Margulis, Andrea B. Burridge, Samuel D. McQuillin, Kristen S. Hassett

National test data indicate that some students do not perform well in writing, suggesting a need to identify students at risk for poor performance. Research supports Written Expression Curriculum-Based Measurement (WE-CBM) as an indicator of writing proficiency, but it is less commonly used in practice. This study examined the usability of WE-CBM compared with Reading Curriculum-Based Measurement (R-CBM). Participants were 162 teachers who reviewed examples of WE-CBM and R-CBM and then completed a usability measure for both curriculum-based measurement (CBM) types. Teachers rated WE-CBM as usable but rated R-CBM significantly higher in usability; there were no significant differences in acceptability. Practical implications that may inform modifications to WE-CBM are discussed.


2013, Vol 51 (1), pp. 85-96. Author(s): Jamie Y. Fearrington, Patricia D. Parker, Pamela Kidder-Ashley, Sandra G. Gagnon, Sara McCane-Bowling, ...

2021, pp. 082957352098775. Author(s): Sterett H. Mercer, Joanna E. Cannon, Bonita Squires, Yue Guo, Ella Pinco

We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be used to accurately computer-score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization completed narrative writing samples in the fall and spring across two academic years. The samples were evaluated using four automated and hand-calculated WE-CBM scoring metrics. Results indicated that automated and hand-calculated scores were highly correlated at all four time points for counts of total words written (rs = 1.00), words spelled correctly (rs = .99–1.00), correct word sequences (CWS; rs = .96–.97), and correct minus incorrect word sequences (CIWS; rs = .86–.92). For CWS and CIWS, however, automated scores systematically overestimated hand-calculated scores, with an unacceptable amount of error for CIWS for some types of decisions. These findings provide preliminary evidence that aWE-CBM can efficiently score narrative writing samples, potentially improving the feasibility of implementing multi-tiered systems of support in which the written expression skills of large numbers of students are screened and monitored.
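
A minimal sketch of the kind of agreement check described above: it computes the Pearson correlation between paired automated and hand-calculated scores and the mean signed difference, which is where systematic overestimation would show up even when the correlation is high. The paired scores are invented for illustration and are not the study's data.

```python
import math

# Hypothetical paired scores for one metric (e.g., CWS) at one time point;
# not data from the study.
hand = [12, 18, 25, 31, 40, 47, 55]
auto = [13, 20, 27, 33, 43, 50, 59]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A high correlation can coexist with systematic bias, so the mean signed
# difference (automated minus hand-calculated) is also worth inspecting.
bias = sum(a - h for a, h in zip(auto, hand)) / len(hand)
print(f"r = {pearson(hand, auto):.3f}, mean bias = {bias:+.2f}")
```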

