Generalizability theory reliability of written expression curriculum-based measurement in universal screening.

2016, Vol. 31(3), pp. 383-392
Author(s): Milena A. Keller-Margulis, Sterett H. Mercer, Erin L. Thomas

2004, Vol. 33(2), pp. 218-233
Author(s): Scott P. Ardoin, Joseph C. Witt, Shannon M. Suldo, James E. Connell, Jennifer L. Koenig, et al.

2016, Vol. 31(3), pp. 369-382
Author(s): William M. Furey, Amanda M. Marcotte, John M. Hintze, Caroline M. Shackett

2021
Author(s): Milena A. Keller-Margulis, Sterett Mercer, Michael Matta

Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use in screening. This study examined the validity and diagnostic accuracy of several automated essay scoring approaches and written expression curriculum-based measurement (WE-CBM) to determine whether an automated approach improves technical adequacy. A sample of 140 fourth-grade students generated writing samples that were scored using traditional and automated approaches and examined in relation to the statewide measure of writing performance. Results indicated that the validity and diagnostic accuracy of the best-performing WE-CBM metric, correct minus incorrect word sequences (CIWS), and the automated scoring approaches were comparable, with the automated approaches offering potentially improved feasibility for screening. Averaging scores across three time points was necessary, however, to achieve improved validity and adequate diagnostic accuracy across the scoring approaches. Limitations, implications, and directions for future research regarding the use of automated scoring approaches for screening are discussed.
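
For readers unfamiliar with the metric, here is a minimal sketch of how CIWS and the three-occasion average described above could be computed, assuming hand-scored counts of correct and incorrect word sequences are already available. The counts and the `ciws` helper are hypothetical illustrations, not the study's procedure or data.

```python
from statistics import mean

def ciws(correct_sequences: int, incorrect_sequences: int) -> int:
    """Correct minus incorrect word sequences (CIWS) for one writing sample."""
    return correct_sequences - incorrect_sequences

# Hypothetical (correct, incorrect) word-sequence counts for one student
# across three screening occasions (fall, winter, spring).
occasions = [(45, 12), (52, 9), (60, 7)]

scores = [ciws(c, i) for c, i in occasions]
average_score = mean(scores)  # averaging across time points, as the study required
print(scores, average_score)  # e.g. [33, 43, 53] and 43
```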


Author(s): Juan Jiménez, Sara del Cristo de León

This study analyses the factor structure of the Mathematics Progress Indicators (IPAM) using Confirmatory Factor Analysis (CFA). For this purpose, a longitudinal study was carried out with a sample of 176 first-grade students from the Canary Islands; the IPAM was administered to the sample three times over the school year (i.e., fall, winter, spring). The IPAM is a Curriculum-Based Measurement (CBM) instrument composed of three alternate or parallel forms (A, B, and C) intended to measure the same latent structure (i.e., number sense). Its main objective is the universal screening and evaluation of students' progress in mathematics learning in the elementary grades through the assessment of fluency on different tasks (i.e., magnitude comparison, one-digit operations, two-digit operations, missing number, and place value). Fluency is measured by counting the number of correct answers given by the student within a set time. The results of the CFA confirm a good fit of the proposed model at each measurement occasion.
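
As a rough illustration of the fluency scoring rule described above (counting correct answers given within a set time), the sketch below scores a hypothetical timed probe. The probe items, response timestamps, and the `fluency_score` helper are assumptions for illustration; the actual IPAM administration and scoring procedure may differ.

```python
def fluency_score(responses, answer_key, time_limit_s, timestamps):
    """Count correct answers given within the time limit (a CBM fluency score)."""
    return sum(
        1
        for resp, key, t in zip(responses, answer_key, timestamps)
        if t <= time_limit_s and resp == key
    )

# Hypothetical one-digit-operations probe: the student's answers and the
# elapsed seconds at which each answer was given.
answer_key = [7, 5, 12, 9, 6]
responses  = [7, 4, 12, 9, 6]
times_s    = [10, 25, 40, 55, 70]

print(fluency_score(responses, answer_key, time_limit_s=60, timestamps=times_s))
# -> 3 (one error and one answer past the time limit are excluded)
```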


2018, Vol. 45(1), pp. 51-64
Author(s): Anita M. Payan, Milena Keller-Margulis, Andrea B. Burridge, Samuel D. McQuillin, Kristen S. Hassett

National test data indicate that some students do not perform well in writing, suggesting a need to identify students at risk for poor performance. Research supports Written Expression Curriculum-Based Measurement (WE-CBM) as an indicator of writing proficiency, but it is less commonly used in practice. This study examined the usability of WE-CBM compared with Reading Curriculum-Based Measurement (R-CBM). Participants included 162 teachers who were given examples of WE-CBM and R-CBM and then completed a usability measure for both curriculum-based measurement (CBM) types. Teachers not only rated WE-CBM as usable but also rated R-CBM significantly higher in usability, with no significant differences in acceptability. Practical implications that may inform modifications to WE-CBM are discussed.
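
The reported result (R-CBM rated significantly higher in usability by the same teachers who rated WE-CBM) implies a paired comparison of ratings. A minimal sketch of such an analysis follows, using simulated ratings; the rating scale, effect size, and variable names are assumptions, not the study's data or its reported statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_teachers = 162

# Simulated usability ratings: each teacher rates both CBM types,
# with R-CBM rated somewhat higher on average.
we_cbm = rng.normal(loc=4.0, scale=0.8, size=n_teachers)
r_cbm = we_cbm + rng.normal(loc=0.4, scale=0.5, size=n_teachers)

# Paired t-test, since the same teachers rated both measures.
t_stat, p_value = stats.ttest_rel(r_cbm, we_cbm)
print(f"t({n_teachers - 1}) = {t_stat:.2f}, p = {p_value:.3g}")
```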

