Curriculum-Based Measurement
Recently Published Documents


TOTAL DOCUMENTS

374
(FIVE YEARS 33)

H-INDEX

39
(FIVE YEARS 2)

2021 ◽  
pp. 153450842110402
Author(s):  
Benjamin G. Solomon ◽  
Ole J. Forsberg ◽  
Monelle Thomas ◽  
Brittney Penna ◽  
Katherine M. Weisheit

Bayesian regression has emerged as a viable alternative for the estimation of curriculum-based measurement (CBM) growth slopes. Preliminary findings suggest such methods may yield improved efficiency relative to other linear estimators and can be embedded into data management programs for high-frequency use. However, additional research is needed, as Bayesian estimators require multiple specifications of the prior distributions. The current study evaluates the accuracy of several combinations of prior values, including three distributions of the residuals, two values of the expected growth rate, and three possible values for the precision of slope when using Bayesian simple linear regression to estimate fluency growth slopes for reading CBM. We also included traditional ordinary least squares (OLS) as a baseline contrast. Findings suggest that the prior specification for the residual distribution had, on average, a trivial effect on the accuracy of the slope. However, specifications for growth rate and precision of slope were influential, and virtually all variants of Bayesian regression evaluated were superior to OLS. Converging evidence from both simulated and observed data now suggests Bayesian methods outperform OLS for estimating CBM growth slopes and should be strongly considered in research and practice.
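The contrast the abstract describes can be illustrated with a minimal sketch: under a conjugate Normal prior on the slope (with the residual SD treated as known, a simplification of the study's fuller prior specifications), the posterior slope is a precision-weighted average of the prior growth rate and the OLS estimate. The numbers, prior values, and function names below are illustrative assumptions, not taken from the study.

```python
import numpy as np

def ols_slope(weeks, scores):
    """Ordinary least squares slope (e.g., words correct per week)."""
    x = np.asarray(weeks, float)
    y = np.asarray(scores, float)
    xc = x - x.mean()
    return float(np.dot(xc, y) / np.dot(xc, xc))

def bayes_slope(weeks, scores, prior_mean=1.0, prior_precision=1.0, noise_sd=10.0):
    """Posterior mean of the slope under a conjugate Normal prior,
    assuming the residual SD (noise_sd) is known."""
    x = np.asarray(weeks, float)
    y = np.asarray(scores, float)
    xc = x - x.mean()
    data_precision = np.dot(xc, xc) / noise_sd ** 2  # information in the data
    data_slope = np.dot(xc, y) / np.dot(xc, xc)      # OLS estimate
    # Precision-weighted average of the prior growth rate and the data estimate
    post_precision = prior_precision + data_precision
    return float((prior_precision * prior_mean + data_precision * data_slope)
                 / post_precision)

# Ten weekly probes with a true growth rate of 1.5 (simulated)
weeks = np.arange(10)
scores = 40 + 1.5 * weeks + np.random.default_rng(0).normal(0, 10, 10)
print(ols_slope(weeks, scores), bayes_slope(weeks, scores, prior_mean=1.5,
                                            prior_precision=0.5))
```

With few, noisy data points the prior pulls the estimate toward the expected growth rate, which is the mechanism behind the efficiency gains reported; as the prior precision shrinks toward zero, the Bayesian estimate converges to OLS.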


Author(s):  
Evan H. Dart ◽  
Ethan R. Van Norman ◽  
David A. Klingbeil ◽  
Keith C. Radley

2021 ◽  
Author(s):  
Milena A. Keller-Margulis ◽  
Sterett Mercer ◽  
Michael Matta

Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated essay scoring, as well as written expression curriculum-based measurement (WE-CBM), to determine whether an automated approach improves technical adequacy. A sample of 140 fourth-grade students generated writing samples that were scored using traditional and automated approaches and compared with the statewide measure of writing performance. Results indicated that the validity and diagnostic accuracy of the best-performing WE-CBM metric, correct minus incorrect word sequences (CIWS), and the automated scoring approaches were comparable, with the automated approaches offering potentially improved feasibility for screening. Averaging scores across three time points was necessary, however, to achieve improved validity and adequate levels of diagnostic accuracy across the scoring approaches. Limitations, implications, and directions for future research regarding the use of automated scoring approaches for screening are discussed.
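For readers unfamiliar with the CIWS metric, a toy sketch of the idea follows. It treats a "word sequence" as a pair of adjacent words and counts a pair correct only if both words are correctly spelled; real WE-CBM scoring also checks grammar, capitalization, and punctuation, and the tiny lexicon here stands in for a full spell checker.

```python
# Toy lexicon standing in for a real spell checker (an assumption of this sketch)
LEXICON = {"the", "dog", "ran", "fast", "to", "park"}

def ciws(sample, lexicon=LEXICON):
    """Toy correct-minus-incorrect word sequences score.

    A 'sequence' is a pair of adjacent words; here a pair counts as
    correct when both words appear in the lexicon. Actual WE-CBM
    scoring applies additional grammatical and mechanical criteria.
    """
    words = sample.lower().split()
    pairs = list(zip(words, words[1:]))
    cws = sum(1 for a, b in pairs if a in lexicon and b in lexicon)
    iws = len(pairs) - cws
    return cws - iws

# One misspelling ("fsat") breaks two adjacent sequences
print(ciws("the dog ran fsat to the park"))
```

Because each misspelling disrupts the sequences on both sides of it, CIWS penalizes errors more heavily than simple word counts, which is part of why it tends to be the strongest traditional WE-CBM metric.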


2021 ◽  
pp. 002221942199710
Author(s):  
Christine A. Espin ◽  
Roxette M. van den Bosch ◽  
Marijke van der Liende ◽  
Ralph C. A. Rippe ◽  
Melissa Beutick ◽  
...  

The purpose of this study was to examine the amount of attention devoted to data-based decision-making in Curriculum-Based Measurement (CBM) professional development materials. Sixty-nine CBM instructional sources were reviewed, including 45 presentations, 22 manuals, and two books. The content of the presentations and manuals/books was coded into one of four categories: (a) general CBM information, (b) conducting CBM, (c) data-based decision-making, and (d) other. Results revealed that only a small proportion of information in the CBM instructional materials was devoted to data-based decision-making (12% for presentations and 14% for manuals/books), and that this proportion was significantly smaller than (a) the proportion devoted to other instructional topics, (b) the proportion expected if information were distributed equally across major instructional topics, and (c) the proportion recommended by experienced CBM trainers. Results suggest a need for increased attention to data-based decision-making in CBM professional development.
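The comparison to an equal distribution can be sketched as a simple goodness-of-fit check on the coded category counts. The counts below are hypothetical, invented for illustration, and the statistic shown is a plain chi-square computation rather than the study's actual analysis.

```python
def category_proportions(counts):
    """Proportion of coded content units falling in each category."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def chi_square_equal(counts):
    """Goodness-of-fit statistic against an equal split across
    categories (compare to a chi-square table with df = k - 1)."""
    obs = list(counts.values())
    exp = sum(obs) / len(obs)
    return sum((o - exp) ** 2 / exp for o in obs)

# Hypothetical coded-unit counts for one set of materials
counts = {"general CBM info": 30, "conducting CBM": 45,
          "data-based decision-making": 12, "other": 13}
print(category_proportions(counts)["data-based decision-making"])
print(chi_square_equal(counts))
```

With four categories, an equal split would put 25% of content in each; a large chi-square value indicates the observed allocation departs from that expectation, as the study reports for data-based decision-making.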


2021 ◽  
pp. 082957352098775
Author(s):  
Sterett H. Mercer ◽  
Joanna E. Cannon ◽  
Bonita Squires ◽  
Yue Guo ◽  
Ella Pinco

We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be accurately used to computer score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization completed narrative writing samples in the fall and spring across two academic years. The samples were evaluated using four automated and hand-calculated WE-CBM scoring metrics. Results indicated automated and hand-calculated scores were highly correlated at all four timepoints for counts of total words written (rs = 1.00), words spelled correctly (rs = .99–1.00), correct word sequences (CWS; rs = .96–.97), and correct minus incorrect word sequences (CIWS; rs = .86–.92). For CWS and CIWS, however, automated scores systematically overestimated hand-calculated scores, with an unacceptable amount of error for CIWS for some types of decisions. These findings provide preliminary evidence that aWE-CBM can be used to efficiently score narrative writing samples, potentially improving the feasibility of implementing multi-tiered systems of support in which the written expression skills of large numbers of students are screened and monitored.
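The pattern reported here, high correlation combined with systematic overestimation, corresponds to checking two separate quantities: Pearson's r and the mean signed difference (bias). A minimal sketch, using invented scores rather than the study's data:

```python
import numpy as np

def agreement(hand, auto):
    """Pearson correlation plus mean signed difference.

    A high r with a positive bias means automated scores track
    hand-calculated scores closely but run systematically high,
    the pattern reported for CWS and CIWS.
    """
    hand = np.asarray(hand, float)
    auto = np.asarray(auto, float)
    r = float(np.corrcoef(hand, auto)[0, 1])
    bias = float(np.mean(auto - hand))
    return r, bias

hand = [10, 14, 20, 25, 31]   # hypothetical hand-calculated CIWS scores
auto = [12, 16, 23, 27, 34]   # hypothetical automated scores
r, bias = agreement(hand, auto)
```

A correlation alone cannot reveal this kind of error: two score sets can correlate near 1.00 while one is uniformly inflated, which matters whenever decisions depend on absolute cut scores rather than rank order.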



2020 ◽  
pp. 153450842097845
Author(s):  
Sarah J. Conoyer ◽  
William J. Therrien ◽  
Kristen K. White

Meta-analysis was used to examine curriculum-based measurement in the content areas of social studies and science. Nineteen studies between the years of 1998 and 2020 were reviewed to determine the overall mean correlation for criterion validity and to examine alternate-form reliability and slope coefficients. An overall mean correlation of .59 was found for criterion validity; however, there was significant heterogeneity across studies, suggesting that curriculum-based measure (CBM) format or content area may affect findings. Alternate-form reliability coefficients across CBM formats ranged from low to high (.21 to .89). Studies investigating slopes included mostly vocabulary-matching formats and reported a range from .12 to .65 correct items per week, with a mean of .34. Our findings suggest that additional research on the validity, reliability, and slope characteristics of these measures is warranted.
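Pooling correlations across studies is typically done on the Fisher z scale, weighting each study by sample size. The sketch below is a fixed-effect version for illustration only; a meta-analysis like this one would also model the between-study heterogeneity it reports, and the (r, n) pairs are hypothetical.

```python
import math

def mean_correlation(studies):
    """Fisher z-transformed, sample-size weighted mean correlation.

    Fixed-effect sketch: transform each r to z, weight by n - 3
    (the inverse variance of z), average, and back-transform.
    """
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher r-to-z transform
        w = n - 3           # inverse variance of z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean to r

# Hypothetical (r, n) pairs from criterion-validity studies
studies = [(0.55, 60), (0.62, 45), (0.59, 80)]
print(round(mean_correlation(studies), 2))
```

The z transform is used because the sampling distribution of r is skewed near its bounds; averaging raw correlations would bias the pooled estimate, especially for the larger values in the .21–.89 reliability range.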

