Michigan Educational Assessment Program and Curriculum-Based Measurement: Comparing Reading Performance

2002 ◽  
Author(s):  
Dawn van Dyke ◽  
Irene L. Middleton


2011 ◽  
Vol 71 (6) ◽  
pp. 909-924 ◽  
Author(s):  
George Engelhard

The purpose of this study is to describe a new approach for evaluating the judgments of standard-setting panelists within the context of the bookmark procedure. The bookmark procedure is widely used for setting performance standards on high-stakes assessments. A many-faceted Rasch (MFR) model is proposed for evaluating the bookmark judgments of the panelists, and its use is illustrated with standard-setting judgments from the Michigan Educational Assessment Program (MEAP). Panelists set three performance standards to create four performance levels (Apprentice, Basic, Met, and Exceeded). The content area used to illustrate the model is mathematics in Grade 3. The analyses suggest that the MFR model provides a promising framework for examining bookmark judgments.


2015 ◽  
Vol 55 (2) ◽  
pp. 169
Author(s):  
Calantha Tillotson

Melissa Bowles-Terry and Cassandra Kvenild present Classroom Assessment Techniques for Librarians as a toolbox for instruction librarians seeking to create an assessment program in their academic library. Beginning by providing a basic introduction to educational assessment theory, Bowles-Terry and Kvenild build a foundation of understanding with their fellow instruction librarians regarding what assessment means and why it should be used in any library instruction program.


2002 ◽  
Vol 26 (1-2) ◽  
pp. 32-47
Author(s):  
Alison Madelaine ◽  
Kevin Wheldall

Teacher judgment (TJ) is frequently employed as the basis for selecting students in need of specialist help in reading. Two studies are presented in which TJ is compared with a quick alternative deriving from curriculum-based measurement (CBM) that has been shown to be both highly reliable and valid. In the first study, 32 teachers of year two to year six classes were required to categorise their students into the top 25%, middle 50% and bottom 25% for reading performance. Compared with categorisation based on the more objective CBM measure, the mean accuracy of TJ was 67%, varying between 29% and 100%. In the second study, 24 teachers of year one to year five classes were required to categorise ten randomly selected students from their classes into the top three, middle four and bottom three for reading performance. Similar results were obtained with mean accuracy of TJ at 65%, varying between 20% and 100%. Taken together, the findings of the two studies suggest that reliance on TJ for instructional decision-making may be misplaced and that a more objective, quick alternative based on CBM may be preferable.
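The comparison at the heart of these two studies, teacher categories versus categories derived from a CBM score, can be sketched in a few lines. Everything below (scores, category labels, the exact quartile rule) is invented for illustration and is not the studies' actual instrument or data.

```python
# Hypothetical sketch: comparing teacher-judgment (TJ) categories with
# categories derived from a CBM reading score, as in the first study above.
# Scores and cut points are invented for illustration.

def cbm_categories(scores):
    """Assign each student to top 25% / middle 50% / bottom 25% by CBM score."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n = len(scores)
    cats = [None] * n
    for rank, i in enumerate(ranked):
        if rank < n * 0.25:
            cats[i] = "top"
        elif rank < n * 0.75:
            cats[i] = "middle"
        else:
            cats[i] = "bottom"
    return cats

def tj_accuracy(tj_cats, cbm_scores):
    """Proportion of students whose teacher category matches the CBM category."""
    cbm_cats = cbm_categories(cbm_scores)
    hits = sum(t == c for t, c in zip(tj_cats, cbm_cats))
    return hits / len(tj_cats)

# Example: 8 students, a teacher's guesses vs. oral-reading-fluency scores
tj = ["top", "top", "middle", "middle", "middle", "middle", "bottom", "bottom"]
wcpm = [142, 96, 120, 88, 75, 60, 70, 41]  # words correct per minute (made up)
print(round(tj_accuracy(tj, wcpm), 2))  # 0.5
```

A study's mean TJ accuracy would then be this agreement rate averaged over teachers; the wide 29% to 100% range reported above is simply the spread of these per-teacher rates.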


1972 ◽  
Vol 3 (2) ◽  
pp. 67-68
Author(s):  
Donald B. Sension

Large-scale educational assessment programs are the vogue in education today. The National Assessment of Educational Progress represents a new, creative approach to the longitudinal assessment problem. In addition, approximately three-fourths of the fifty state departments of education maintain some sort of statewide testing or assessment program. Many of the statewide programs, unlike the National Assessment Project, have long traditional histories. The New Hampshire statewide testing program (Austin and Provost, JRME, January 1972) is representative of these statewide programs.


Author(s):  
Alfons M. Strathmann ◽  
Karl Josef Klauer

Abstract. Using arithmetic in primary school as an example, a further development of the American "curriculum-based measurement" approach is demonstrated. Over a full year, 190 children from seven primary-school classes and three special-school classes took an arithmetic test every two weeks. The tests are random samples from item populations defined according to the learning objective for each school year. A new random sample is generated for each child and each testing occasion, so that no test is administered twice, yet each one measures the required skill in a content-valid way. Such tests can be represented as criterion-referenced binomial tests. The present paper (1) briefly presents the original concept and its further development, (2) empirically examines whether the new procedure is suitable for practical use by classroom teachers, and (3) offers perspectives on urgently needed follow-up research. The data presented make it possible to document the range of learning trajectories at both the class and the individual level, and also to lay open, critically, the problems and the promising possibilities of the approach.
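The criterion-referenced binomial test underlying this design can be sketched directly: because each test is a random sample of n items from the item population, the number correct is binomially distributed, and mastery of a criterion proportion can be tested with an exact one-sided binomial tail. The criterion p0, alpha level, and scores below are invented for illustration, not taken from the study.

```python
# Hypothetical sketch of a criterion-referenced binomial mastery test.
# Number correct ~ Binomial(n_items, true proportion mastered).

from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def mastery_test(correct, n_items, p0=0.8, alpha=0.05):
    """
    One-sided test of H0: true proportion >= p0 (mastery).
    Returns True if mastery is retained (H0 not rejected).
    """
    # Reject mastery only if this few correct answers would be
    # unlikely under the criterion proportion p0.
    p_value = binom_cdf(correct, n_items, p0)
    return p_value >= alpha

print(mastery_test(18, 20))  # 18/20 correct: consistent with mastery
print(mastery_test(11, 20))  # 11/20 correct: mastery rejected
```

Because a fresh random item sample is drawn for each child and occasion, the same test logic applies at every biweekly measurement point without item reuse.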


2000 ◽  
Vol 16 (2) ◽  
pp. 139-146 ◽  
Author(s):  
Padeliadu Susana ◽  
Georgios D. Sideridis

Abstract This study investigated the discriminant validation of the Test of Reading Performance (TORP), a new scale designed to evaluate the reading performance of elementary-school students. The sample consisted of 181 elementary-school students drawn from public elementary schools in northern Greece using stratified random procedures. The TORP was hypothesized to measure six constructs, namely: “letter knowledge,” “phoneme blending,” “word identification,” “syntax,” “morphology,” and “passage comprehension.” Using standard deviations (SD) from the mean, three groups of students were formed as follows: a group of low achievers in reading (N = 9), including students who scored between -1 and -1.5 SD below the mean of the group; a group of students at risk of reading difficulties (N = 6), including students who scored between -1.5 and -2 SD below the mean of the group; and a group of students at risk of serious reading difficulties (N = 6), including students who scored -2 or more SD below the mean of the group. The rest of the students (no risk, N = 122) comprised the fourth group. Discriminant analyses were used to evaluate how well the linear combination of the 15 variables comprising the TORP could discriminate among students of different reading ability. Results indicated that correct classification rates for low achievers, those at risk of reading problems, those at risk of serious reading problems, and the no-risk group were 89%, 100%, 83%, and 97%, respectively. Evidence for partial validation of the TORP was provided through the use of confirmatory factor analysis and indices of sensitivity and specificity. It is concluded that the TORP can be utilized for the identification of children at risk for low achievement in reading. Analysis of the misclassified cases indicated that increased variability might have been responsible for the existing misclassification.
More research is needed to determine the discriminant validation of TORP with samples of children with specific reading disabilities.
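The SD-band group-formation rule described in the abstract is mechanical enough to sketch. The cutoffs follow the abstract; the scores, group labels, and the use of the population standard deviation are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch of the SD-band grouping rule described above:
# students are banded by how far their total reading score falls
# below the group mean, in standard-deviation units. Scores invented.

from statistics import mean, pstdev

def risk_group(score, mu, sigma):
    """Assign a risk band from the z-score, using the abstract's cutoffs."""
    z = (score - mu) / sigma
    if z <= -2:
        return "at risk: serious reading difficulties"
    if z <= -1.5:
        return "at risk: reading difficulties"
    if z <= -1:
        return "low achiever"
    return "no risk"

scores = [55, 60, 62, 64, 65, 66, 68, 70, 72, 75, 30, 40, 45]
mu, sigma = mean(scores), pstdev(scores)
groups = [risk_group(s, mu, sigma) for s in scores]
```

A discriminant analysis would then ask how well the subtest scores recover these band assignments; the classification rates quoted above are the per-group hit rates of that recovery.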

