Automatic item generation of probability word problems

2009 ◽  
Vol 35 (2-3) ◽  
pp. 71-76 ◽  
Author(s):  
Heinz Holling ◽  
Jonas P. Bertling ◽  
Nina Zeuch

2021 ◽  
Author(s):  
Mark J. Gierl ◽  
Hollis Lai ◽  
Vasily Tanygin

2016 ◽  
Vol 80 (3) ◽  
pp. 339-347 ◽  
Author(s):  
Hollis Lai ◽  
Mark J. Gierl ◽  
B. Ellen Byrne ◽  
Andrew I. Spielman ◽  
David M. Waldschmidt

2020 ◽  
pp. 016327872090891 ◽  
Author(s):  
Eric Shappell ◽  
Gregory Podolej ◽  
James Ahn ◽  
Ara Tekian ◽  
Yoon Soo Park

Mastery learning assessments have been described in simulation-based educational interventions; however, studies applying mastery learning to multiple-choice tests (MCTs) are lacking. This study investigates an approach to item generation and standard setting for mastery learning MCTs and evaluates the consistency of learner performance across sequential tests. Item models, variables for question stems, and mastery standards were established using a consensus process. Two test forms were created from the item models and administered at two training programs. The primary outcome, the test–retest consistency of pass–fail decisions across versions of the test, was 94% (κ = .54). The decision-consistency classification index was .85. Item-level consistency was 90% (κ = .77, SE = .03). These findings support the use of automatic item generation to create mastery MCTs that produce consistent pass–fail decisions. The technique broadens the range of assessment methods available to educators whose curricula require serial MCT testing, including mastery learning curricula.
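
As a rough illustration of the agreement statistics reported above, the Python sketch below computes percent agreement and unweighted Cohen's kappa for pass–fail decisions across two generated test forms. The learner data are invented and the calculation is the standard kappa formula, not a reconstruction of the authors' analysis.

```python
def pass_fail_agreement(form_a, form_b):
    """Percent agreement and unweighted Cohen's kappa for pass/fail decisions."""
    n = len(form_a)
    observed = sum(a == b for a, b in zip(form_a, form_b)) / n
    # Expected chance agreement from the marginal pass rates on each form.
    p_a, p_b = sum(form_a) / n, sum(form_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical pass/fail outcomes for ten learners on two generated forms.
form_a = [True, True, True, False, True, True, True, False, True, True]
form_b = [True, True, True, False, True, True, False, False, True, True]
print(pass_fail_agreement(form_a, form_b))  # e.g. (0.9, 0.74)
```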


2016 ◽  
Vol 37 (3) ◽  
pp. 39-61 ◽  
Author(s):  
Mark J. Gierl ◽  
Hollis Lai

Testing agencies require large numbers of high-quality items that are produced in a cost-effective and timely manner. Increasingly, these agencies also require items in different languages. In this paper we present a methodology for multilingual automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer technology. We describe a three-step AIG approach where, first, test development specialists identify the content that will be used for item generation. Next, the specialists create item models to specify the content in the assessment task that must be manipulated to produce new items. Finally, elements in the item model are manipulated with computer algorithms to produce new items. Language is added in the item model step to permit multilingual AIG. We illustrate our method by generating 360 English and 360 French medical education items. The importance of item banking in multilingual test development is also discussed.
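
The three-step workflow summarized above (content identification, item modelling, and algorithmic manipulation of model elements) can be sketched as a small template-based generator. The bilingual stem, elements, and values below are invented placeholders, not the authors' medical-education item models.

```python
from itertools import product

# Hypothetical item model: a bilingual stem with two manipulable elements.
ITEM_MODEL = {
    "en": "A patient presents with {symptom}. Which {drug_class} is most appropriate?",
    "fr": "Un patient se présente avec {symptom}. Quel {drug_class} est le plus approprié ?",
}

# Element values for each language; these are illustrative assumptions only.
ELEMENTS = {
    "symptom": {"en": ["chest pain", "shortness of breath"],
                "fr": ["une douleur thoracique", "un essoufflement"]},
    "drug_class": {"en": ["beta-blocker", "diuretic"],
                   "fr": ["bêta-bloquant", "diurétique"]},
}

def generate_items(language):
    """Step 3: manipulate every combination of element values to produce items."""
    names = list(ELEMENTS)
    value_sets = [ELEMENTS[name][language] for name in names]
    return [ITEM_MODEL[language].format(**dict(zip(names, combo)))
            for combo in product(*value_sets)]

if __name__ == "__main__":
    for lang in ("en", "fr"):
        for item in generate_items(lang):
            print(f"[{lang}] {item}")
```

Because the same element structure is reused across languages, each generated English item has a parallel French item, which is the property the multilingual item-banking discussion relies on.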


2006 ◽  
Vol 27 (1) ◽  
pp. 2-14 ◽  
Author(s):  
Martin Arendasy ◽  
Markus Sommer ◽  
Georg Gittler ◽  
Andreas Hergovich

This paper deals with three studies on the computer-based, automatic generation of algebra word problems. The cognitive-psychology-based generative and quality control frameworks of the item generator are presented. In Study I, the quality control framework is empirically tested using a first set of automatically generated items. Study II replicates the findings of Study I using a larger set of automatically generated algebra word problems. Study III deals with the generative framework of the item generator by testing construct validity aspects of the items it produces. Using nine Rasch-homogeneous subscales of the new intelligence structure battery (INSBAT; Hornke et al., 2004), a hierarchical confirmatory factor analysis is reported, which provides initial evidence of convergent as well as divergent validity of the automatically generated items. The paper concludes by discussing possible advantages of automatic item generation in general, ranging from test security and more precise psychological assessment to mass testing and the economics of test construction.
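
As a minimal sketch of what such a generative framework might involve, the following snippet fills a parameterised algebra word-problem template with constrained random numbers and computes the key alongside. The template, number ranges, and distractor rule are illustrative assumptions, not the actual generator evaluated in these studies.

```python
import random

def generate_algebra_item(seed=None):
    """Generate one hypothetical linear word problem with its key and distractors."""
    rng = random.Random(seed)
    price = rng.randint(2, 9)        # cost per notebook
    quantity = rng.randint(3, 8)     # how many notebooks the budget covers
    budget = price * quantity        # constraint keeps the answer a whole number
    stem = (f"A notebook costs {price} euros. "
            f"How many notebooks can you buy with {budget} euros?")
    key = budget // price
    # Simple distractors obtained by perturbing the key.
    distractors = [key - 1, key + 1, key + 2]
    return {"stem": stem, "key": key, "distractors": distractors}

if __name__ == "__main__":
    for i in range(3):
        print(generate_algebra_item(seed=i))
```

A quality control layer of the kind the paper describes would sit on top of such a generator, filtering or flagging items whose parameters violate psychometric or plausibility constraints.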

