Advanced Methods in Automatic Item Generation
2021 ◽ Author(s): Mark J. Gierl, Hollis Lai, Vasily Tanygin

2016 ◽ Vol 80 (3) ◽ pp. 339-347 ◽ Author(s): Hollis Lai, Mark J. Gierl, B. Ellen Byrne, Andrew I. Spielman, David M. Waldschmidt

2020 ◽ pp. 016327872090891 ◽ Author(s): Eric Shappell, Gregory Podolej, James Ahn, Ara Tekian, Yoon Soo Park

Mastery learning assessments have been described in simulation-based educational interventions; however, studies applying mastery learning to multiple-choice tests (MCTs) are lacking. This study investigates an approach to item generation and standard setting for mastery learning MCTs and evaluates the consistency of learner performance across sequential tests. Item models, variables for question stems, and mastery standards were established using a consensus process. Two test forms were created from the item models and administered at two training programs. The primary outcome, the test–retest consistency of pass–fail decisions across versions of the test, was 94% (κ = .54). Decision-consistency classification was .85; item-level consistency was 90% (κ = .77, SE = .03). These findings support the use of automatic item generation to create mastery MCTs that produce consistent pass–fail decisions. This technique broadens the range of assessment methods available to educators in settings that require serial MCT testing, such as mastery learning curricula.
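The consistency statistics reported above (raw agreement plus Cohen's κ on pass–fail decisions) can be illustrated with a minimal sketch. The function and the two decision lists below are invented for illustration, not the study's data.

```python
from collections import Counter

def pass_fail_consistency(form_a, form_b):
    """Raw agreement and Cohen's kappa for pass/fail decisions on two test forms."""
    assert len(form_a) == len(form_b)
    n = len(form_a)
    # Observed agreement: proportion of learners with the same decision on both forms.
    observed = sum(x == y for x, y in zip(form_a, form_b)) / n
    # Expected chance agreement: sum over categories of the product of marginal rates.
    pa, pb = Counter(form_a), Counter(form_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in ("pass", "fail"))
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical decisions for 20 learners across two parallel forms.
form_a = ["pass"] * 17 + ["fail"] * 3
form_b = ["pass"] * 16 + ["fail", "pass"] + ["fail"] * 2
obs, kappa = pass_fail_consistency(form_a, form_b)
```

κ corrects the raw agreement for the agreement expected by chance alone, which is why it is reported alongside the 94% figure in the abstract.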


2009 ◽ Vol 35 (2-3) ◽ pp. 71-76 ◽ Author(s): Heinz Holling, Jonas P. Bertling, Nina Zeuch

2016 ◽ Vol 37 (3) ◽ pp. 39-61 ◽ Author(s): Mark J. Gierl, Hollis Lai

Testing agencies require large numbers of high-quality items that are produced in a cost-effective and timely manner. Increasingly, these agencies also require items in different languages. In this paper we present a methodology for multilingual automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer technology. We describe a three-step AIG approach where, first, test development specialists identify the content that will be used for item generation. Next, the specialists create item models to specify the content in the assessment task that must be manipulated to produce new items. Finally, elements in the item model are manipulated with computer algorithms to produce new items. Language is added in the item model step to permit multilingual AIG. We illustrate our method by generating 360 English and 360 French medical education items. The importance of item banking in multilingual test development is also discussed.
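The three-step process described in this abstract — content identification, an item model with manipulable elements, and algorithmic manipulation — can be sketched as template filling. The stems, element values, and French/English strings below are invented placeholders, not items from the paper.

```python
from itertools import product

# Hypothetical bilingual item model: one stem per language, shared placeholders.
stems = {
    "en": "A patient reports {symptom} after starting {drug}. What is the most likely cause?",
    "fr": "Un patient signale {symptom} après avoir commencé {drug}. Quelle est la cause la plus probable ?",
}

# Manipulable elements with parallel values per language (step 2 of the approach).
elements = {
    "symptom": {"en": ["a rash", "dizziness"],
                "fr": ["une éruption cutanée", "des vertiges"]},
    "drug": {"en": ["drug A", "drug B", "drug C"],
             "fr": ["le médicament A", "le médicament B", "le médicament C"]},
}

def generate_items(stems, elements):
    """Step 3: cross every combination of element values into each language's stem."""
    keys = list(elements)
    ranges = [range(len(elements[k]["en"])) for k in keys]
    items = []
    for idxs in product(*ranges):
        for lang, stem in stems.items():
            binding = {k: elements[k][lang][i] for k, i in zip(keys, idxs)}
            items.append((lang, stem.format(**binding)))
    return items

items = generate_items(stems, elements)
# 2 symptoms x 3 drugs x 2 languages = 12 parallel items
```

Because each combination of element indices is rendered once per language, the English and French banks stay parallel item for item, which is the property the multilingual AIG method relies on.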


2016 ◽ Vol 28 (2) ◽ pp. 166-173 ◽ Author(s): Hollis Lai, Mark J. Gierl, Claire Touchie, Debra Pugh, André-Philippe Boulais, ...
