Using Automatic Item Generation to Create Solutions and Rationales for Computerized Formative Testing

2017 ◽  
Vol 42 (1) ◽  
pp. 42-57 ◽  
Author(s):  
Mark J. Gierl ◽  
Hollis Lai

Computerized testing provides many benefits to support formative assessment. However, the advent of computerized formative testing has also raised formidable new challenges, particularly in the area of item development. Large numbers of diverse, high-quality test items are required because items are continuously administered to students. Hence, hundreds of items are needed to develop the banks necessary for computerized formative testing. One promising approach that may be used to address this test development challenge is automatic item generation. Automatic item generation is a relatively new but rapidly evolving research area where cognitive and psychometric modeling practices are used to produce items with the aid of computer technology. The purpose of this study is to describe a new method for generating both the items and the rationales required to solve the items to produce the required feedback for computerized formative testing. The method for rationale generation is demonstrated and evaluated in the medical education domain.
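
As a rough, non-authoritative sketch of the idea in the abstract above, the snippet below pairs a template-based item model with a rationale template so that each generated item carries the feedback needed to explain its keyed answer. The stem wording, element values ("finding X", "drug A"), and answer key are placeholders invented for illustration, not the cognitive model used in the study.

```python
# Minimal sketch of generating items together with solution rationales
# from a single item model. All content values are hypothetical placeholders.
from itertools import product

stem_model = "A patient presents with {finding} after starting {drug}. What is the most likely cause?"
rationale_model = ("{answer} is keyed because {drug} is the option most "
                   "directly linked to {finding} in the item model.")

elements = {
    "finding": ["finding X", "finding Y"],
    "drug": ["drug A", "drug B"],
}

# Hypothetical keyed answer for each element combination.
answer_key = {
    ("finding X", "drug A"): "cause 1",
    ("finding X", "drug B"): "cause 2",
    ("finding Y", "drug A"): "cause 3",
    ("finding Y", "drug B"): "cause 4",
}

def generate():
    """Yield each generated item with its keyed answer and feedback rationale."""
    for finding, drug in product(elements["finding"], elements["drug"]):
        stem = stem_model.format(finding=finding, drug=drug)
        answer = answer_key[(finding, drug)]
        rationale = rationale_model.format(answer=answer, drug=drug, finding=finding)
        yield stem, answer, rationale

for stem, answer, rationale in generate():
    print(stem, "|", answer, "|", rationale)
```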

2016 ◽  
Vol 37 (3) ◽  
pp. 39-61 ◽  
Author(s):  
Mark J. Gierl ◽  
Hollis Lai

Testing agencies require large numbers of high-quality items that are produced in a cost-effective and timely manner. Increasingly, these agencies also require items in different languages. In this paper we present a methodology for multilingual automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer technology. We describe a three-step AIG approach where, first, test development specialists identify the content that will be used for item generation. Next, the specialists create item models to specify the content in the assessment task that must be manipulated to produce new items. Finally, elements in the item model are manipulated with computer algorithms to produce new items. Language is added in the item model step to permit multilingual AIG. We illustrate our method by generating 360 English and 360 French medical education items. The importance of item banking in multilingual test development is also discussed.
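
A minimal sketch of the three-step workflow described above, with the language layer added at the item-model step: content is specified as elements, the item model holds one stem per language, and a simple algorithm assembles every combination. The stems, element values, and two-language dictionary structure are illustrative assumptions, not the models behind the 360 English and 360 French items reported in the paper.

```python
# Minimal sketch of multilingual AIG with a language layer in the item model.
# Content values are placeholders.
from itertools import product

# Step 2: item model with one stem per language.
stem_models = {
    "en": "A patient taking {drug} develops {finding}. What is the best next step?",
    "fr": "Un patient prenant {drug} présente {finding}. Quelle est la meilleure conduite à tenir ?",
}

# Elements to manipulate, with a value list per language.
elements = {
    "drug": {"en": ["drug A", "drug B"], "fr": ["médicament A", "médicament B"]},
    "finding": {"en": ["finding X", "finding Y"], "fr": ["signe X", "signe Y"]},
}

# Step 3: algorithmic assembly of every element combination, per language.
def generate(language):
    drugs = elements["drug"][language]
    findings = elements["finding"][language]
    for drug, finding in product(drugs, findings):
        yield stem_models[language].format(drug=drug, finding=finding)

for lang in ("en", "fr"):
    for item in generate(lang):
        print(lang, item)
```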


Author(s):  
Mark Gierl ◽  
Hollis Lai ◽  
Xinxin Zhang

Changes to the design and development of educational tests are resulting in unprecedented demand for a large supply of content-specific test items. One way to address this growing demand is with automatic item generation. Automatic item generation is the process of using models to create test items with the aid of computer technology. The purpose of this chapter is to describe a method for generating test items and to illustrate it with an example from the medical health sciences.


Author(s):  
Mark Gierl ◽  
Okan Bulut ◽  
Xinxin Zhang

Computerized testing provides many benefits to support formative assessment in higher education. However, the advent of computerized formative testing has raised daunting new challenges, particularly in the areas of item development and test construction. Large numbers of items are required because they are continuously administered to students. Automatic item generation is a relatively new but rapidly evolving assessment technology that may be used to address this challenge. Once the items are generated, tests must be assembled that measure the same content areas with the same difficulty level using different sets of items. Automated test assembly is an assessment technology that may be used to address this challenge. To date, the use of automated methods for item development and test construction has been limited. The purpose of this chapter is to address these limitations by describing and illustrating how recent advances in the technology of assessment can be used to permit computerized formative testing to promote personalized learning.
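
The following sketch illustrates the kind of constraint that automated test assembly addresses: building parallel forms that cover the same content blueprint at a similar mean difficulty while drawing on disjoint items. The synthetic item bank, the content areas, and the greedy selection heuristic are assumptions for illustration; operational assembly engines typically solve this as a mixed-integer optimization problem.

```python
# Minimal sketch of automated test assembly under simple constraints:
# two parallel forms, same content blueprint, similar mean difficulty,
# disjoint items. The bank and heuristic are synthetic and illustrative.
import random

random.seed(0)
CONTENT_AREAS = ["cardiology", "respirology", "nephrology"]

# Synthetic bank: each item has a content area and a difficulty (p-value).
bank = [{"id": i,
         "area": random.choice(CONTENT_AREAS),
         "difficulty": round(random.uniform(0.3, 0.9), 2)}
        for i in range(60)]

def assemble(bank, blueprint, target_difficulty, used_ids):
    """Greedily pick unused items per content area, closest to the target difficulty."""
    form = []
    for area, n_items in blueprint.items():
        candidates = [it for it in bank
                      if it["area"] == area and it["id"] not in used_ids]
        candidates.sort(key=lambda it: abs(it["difficulty"] - target_difficulty))
        chosen = candidates[:n_items]
        used_ids.update(it["id"] for it in chosen)
        form.extend(chosen)
    return form

blueprint = {"cardiology": 4, "respirology": 3, "nephrology": 3}
used = set()
form_a = assemble(bank, blueprint, target_difficulty=0.6, used_ids=used)
form_b = assemble(bank, blueprint, target_difficulty=0.6, used_ids=used)

for name, form in (("Form A", form_a), ("Form B", form_b)):
    mean_p = sum(it["difficulty"] for it in form) / len(form)
    print(name, "mean difficulty:", round(mean_p, 2))
```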


Author(s):  
Hollis Lai ◽  
Mark Gierl

Increasing demands on workers' knowledge have prompted more frequent assessment and feedback to facilitate their learning. This demand, together with the shift to computerized assessment, requires new test items at a rate beyond what content specialists can feasibly produce. Automatic item generation (AIG) is a promising method that has begun to demonstrate its utility. The purpose of this chapter is to describe how AIG can be used to generate test items in the selected-response (i.e., multiple-choice) format. To ensure our description is both concrete and practical, we illustrate template-based item generation using an example from the complex problem-solving domain of the medical health sciences. The chapter concludes with a description of two directions for future research.
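
A minimal sketch of template-based generation for the selected-response format, assuming the keyed option and distractors are both drawn from the item model's element values. The stem, the condition-to-finding mapping, and the four-option layout are hypothetical placeholders, not the medical item models discussed in the chapter.

```python
# Minimal sketch of generating multiple-choice items from a template:
# the keyed option comes from the element that fills the stem, and
# distractors are drawn from the remaining element values.
import random

random.seed(1)

stem_model = "Which laboratory finding is most consistent with {condition}?"

# Hypothetical mapping of conditions to their characteristic (keyed) finding.
key_by_condition = {
    "condition A": "finding 1",
    "condition B": "finding 2",
    "condition C": "finding 3",
    "condition D": "finding 4",
}

def generate_mc_items(n_options=4):
    """Yield (stem, options, key_index) for each condition in the model."""
    all_findings = list(key_by_condition.values())
    for condition, keyed in key_by_condition.items():
        distractors = [f for f in all_findings if f != keyed]
        options = random.sample(distractors, n_options - 1) + [keyed]
        random.shuffle(options)
        yield stem_model.format(condition=condition), options, options.index(keyed)

for stem, options, key_index in generate_mc_items():
    print(stem, options, "key:", key_index)
```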


2021 ◽  
Author(s):  
Mark J. Gierl ◽  
Hollis Lai ◽  
Vasily Tanygin

2016 ◽  
Vol 80 (3) ◽  
pp. 339-347 ◽  
Author(s):  
Hollis Lai ◽  
Mark J. Gierl ◽  
B. Ellen Byrne ◽  
Andrew I. Spielman ◽  
David M. Waldschmidt

2020 ◽  
pp. 016327872090891
Author(s):  
Eric Shappell ◽  
Gregory Podolej ◽  
James Ahn ◽  
Ara Tekian ◽  
Yoon Soo Park

Mastery learning assessments have been described in simulation-based educational interventions; however, studies applying mastery learning to multiple-choice tests (MCTs) are lacking. This study investigates an approach to item generation and standard setting for mastery learning MCTs and evaluates the consistency of learner performance across sequential tests. Item models, variables for question stems, and mastery standards were established using a consensus process. Two test forms were created from the item models and administered at two training programs. The primary outcome, the test–retest consistency of pass–fail decisions across versions of the test, was 94% (κ = .54). The decision-consistency classification index was .85, and item-level consistency was 90% (κ = .77, SE = .03). These findings support the use of automatic item generation to create mastery MCTs that produce consistent pass–fail decisions. This technique broadens the range of assessment methods available to educators who require serial MCT testing, including mastery learning curricula.
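
For readers unfamiliar with the consistency statistics reported above, the sketch below computes percentage agreement and Cohen's kappa for pass-fail decisions across two test forms; the scores and the mastery cut score are synthetic values, not the study data.

```python
# Minimal sketch of decision-consistency checks across two test forms:
# percentage agreement and Cohen's kappa for pass-fail classifications.
# All scores and the cut score are synthetic placeholders.
def pass_fail(scores, cut):
    return [1 if s >= cut else 0 for s in scores]

def agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two binary classifications."""
    n = len(a)
    p_observed = agreement(a, b)
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical percent-correct scores for the same learners on Forms A and B.
form_a_scores = [0.92, 0.81, 0.75, 0.88, 0.69, 0.95, 0.84, 0.72]
form_b_scores = [0.90, 0.78, 0.80, 0.85, 0.71, 0.93, 0.79, 0.74]
cut = 0.80  # mastery standard (hypothetical)

a, b = pass_fail(form_a_scores, cut), pass_fail(form_b_scores, cut)
print("pass-fail agreement:", agreement(a, b))
print("Cohen's kappa:", round(cohens_kappa(a, b), 2))
```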

