automated test assembly
Recently Published Documents

Total documents: 29 (five years: 7)
H-index: 8 (five years: 0)

Psych, 2021, Vol 3 (2), pp. 96-112. Author(s): Benjamin Becker, Dries Debeer, Karoline A. Sachse, Sebastian Weirich

Combining items from an item pool into test forms (test assembly) is a frequent task in psychological and educational testing. Although efficient methods for automated test assembly exist, these are often unknown or unavailable to practitioners. In this paper, we present the R package eatATA, which makes several mixed-integer programming solvers available for automated test assembly in R. We describe the general functionality and the common workflow of eatATA using a minimal example. We also provide four more elaborate use cases of automated test assembly: (a) the assembly of multiple test forms for a pilot study; (b) the assembly of blocks of items for a multiple-matrix booklet design in the context of a large-scale assessment; (c) the assembly of two linear test forms for individual diagnostic purposes; (d) the assembly of multi-stage testing modules for individual diagnostic purposes. All use cases are accompanied by example item pools and commented R code.
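
To make the kind of problem such packages solve concrete, here is a minimal sketch of single-form assembly stated directly as a 0/1 program with the lpSolve package. This is not eatATA's own interface, which provides higher-level constraint helpers and several solver back ends; the pool, content areas, and counts below are all invented for illustration.

## Minimal single-form assembly as a 0/1 program (hypothetical pool;
## not eatATA's API). Decision variables: x_i = 1 if item i enters the form.
library(lpSolve)

set.seed(1)
n_items <- 30
info    <- runif(n_items, 0.2, 1.0)           # item information at a target theta
content <- sample(c("algebra", "geometry"), n_items, replace = TRUE)

## Maximize total information; exactly 10 items; at least 4 per content area.
const_mat <- rbind(
  rep(1, n_items),                            # form length
  as.numeric(content == "algebra"),           # algebra items selected
  as.numeric(content == "geometry")           # geometry items selected
)
sol <- lp(direction = "max", objective.in = info,
          const.mat = const_mat, const.dir = c("=", ">=", ">="),
          const.rhs = c(10, 4, 4), all.bin = TRUE)

which(sol$solution == 1)                      # indices of the selected items

Packages such as eatATA wrap exactly this translation step, so practitioners can state constraints like "10 items per form" without building constraint matrices by hand.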


Assessment, 2021, pp. 107319112110001. Author(s): Manuel Martín-Fernández, Enrique Gracia, Marisol Lila

Attitudes of acceptability of intimate partner violence against women (IPVAW) are considered one of the main risk factors for this type of violence. The aim of this study is to develop and validate a short version of the Acceptability of IPVAW scale, the A-IPVAW-8, for large-scale studies where space and time are limited. A panel of experts was asked to assess item content validity. Two samples were recruited to assemble an 8-item short version of the scale using automated test assembly and to reassess the psychometric properties of the A-IPVAW-8 in an independent sample. Results showed that the A-IPVAW-8 had adequate internal consistency (α = .72-.76, ω = .73-.81), a stable one-factor latent structure (comparative fit index [CFI] = 0.94, Tucker–Lewis index = 0.92, root mean square error of approximation = 0.077), and evidence of validity based on its relationships to other variables in both samples, and it was also invariant across gender (ΔCFI < |0.02|). This study provides a short, easy-to-use tool for evaluating attitudes of acceptability of IPVAW in large-scale studies.
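
The abstract does not spell out the study's objective function or constraints, so the following is only a hedged illustration of how a short form like the A-IPVAW-8 can be assembled: select 8 items from a hypothetical 20-item pool, maximizing summed item discrimination while keeping at least two items per content facet. All item values, facet names, and counts are assumptions, not the study's data.

## Hypothetical short-form selection in the spirit of the A-IPVAW-8 study.
library(lpSolve)

set.seed(2)
n_items <- 20
discrim <- runif(n_items, 0.4, 1.6)           # assumed IRT discrimination values
facet   <- rep(c("psychological", "physical", "coercion", "control"), each = 5)

## Exactly 8 items; at least 2 from each of the four facets.
const_mat <- rbind(rep(1, n_items),
                   t(sapply(unique(facet), function(f) as.numeric(facet == f))))
const_dir <- c("=", rep(">=", 4))
const_rhs <- c(8, rep(2, 4))

sol <- lp("max", discrim, const_mat, const_dir, const_rhs, all.bin = TRUE)
which(sol$solution == 1)                      # the 8 items of the short form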


Psych, 2020, Vol 2 (4), pp. 315-337. Author(s): Giada Spaccapanico Proietti, Mariagiulia Matteucci, Stefania Mignani

In testing situations, automated test assembly (ATA) is used to assemble single or multiple test forms that share the same psychometric characteristics, given a set of constraints, by means of dedicated solvers. However, in complex situations typical of large-scale assessments, ATA models may be infeasible due to the large number of decision variables and constraints involved. The purpose of this paper is to formalize a standard procedure and two strategies, additive and subtractive, for overcoming practical ATA concerns in large-scale assessments, and to show their effectiveness in two case studies. The MAXIMIN and MINIMAX ATA methods are used to assemble multiple test forms based on item response theory models for binary data. The main results show that the additive strategy is able to identify the specific constraints that make the model infeasible, while the subtractive strategy is faster but less accurate and may not always yield an optimal solution. Overall, the procedures produce parallel test forms with similar measurement precision and content, and they minimize the number of items shared among the test forms. Further research could investigate the properties of the proposed approaches under more complex testing conditions, such as multi-stage testing, and could blend the approaches to obtain the solution that satisfies the largest set of constraints.
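
As a concrete illustration of the MAXIMIN idea the paper builds on, the sketch below assembles two non-overlapping forms from an invented 40-item pool, maximizing a lower bound y on each form's information at a single target theta. The paper's own models also cover multiple theta points, MINIMAX variants, and content constraints, all omitted here; every number in this sketch is hypothetical.

## MAXIMIN sketch: two disjoint 15-item forms from a hypothetical 40-item pool.
## Variables: x[i, f] = 1 if item i is in form f, plus one continuous y that
## bounds every form's information from below; the objective maximizes y.
library(lpSolve)

set.seed(3)
n <- 40; n_forms <- 2; len <- 15
info <- runif(n, 0.2, 1.2)              # item information at one target theta
nv   <- n * n_forms + 1                 # x variables followed by y

rows <- list()
for (f in 1:n_forms) {                  # information: sum_i info_i * x_if - y >= 0
  r <- rep(0, nv); r[((f - 1) * n + 1):(f * n)] <- info; r[nv] <- -1
  rows[[length(rows) + 1]] <- r
}
for (f in 1:n_forms) {                  # length: each form gets exactly `len` items
  r <- rep(0, nv); r[((f - 1) * n + 1):(f * n)] <- 1
  rows[[length(rows) + 1]] <- r
}
for (i in 1:n) {                        # overlap: each item in at most one form
  r <- rep(0, nv); r[i + (0:(n_forms - 1)) * n] <- 1
  rows[[length(rows) + 1]] <- r
}

sol <- lp("max", c(rep(0, n * n_forms), 1), do.call(rbind, rows),
          c(rep(">=", n_forms), rep("=", n_forms), rep("<=", n)),
          c(rep(0, n_forms), rep(len, n_forms), rep(1, n)),
          binary.vec = 1:(n * n_forms))            # y stays continuous
matrix(sol$solution[1:(n * n_forms)], nrow = n)    # column f marks form f's items

Infeasibility of the kind the paper studies arises when constraints like these conflict; the additive strategy pinpoints which added constraint first breaks feasibility.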


2019, Vol 44 (3), pp. 219-233. Author(s): Can Shao, Silu Liu, Hongwei Yang, Tsung-Hsun Tsai

Mathematical programming has been widely used by professionals in testing agencies as a tool to automatically construct equivalent test forms. This study introduces the linear programming capabilities (modeling language plus solvers) of SAS Operations Research as a platform for rigorously engineering tests to specifications in an automated manner. To that end, real items from a medical licensing test are used to demonstrate the simultaneous assembly of multiple parallel test forms under two separate linear programming scenarios: (a) constraint satisfaction (one problem) and (b) combinatorial optimization (three problems). In the four problems from the two scenarios, the forms are assembled subject to various content and psychometric constraints. Assembled forms are then assessed using psychometric methods to ensure equivalence with respect to all test specifications. Results from this study support SAS as a reliable and easy-to-implement platform for form assembly. Annotated code is provided to promote further research and operational work in this area.
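
The study's SAS OPTMODEL code is not reproduced here. As an assumption-laden sketch in R, the two scenario types can be contrasted as follows: constraint satisfaction becomes a feasibility check with a constant objective, while combinatorial optimization minimizes some criterion, here the gap between a form's summed difficulty and a target. The pool, bounds, and target are all invented.

## (a) Constraint satisfaction: does any 10-item form have summed difficulty
## within [-0.5, 0.5]? A zero objective turns the solve into a feasibility check.
library(lpSolve)

set.seed(4)
n <- 30; len <- 10
b <- rnorm(n)                          # hypothetical item difficulties
target <- 0                            # desired total difficulty of the form

const_mat <- rbind(rep(1, n), b, b)
sol_a <- lp("max", rep(0, n), const_mat, c("=", ">=", "<="),
            c(len, target - 0.5, target + 0.5), all.bin = TRUE)
sol_a$status                           # 0 means a feasible form exists

## (b) Optimization: minimize d subject to |sum(b_i x_i) - target| <= d,
## encoded as sum(b x) - d <= target and sum(b x) + d >= target.
obj <- c(rep(0, n), 1)                 # variables: x_1..x_n, then continuous d
const_mat <- rbind(c(rep(1, n), 0), c(b, -1), c(b, 1))
sol_b <- lp("min", obj, const_mat, c("=", "<=", ">="),
            c(len, target, target), binary.vec = 1:n)
sol_b$objval                           # smallest achievable |difficulty - target|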


Author(s): Mark Gierl, Okan Bulut, Xinxin Zhang

Computerized testing provides many benefits to support formative assessment in higher education. However, the advent of computerized formative testing has raised daunting new challenges, particularly in the areas of item development and test construction. Large numbers of items are required because tests are administered to students continuously. Automatic item generation is a relatively new but rapidly evolving assessment technology that may be used to address this challenge. Once the items are generated, tests must be assembled that measure the same content areas at the same difficulty level using different sets of items. Automated test assembly is an assessment technology that may be used to address this challenge. To date, the use of automated methods for item development and test construction has been limited. The purpose of this chapter is to address these limitations by describing and illustrating how recent advances in assessment technology can be used to support computerized formative testing that promotes personalized learning.

