Item Selection on The Moodle-Based Computerized Adaptive Test

2021 · Vol 2111 (1) · pp. 012033
Author(s): Haryanto, Y Neng-Shu, S Hadi, M Ali, AF Husna, ...

Abstract In this industrial era, technology has entered all areas of life. Five central components support performance: digital physical devices, production tools, programs, interfaces, and networks. To support industrial technology, the Modular Object-Oriented Dynamic Learning Environment (Moodle) has been equipped with adaptive test facilities. Adaptive Moodle can be used to organize a communicative and interactive testing process through its communication features (chat, messaging, and forums), and it can also be used to administer online tests. An adaptive test adjusts the items it presents to the user’s ability. The research on item selection in the Moodle-based computerized adaptive test (CAT) yielded the following results: (1) the Moodle adaptive test worked successfully in accordance with the research objectives; (2) based on user responses, item selection in the Moodle adaptive test worked well for the exam; and (3) the Moodle adaptive test ran according to its function, namely adapting to the user’s ability.
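The abstract does not describe Moodle's selection algorithm itself; as a minimal sketch of what "adaptive to the user's ability" can mean in practice, the hypothetical Python loop below picks each next question to match the current ability estimate and nudges that estimate after every answer. The fixed-step update and all names are illustrative assumptions, not Moodle's actual implementation.

```python
def next_item(theta, difficulties, used):
    # Adaptive rule: present the unused item whose difficulty is
    # closest to the current ability estimate theta.
    unused = [i for i in difficulties if i not in used]
    return min(unused, key=lambda i: abs(difficulties[i] - theta))

def administer(difficulties, answer_fn, n_items=10, step=0.5):
    # Minimal CAT loop: move to harder items after a correct answer
    # and easier items after an incorrect one (fixed-step update; a
    # real CAT would use an IRT-based ability estimator instead).
    theta, used = 0.0, set()
    for _ in range(min(n_items, len(difficulties))):
        item = next_item(theta, difficulties, used)
        used.add(item)
        theta += step if answer_fn(item) else -step
    return theta
```

Here difficulties maps item IDs to difficulty values and answer_fn(item) reports whether the examinee answered that item correctly.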

2017 · Vol 41 (7) · pp. 495-511
Author(s): Guangming Ling, Yigal Attali, Bridgid Finn, Elizabeth A. Stone

Computer adaptive tests provide important measurement advantages over traditional fixed-item tests, but research on the psychological reactions of test takers to adaptive tests is lacking. In particular, it has been suggested that test-taker engagement, and possibly test performance as a consequence, could benefit from the control that adaptive tests have over the number of items examinees answer correctly. However, previous research on this issue found little support for this possibility. This study expands on previous research by examining the issue in the context of a mathematical ability assessment and by considering the possible effect of immediate feedback on response correctness on test engagement, test anxiety, time on task, and test performance. Middle school students completed a mathematics assessment under one of three test type conditions (fixed, adaptive, or easier adaptive), either with or without immediate feedback about the correctness of responses. Results showed little evidence of test type effects on performance: the easier adaptive test resulted in higher engagement and lower anxiety than either the adaptive or the fixed-item test, but no significant differences in performance were found across test types, although performance was significantly higher across all test types when students received immediate feedback. In addition, these effects were not related to ability level, as measured by state assessment achievement levels. The possibility that test experiences in adaptive tests may not in practice be significantly different from those in fixed-item tests is raised and discussed to explain the results of this and previous studies.


2021 · pp. 014662162110146
Author(s): Justin L. Kern, Edison Choe

This study investigates using response times (RTs) together with item responses in a computerized adaptive test (CAT) setting to enhance item selection and ability estimation and to control for differential speededness. Building on van der Linden’s hierarchical framework, an extended procedure for the joint estimation of ability and speed parameters in CAT is developed, called the joint expected a posteriori (J-EAP) estimator. It is shown that the J-EAP estimate of ability and speededness outperforms the standard maximum likelihood estimator (MLE) in terms of correlation, root mean square error, and bias. It is further shown that under the maximum information per time unit (MICT) item selection method, which uses the ability and speededness estimates directly, the J-EAP further reduces average examinee time spent and the variability in test times between examinees beyond the gains of this selection algorithm with the MLE, while maintaining estimation efficiency. The simulated test results are corroborated with test parameters derived from a real data example.
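For concreteness, here is a sketch of the MICT criterion under van der Linden's lognormal response-time model, in which ln T_i is normal with mean beta_i - tau and standard deviation 1/alpha_i, so E[T_i] = exp(beta_i - tau + 1/(2 alpha_i^2)). Item information is shown in 2PL form for brevity; the parameter names are assumptions, not the authors' code.

```python
import math

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item at ability theta.
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def expected_rt(tau, alpha, beta):
    # Expected response time under the lognormal RT model:
    # ln T ~ N(beta - tau, 1/alpha^2), so E[T] = exp(beta - tau + 1/(2 alpha^2)).
    return math.exp(beta - tau + 1.0 / (2.0 * alpha * alpha))

def mict_pick(theta_hat, tau_hat, items, used):
    # Maximum information per time unit: choose the unused item with the
    # highest information-to-expected-time ratio for this examinee.
    best, best_val = None, -1.0
    for idx, (a, b, alpha, beta) in enumerate(items):
        if idx in used:
            continue
        val = info_2pl(theta_hat, a, b) / expected_rt(tau_hat, alpha, beta)
        if val > best_val:
            best, best_val = idx, val
    return best
```

Because the expected time depends on the examinee's estimated speed tau_hat, the same pool ranks differently for fast and slow examinees, which is how the method reduces and equalizes testing time.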


SAINTEKBU · 2016 · Vol 9 (1)
Author(s): Aslam Fatkhudin, M. Fikri Hidayatullah

One form of computer-based testing is the Computerized Adaptive Test (CAT), a testing system in which the items given to a participant are adapted to the participant's ability. The measurement method usually applied in CAT is Item Response Theory (IRT). The most commonly used IRT model today is the three-parameter logistic (3PL) model, whose parameters are discrimination, difficulty, and guessing. However, 3PL IRT models do not yet provide fully objective information about a test taker's ability; the test takers' opinions of the tested items should also be considered. This study therefore combines CAT with a four-parameter (4PL) IRT model. The developed CAT uses four parameters: discrimination, difficulty, guessing, and a fourth parameter obtained from questionnaires. The items used were taken from the first-semester final exam (UAS 1) in English. To estimate the item parameters, the 40 best-scoring answer sets were sampled from a total of 172 students spread across six classes. The 4PL-based CAT application was then tested against a 3PL-based CAT. The results show that the CAT application combined with the 4PL IRT model can measure a test taker's ability in a shorter time, and the probability that participants correctly answer the administered items tends to be better than with the 3PL IRT model. Keywords: Ability, CAT, IRT, 3PL, 4PL, Probability, Test
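For reference, the textbook 3PL and 4PL item response functions in Python. Note that in the standard 4PL the fourth parameter is an upper asymptote (a "slip" ceiling), whereas this study derives its fourth parameter from questionnaires; the sketch shows only the standard forms.

```python
import math

def p_3pl(theta, a, b, c):
    # 3PL: discrimination a, difficulty b, guessing (lower asymptote) c.
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_4pl(theta, a, b, c, d):
    # Standard 4PL adds an upper asymptote d < 1, so even very able
    # examinees answer correctly with probability at most d.
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```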


1995 · Vol 13 (2) · pp. 151-162
Author(s): Mary E. Lunz, Betty Bergstrom

Computerized adaptive testing (CAT) uses a computer algorithm to construct and score the best possible individualized or tailored test for each candidate. The computer also provides an absolute record of all responses and changes to responses, as well as their effects on candidate performance. This level of detail makes it possible to track initial responses and response alterations, their effect on estimated candidate ability measures, and the statistical performance of the examination. The purpose of this study was to track the effect of candidate response patterns on a computerized adaptive test. A ninety-item certification examination was divided into nine units of ten items each to track the effect of initial responses and response alterations on ability estimates and test precision across the nine test units. Test precision was affected most by response alterations during early segments of the test. Candidates generally benefited from altering responses, although individual candidates showed different patterns of alteration across test segments. Overall, test precision was only minimally affected, suggesting that the tailoring of a CAT is robust to response alterations.
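As a hypothetical illustration of the analysis, the sketch below estimates ability from the initial and the final (post-alteration) response patterns of one ten-item unit using a Rasch maximum likelihood estimate and reports the shift. The study's actual scoring model is not specified in the abstract, and the MLE assumes a mixed response pattern (it diverges for all-correct or all-incorrect units).

```python
import math

def rasch_mle(responses, difficulties, iters=25):
    # Rasch maximum likelihood ability estimate via Newton-Raphson.
    # responses: list of 0/1; difficulties: matching item b-values.
    theta = 0.0
    for _ in range(iters):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))
        hess = -sum(p * (1.0 - p) for p in ps)
        if abs(grad) < 1e-8:
            break
        theta -= grad / hess
    return theta

def alteration_effect(initial, final, difficulties):
    # Shift in estimated ability attributable to answer changes within
    # one test unit (positive means the changes helped the candidate).
    return rasch_mle(final, difficulties) - rasch_mle(initial, difficulties)
```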


2019 · Vol 79 (6) · pp. 1133-1155
Author(s): Emre Gönülateş

This article introduces the Quality of Item Pool (QIP) Index, a novel approach to quantifying the adequacy of a computerized adaptive test’s item pool for a given set of test specifications and examinee population. The index ranges from 0 to 1, with values close to 1 indicating that the item pool presents optimal items to examinees throughout the test. It can be used to compare different item pools or to diagnose the deficiencies of a given pool by quantifying its deviation from a perfect item pool. Simulation studies were conducted to evaluate the capacity of the index to detect the inadequacies of two simulated item pools, and its value was compared with existing methods of evaluating the quality of computerized adaptive tests (CAT). Results showed that the QIP Index can detect even slight deviations between a proposed item pool and an optimal item pool, and that it can uncover shortcomings of an item pool that other CAT outcomes cannot detect. CAT developers can use the QIP Index to diagnose the weaknesses of an item pool and as a guide for improving item pools.
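The abstract does not give the QIP formula, so the following is only a toy stand-in under an assumed definition: compare the information each administered item actually delivered with the information an ideal item would have delivered at the same ability estimate, and average the ratios, so that 1.0 means every selection was optimal.

```python
def qip_toy(delivered_info, optimal_info):
    # Toy pool-quality index (assumed definition, not the article's):
    # mean ratio of delivered to best-attainable item information,
    # taken over the administered items of a simulated test.
    ratios = [d / o for d, o in zip(delivered_info, optimal_info) if o > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0
```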


1984 · Vol 1 (4) · pp. 296-314
Author(s): Joseph P. Winnick, Francis X. Short

In order to enhance the physical fitness development of individuals with selected handicapping conditions, Winnick and Short (1984b) published a manual presenting the Project UNIQUE Physical Fitness Test and training program. This article presents criteria and supporting technical information pertaining to the selection of test items.


Methodology · 2018 · Vol 14 (4) · pp. 177-188
Author(s): Martin Schultze, Michael Eid

Abstract. In the construction of scales intended for use in cross-cultural studies, item selection needs to be guided not only by traditional criteria of item quality, but must also take information about the measurement invariance of the scale into account. We present an approach to automated item selection that frames the process as a combinatorial optimization problem and aims at finding a scale that fulfils predefined target criteria, such as measurement invariance across cultures. The search for an optimal solution is performed using an adaptation of the MAX-MIN Ant System algorithm. The approach is illustrated with an application to item selection for a personality scale assuming measurement invariance across multiple countries.
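Below is a compact sketch of a MAX-MIN Ant System applied to scale construction, assuming a fitness function that scores a candidate item set in [0, 1] (for instance, 1 minus a measurement-noninvariance penalty). All names are illustrative; this is not the authors' implementation.

```python
import random

def mmas_select(items, scale_len, fitness, n_ants=20, n_iter=100,
                rho=0.1, tau_min=0.1, tau_max=5.0):
    # Pheromone trails bias which items enter the scale; only the best
    # solution found so far deposits pheromone, and trails are clamped
    # to [tau_min, tau_max]: these are the MAX-MIN safeguards against
    # premature convergence.
    tau = {i: tau_max for i in items}
    best_scale, best_fit = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            scale = set()
            while len(scale) < scale_len:
                pool = [i for i in items if i not in scale]
                weights = [tau[i] for i in pool]
                scale.add(random.choices(pool, weights=weights)[0])
            fit = fitness(scale)
            if fit > best_fit:
                best_scale, best_fit = set(scale), fit
        for i in items:
            tau[i] *= 1.0 - rho              # evaporation
            if i in best_scale:
                tau[i] += rho * best_fit     # best-ant deposit
            tau[i] = min(max(tau[i], tau_min), tau_max)
    return best_scale, best_fit
```

The fitness function is where measurement-invariance information enters, for example by fitting a multi-group model to the candidate scale and penalizing noninvariant loadings.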

