Applying Genetic Algorithm to Generation of High-Dimensional Item Response Data

2015 ◽  
Vol 2015 ◽  
pp. 1-13
Author(s):  
ByoungWook Kim ◽  
JaMee Kim ◽  
WonGyu Lee

Item response data is an n × m matrix of the responses made by m examinees to a questionnaire consisting of n items. It is used to estimate examinee abilities and item parameters in educational evaluation. For such estimates to be valid, the simulation input data must reflect reality. This paper presents an effective combination of the genetic algorithm (GA) and Monte Carlo methods for generating item response data as simulation input that resembles real data. To this end, we generated four types of item response data using Monte Carlo methods and the GA, and evaluated how closely each generated data set represents the real item response data in terms of the item parameters (item difficulty and discrimination). We adopt two measures, root mean square error and Kullback-Leibler divergence, to compare item parameters between the real data and the four types of generated data. The results show that applying the GA to an initial population generated by Monte Carlo is the most effective approach, producing item response data most similar to the real data. This study is meaningful in that it shows the GA contributes to the generation of more realistic simulation input data.
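As a rough illustration of the Monte Carlo half of this pipeline, the sketch below simulates a 2PL item response matrix and compares parameter vectors by RMSE. The function names, sample sizes, and parameter ranges are illustrative assumptions, not the article's code:

```python
import math
import random

def p_correct(theta, a, b):
    """2PL IRT probability that an examinee of ability theta answers
    an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def monte_carlo_responses(thetas, items, rng):
    """Simulate an m-examinee by n-item binary response matrix."""
    return [[1 if rng.random() < p_correct(t, a, b) else 0
             for (a, b) in items]
            for t in thetas]

def rmse(estimated, true):
    """Root mean square error between two parameter vectors."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimated, true))
                     / len(true))

rng = random.Random(0)
thetas = [rng.gauss(0.0, 1.0) for _ in range(500)]        # abilities
items = [(rng.uniform(0.5, 2.0), rng.gauss(0.0, 1.0))     # (a, b) per item
         for _ in range(20)]
data = monte_carlo_responses(thetas, items, rng)
```

In the article's setup, matrices like `data` would seed the GA's initial population, with RMSE and Kullback-Leibler divergence scoring how closely item parameters recovered from generated data match those of the real data.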

2020 ◽  
Author(s):  
Jiawei LI ◽  
Tad Gonsalves

This paper presents a Genetic Algorithm approach to a specific examination timetabling problem common in Japanese universities. The model is programmed in Excel VBA and runs directly on Microsoft Excel worksheets. It uses a direct chromosome representation. To satisfy hard and soft constraints, a constraint-based initialization operation, a constraint-based crossover operation, and a penalty-point system are implemented. To further improve result quality, this paper introduces an improvement called initial-population pre-training. The proposed model was tested on real data from Sophia University, Tokyo, Japan. The model produces acceptable results, and a comparison of results shows that the initial-population pre-training approach improves result quality.
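The penalty-point idea can be sketched in a few lines (in Python rather than the paper's VBA; the constraint weights and data layout are hypothetical):

```python
import itertools

HARD_PENALTY = 1000   # hypothetical weight for a hard-constraint violation
SOFT_PENALTY = 1      # hypothetical weight for a soft-constraint violation

def penalty(timetable, enrollments):
    """Penalty points for an exam timetable.

    timetable: dict mapping exam -> time slot (a direct chromosome
    representation: one gene per exam).
    enrollments: dict mapping student -> list of that student's exams.
    """
    points = 0
    for exams in enrollments.values():
        for e1, e2 in itertools.combinations(exams, 2):
            if timetable[e1] == timetable[e2]:
                points += HARD_PENALTY    # clash: one student, two exams, same slot
            elif abs(timetable[e1] - timetable[e2]) == 1:
                points += SOFT_PENALTY    # back-to-back exams for one student
    return points
```

A GA would then minimize `penalty` over chromosomes, with constraint-based initialization and crossover keeping hard violations rare from the start.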


2020 ◽  
Vol 44 (5) ◽  
pp. 362-375
Author(s):  
Tyler Strachan ◽  
Edward Ip ◽  
Yanyan Fu ◽  
Terry Ackerman ◽  
Shyh-Huei Chen ◽  
...  

As a method to derive a “purified” measure along a dimension of interest from response data that are potentially multidimensional in nature, the projective item response theory (PIRT) approach requires first fitting a multidimensional item response theory (MIRT) model to the data before projecting onto a dimension of interest. This study aims to explore how accurate the PIRT results are when the estimated MIRT model is misspecified. Specifically, we focus on using a (potentially misspecified) two-dimensional (2D)-MIRT for projection because of its advantages, including interpretability, identifiability, and computational stability, over higher dimensional models. Two large simulation studies (I and II) were conducted. Both studies examined whether the fitting of a 2D-MIRT is sufficient to recover the PIRT parameters when multiple nuisance dimensions exist in the test items, which were generated, respectively, under compensatory MIRT and bifactor models. Various factors were manipulated, including sample size, test length, latent factor correlation, and number of nuisance dimensions. The results from simulation studies I and II showed that the PIRT was overall robust to a misspecified 2D-MIRT. Smaller third and fourth simulation studies were done to evaluate recovery of the PIRT model parameters when the correctly specified higher dimensional MIRT or bifactor model was fitted with the response data. In addition, a real data set was used to illustrate the robustness of PIRT.
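For reference, the compensatory 2D-MIRT used for projection models the success probability as a logistic function of a weighted sum of the two latent traits. A minimal sketch (symbols assumed, not taken from the study's code):

```python
import math

def p_mirt2d(theta, a, d):
    """Compensatory 2D-MIRT item response function: the two latent
    traits combine additively, so a deficit on one dimension can be
    offset by the other. theta = (theta1, theta2), a = (a1, a2),
    d is the item intercept."""
    logit = a[0] * theta[0] + a[1] * theta[1] + d
    return 1.0 / (1.0 + math.exp(-logit))
```

Setting a2 = 0 recovers a unidimensional 2PL item, which hints at why a weakly loading nuisance dimension perturbs the projection only mildly.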


Processes ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 513
Author(s):  
Elisabete Alberdi ◽  
Leire Urrutia ◽  
Aitor Goti ◽  
Aitor Oyarbide-Zubillaga

Calculating adequate vehicle routes for collecting municipal waste remains an unsolved issue, even though many solutions for this process can be found in the literature. A gap still exists between academics and practitioners in the field. One apparent reason for this rift is that academic tools are often not easy for actual users to handle and maintain. In this work, the problem of municipal waste collection is modeled using a simple but efficient, and especially easy-to-maintain, solution. Real data have been used, and the problem has been solved using a Genetic Algorithm (GA). Computations have been done in two different ways: using a completely random initial population, and including a seed in this initial population. To guarantee that the solution is efficient, the performance of the genetic algorithm has been compared with another well-performing algorithm, Variable Neighborhood Search (VNS). Three problems of different sizes have been solved and, in all cases, a significant improvement has been obtained. A total reduction of 40% of itineraries is attained, with the subsequent reduction of emissions and costs.
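A minimal sketch of the two initialization strategies compared above, assuming routes are encoded as permutations of collection stops (the function name and encoding are illustrative assumptions):

```python
import random

def initial_population(n_individuals, stops, seed_route=None, rng=None):
    """Build a GA initial population of candidate routes.

    With seed_route=None this is the fully random variant; passing a
    known-good route (e.g. the itinerary currently driven) gives the
    seeded variant, where one individual starts from practitioner
    knowledge and the rest are random permutations.
    """
    rng = rng or random.Random()
    population = []
    if seed_route is not None:
        population.append(list(seed_route))
    while len(population) < n_individuals:
        route = list(stops)
        rng.shuffle(route)           # random permutation of the stops
        population.append(route)
    return population
```

Seeding gives the GA a feasible, reasonably good starting point without constraining the rest of the search.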


1993 ◽  
Vol 18 (1) ◽  
pp. 41-68 ◽  
Author(s):  
Ratna Nandakumar ◽  
William Stout

This article provides a detailed investigation of Stout’s statistical procedure (the computer program DIMTEST) for testing the hypothesis that an essentially unidimensional latent trait model fits observed binary item response data from a psychological test. One finding was that DIMTEST may fail to perform as desired in the presence of guessing when coupled with many highly discriminating items. A revision of DIMTEST is proposed to overcome this limitation. Also, an automatic approach is devised to determine the size of the assessment subtests. Further, an adjustment is made to the estimated standard error of the statistic on which DIMTEST depends. These three refinements have led to an improved procedure that is shown in simulation studies to adhere closely to the nominal level of significance while achieving considerably greater power. Finally, DIMTEST is validated on a selection of real data sets.


1986 ◽  
Vol 11 (2) ◽  
pp. 91-115 ◽  
Author(s):  
David A. Harrison

Multidimensional item response data were created from a hierarchical factor model under a variety of conditions. The strength of a second-order general factor, the number of first-order common factors, the distribution of items loading on those common factors, and the number of items in simulated tests were systematically manipulated. The computer program LOGIST effectively recovered both item parameters and trait parameters implied by the general factor in nearly all of the experimental conditions. Implications of these findings for computerized adaptive testing, investigations of item bias, and other applications of item response theory models are discussed.
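One common way to simulate data of this kind is to draw first-order factor scores whose correlation is induced by a second-order general factor. The sketch below uses a hypothetical single-loading parameterization, not necessarily the article's exact design:

```python
import math
import random

def hierarchical_thetas(n_persons, n_groups, g_loading, rng):
    """Simulate first-order common-factor scores driven by a
    second-order general factor. g_loading in [0, 1] controls the
    strength of the general factor; the unique part is scaled so
    each first-order score keeps unit variance."""
    scores = []
    for _ in range(n_persons):
        g = rng.gauss(0.0, 1.0)                       # general factor score
        unique = math.sqrt(1.0 - g_loading ** 2)      # unique-part scale
        scores.append([g_loading * g + unique * rng.gauss(0.0, 1.0)
                       for _ in range(n_groups)])
    return scores
```

Item responses would then be generated from these first-order scores; fitting a unidimensional model such as LOGIST recovers the dimension implied by the general factor when `g_loading` is strong.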


Author(s):  
Cariappa M. M. ◽  
Mydhili K. Nair

Rough set theory is an efficient tool for imperfect data analysis, especially for resolving ambiguities, classifying raw data, and generating rules from input data. It can be applied in multiple domains such as banking and medicine, wherever decisions must be made dynamically and appropriate rules generated. In this paper, we focus on the travel and tourism domain, specifically Web-based applications whose business processes are run by Web services. The current trend is to deploy business processes as composed web services, thereby providing value-added services to the application developers who consume them. We use the Genetic Algorithm (GA), an evolutionary computing technique, for composing web services. The GA suffers from the innate problems of long execution times when the initial population (input data) is large and of a low hit rate (success rate). In this paper, we present implementation results for a new technique that addresses this problem by applying two key concepts of rough set theory, namely lower and upper approximation and equivalence classes, to generate if-then decision-support rules that restrict the initial population of web services given to the genetic algorithm for composition.
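The two rough-set concepts the authors apply are straightforward to state in code. A minimal sketch, assuming the equivalence classes induced by the condition attributes are given as sets (the function name is ours):

```python
def approximations(equiv_classes, target):
    """Rough-set lower and upper approximations of `target`.

    The lower approximation collects equivalence classes certainly
    inside the target set; the upper approximation collects classes
    that possibly belong to it (any overlap)."""
    target = set(target)
    lower, upper = set(), set()
    for cls in equiv_classes:
        cls = set(cls)
        if cls <= target:
            lower |= cls        # class entirely inside the target
        if cls & target:
            upper |= cls        # class overlapping the target
    return lower, upper
```

In the paper's setting, rules derived from such approximations prune the candidate web services before the GA runs, shrinking its initial population.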


2002 ◽  
Vol 27 (4) ◽  
pp. 341-384 ◽  
Author(s):  
Richard J. Patz ◽  
Brian W. Junker ◽  
Matthew S. Johnson ◽  
Louis T. Mariano

Open-ended or “constructed” student responses to test items have become a stock component of standardized educational assessments. Digital imaging of examinee work now enables a distributed rating process to be flexibly managed, and allocation designs that involve as many as six or more ratings for a subset of responses are now feasible. In this article we develop Patz’s (1996) hierarchical rater model (HRM) for polytomous item response data scored by multiple raters, and show how it can be used to scale examinees and items, to model aspects of consensus among raters, and to model individual rater severity and consistency effects. The HRM treats examinee responses to open-ended items as unobserved discrete variables, and it explicitly models the “proficiency” of raters in assigning accurate scores as well as the proficiency of examinees in providing correct responses. We show how the HRM “fits in” to the generalizability theory framework that has been the traditional tool of analysis for rated item response data, and give some relationships between the HRM, the design effects correction of Bock, Brennan, and Muraki (1999), and the rater bundle model of Wilson and Hoskens (2002). Using simulated and real data, we compare the HRM to the conventional IRT Facets model for rating data (e.g., Linacre, 1989; Engelhard, 1994, 1996), and we explore ways that information from HRM analyses may improve the quality of the rating process.


Methodology ◽  
2006 ◽  
Vol 2 (4) ◽  
pp. 142-148 ◽  
Author(s):  
Pere J. Ferrando

In the IRT person-fluctuation model, the individual trait levels fluctuate within a single test administration whereas the items have fixed locations. This article studies the relations between the person and item parameters of this model and two central properties of item and test scores: temporal stability and external validity. For temporal stability, formulas are derived for predicting and interpreting item response changes in a test-retest situation on the basis of the individual fluctuations. As for validity, formulas are derived for obtaining disattenuated estimates and for predicting changes in validity in groups with different levels of fluctuation. These latter formulas are related to previous research in the person-fit domain. The results obtained and the relations discussed are illustrated with an empirical example.
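The article's disattenuation formulas are specific to the person-fluctuation model, but they build on the classical Spearman correction for attenuation, which as a point of reference reads:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Classical Spearman correction for attenuation: estimates the
    true-score correlation from an observed correlation r_xy and the
    reliabilities rel_x and rel_y of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)
```

In the person-fluctuation setting, within-administration trait fluctuation is an additional source of attenuation, so the model-specific corrections account for it alongside ordinary unreliability.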

