A Deterministic Learning Algorithm Estimating the Q-Matrix for Cognitive Diagnosis Models

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3062
Author(s):  
Meng-Ta Chung ◽  
Shui-Lien Chen

The goal of an exam in cognitive diagnostic assessment is to uncover whether an examinee has mastered certain attributes. Different cognitive diagnosis models (CDMs) have been developed for this purpose. The core of these CDMs is the Q-matrix, which is an item-to-attribute mapping, traditionally designed by domain experts. An expert-designed Q-matrix is not without issues. For example, domain experts might neglect some attributes or have different opinions about the inclusion of some entries in the Q-matrix. It is therefore of practical importance to develop an automated method to estimate the Q-matrix. This research proposes a deterministic learning algorithm for estimating the Q-matrix. To obtain a sensible binary Q-matrix, a dichotomizing method is also devised. Results from the simulation study show that the proposed method for estimating the Q-matrix is useful. The empirical study analyzes the ECPE data. The estimated Q-matrix is compared with the expert-designed one. All analyses in this research are carried out in R.
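The abstract does not specify the dichotomizing method, but a common way to turn a real-valued Q-matrix estimate into a binary one is entrywise thresholding. The sketch below illustrates that generic idea only; the threshold value and the rule guaranteeing at least one attribute per item are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def dichotomize(q_hat, threshold=0.5):
    """Map a real-valued Q-matrix estimate to a binary Q-matrix.

    Each entry >= threshold becomes 1 (attribute required), else 0.
    A row that would become all-zero keeps its single largest entry,
    so that every item requires at least one attribute.
    """
    q_bin = (q_hat >= threshold).astype(int)
    for j in range(q_bin.shape[0]):
        if q_bin[j].sum() == 0:  # no attribute survived the cut
            q_bin[j, np.argmax(q_hat[j])] = 1
    return q_bin

# Example: 3 items x 2 attributes
q_hat = np.array([[0.91, 0.12],
                  [0.40, 0.77],
                  [0.30, 0.45]])
print(dichotomize(q_hat))  # rows: [1 0], [0 1], [0 1]
```

The fallback rule in the loop reflects a standard identifiability requirement: a Q-matrix row of all zeros would describe an item that measures nothing.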

2019 ◽  
Vol 79 (4) ◽  
pp. 727-753 ◽  
Author(s):  
Pablo Nájera ◽  
Miguel A. Sorrel ◽  
Francisco José Abad

Cognitive diagnosis models (CDMs) are latent class multidimensional statistical models that help classify people accurately by using a set of discrete latent variables, commonly referred to as attributes. These models require a Q-matrix that indicates the attributes involved in each item. A potential problem is that the Q-matrix construction process, typically performed by domain experts, is subjective in nature. This might lead to Q-matrix misspecifications that can in turn lead to inaccurate classifications. For this reason, several empirical Q-matrix validation methods have been developed in recent years. de la Torre and Chiu proposed one of the most popular methods, based on a discrimination index. However, some questions related to the usefulness of the method with empirical data remained open due to the restricted number of conditions examined and the use of a single cutoff point (EPS) regardless of the data conditions. This article includes two simulation studies that test this validation method under a wider range of conditions, with the purpose of improving its generalizability and of empirically determining the most suitable EPS given the data conditions. Results show a good overall performance of the method, the relevance of the different factors studied, and that using a single indiscriminate EPS is not acceptable. Specific guidelines for selecting an appropriate EPS are provided in the discussion.
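The discrimination index behind de la Torre and Chiu's method is the weighted variance of the latent classes' success probabilities on an item, and a candidate q-vector is accepted when the proportion of variance it accounts for (relative to the full q-vector) reaches the EPS cutoff. The following is a minimal sketch of that idea, assuming the class success probabilities and class weights are known; the function names are illustrative, not from the authors' software.

```python
import numpy as np
from itertools import product

def gdi(q, p_by_class, weights, patterns):
    """Discrimination index under candidate q-vector `q`: weighted
    between-group variance of item success probabilities, grouping
    latent classes by their pattern on the attributes q requires."""
    groups = {}
    for c, alpha in enumerate(patterns):
        groups.setdefault(tuple(alpha[q == 1]), []).append(c)
    p_bar = np.dot(weights, p_by_class)  # grand mean
    var = 0.0
    for idx in groups.values():
        w = weights[idx].sum()
        p_g = np.dot(weights[idx], p_by_class[idx]) / w  # group mean
        var += w * (p_g - p_bar) ** 2
    return var

def validate_item(p_by_class, weights, patterns, eps=0.95):
    """Return the simplest q-vector whose proportion of variance
    accounted for (relative to the full q-vector) reaches `eps`."""
    K = patterns.shape[1]
    full = gdi(np.ones(K, int), p_by_class, weights, patterns)
    cands = sorted((np.array(q) for q in product([0, 1], repeat=K) if any(q)),
                   key=lambda q: q.sum())
    for q in cands:
        if gdi(q, p_by_class, weights, patterns) / full >= eps:
            return q
    return np.ones(K, int)

patterns = np.array(list(product([0, 1], repeat=2)))  # 00, 01, 10, 11
weights = np.full(4, 0.25)
# Item that truly requires only the first attribute (DINA-like):
p = np.array([0.2, 0.2, 0.9, 0.9])
print(validate_item(p, weights, patterns))  # -> [1 0]
```

With a single EPS such as 0.95 the toy example recovers the true q-vector, but the article's point is precisely that the best cutoff depends on the data conditions.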


2018 ◽  
Vol 44 (1) ◽  
pp. 3-24 ◽  
Author(s):  
Steven Andrew Culpepper ◽  
Yinghan Chen

Exploratory cognitive diagnosis models (CDMs) estimate the Q matrix, which is a binary matrix that indicates the attributes needed for affirmative responses to each item. Estimation of Q is an important next step for improving classifications and broadening application of CDMs. Prior research primarily focused on an exploratory version of the restrictive deterministic-input, noisy-and-gate model, and research is needed to develop exploratory methods for more flexible CDMs. We consider Bayesian methods for estimating an exploratory version of the more flexible reduced reparameterized unified model (rRUM). We show that estimating the rRUM Q matrix is complicated by a confound between elements of Q and the rRUM item parameters. A Bayesian framework is presented that accurately recovers Q using a spike–slab prior for item parameters to select the required attributes for each item. We present Monte Carlo simulation studies, demonstrating the developed algorithm improves upon prior Bayesian methods for estimating the rRUM Q matrix. We apply the developed method to the Examination for the Certificate of Proficiency in English data set. The results provide evidence of five attributes with a partially ordered attribute hierarchy.
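The attribute-selection idea behind a spike-slab prior can be seen in a much simpler setting than the rRUM sampler. The toy below is a normal-means problem, not the authors' algorithm: each observation either comes from a point mass at zero (spike) or from a diffuse normal (slab), and the posterior probability of the slab plays the role of "is this attribute required by this item?". All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def inclusion_prob(y, w=0.5, tau2=4.0):
    """Posterior probability that each y_k ~ N(theta_k, 1) was drawn
    under the slab (theta_k ~ N(0, tau2)) rather than the spike
    (theta_k = 0), with prior slab weight w."""
    # marginal likelihood under the spike: theta fixed at 0
    m_spike = np.exp(-0.5 * y**2) / np.sqrt(2 * np.pi)
    # marginal likelihood under the slab: theta integrated out
    s2 = 1.0 + tau2
    m_slab = np.exp(-0.5 * y**2 / s2) / np.sqrt(2 * np.pi * s2)
    return w * m_slab / (w * m_slab + (1 - w) * m_spike)

y = np.array([0.1, 3.2, -0.4, 2.5])
print(np.round(inclusion_prob(y), 3))
```

Observations near zero get a low inclusion probability (spike wins), while large observations get a high one (slab wins); in the exploratory Q-matrix setting, thresholding these posterior probabilities yields the binary entries of Q.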


SAGE Open ◽  
2019 ◽  
Vol 9 (1) ◽  
pp. 215824401983268 ◽  
Author(s):  
Ragip Terzi ◽  
Sedat Sen

Large-scale assessments are generally designed for summative purposes to compare achievement among participating countries. However, these nondiagnostic assessments have also been adapted in the context of cognitive diagnostic assessment for diagnostic purposes. Following the large amount of investments in these assessments, it would be cost-effective to draw finer-grained inferences about the attribute mastery. Nonetheless, the correctness of attribute specifications in the Q-matrix has not been verified, despite being designed by domain experts. Furthermore, the underlying process of TIMSS (Trends in International Mathematics and Science Study) assessment is unknown as it was not developed for diagnostic purposes. Thus, this study suggests an initial validating attribute specifications in the Q-matrix and thereafter defining specific reduced or saturated models for each item. In doing so, the two analyses were validated across 20 countries that were selected randomly for TIMSS 2011 data. Results show that attribute specifications can differ from expert opinions and the underlying model for each item can vary.


2017 ◽  
Vol 43 (1) ◽  
pp. 88-115 ◽  
Author(s):  
Michel Philipp ◽  
Carolin Strobl ◽  
Jimmy de la Torre ◽  
Achim Zeileis

Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model assumptions. However, they require a precise estimation of the standard errors (or the entire covariance matrix) of the model parameter estimates. In this article, it is shown analytically that the currently widely used form of calculation leads to underestimated standard errors because it only includes the item parameters but omits the parameters for the ability distribution. In a simulation study, we demonstrate that including those parameters in the computation of the covariance matrix consistently improves the quality of the standard errors. The practical importance of this finding is discussed and illustrated using a real data example.
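The mechanism behind the underestimation can be sketched with a toy Fisher information matrix (the numbers below are made up for illustration). Partition the information into an item-parameter block and an ability-distribution block; the correct covariance of the item parameter estimates is the corresponding block of the inverse of the *full* matrix, which by the Schur complement is never smaller than the inverse of the item block alone whenever the cross-information is nonzero.

```python
import numpy as np

# Toy Fisher information for (item parameters beta, distribution parameters pi).
# I = [[I_bb, I_bp], [I_bp', I_pp]]. The covariance of beta-hat is the top-left
# block of I^{-1}, which equals (I_bb - I_bp I_pp^{-1} I_bp')^{-1} and is
# elementwise no smaller on the diagonal than I_bb^{-1} when I_bp != 0.
I_bb = np.array([[10.0, 2.0], [2.0, 8.0]])  # item-parameter block
I_bp = np.array([[3.0], [1.0]])             # cross-information
I_pp = np.array([[5.0]])                    # distribution-parameter block

I_full = np.block([[I_bb, I_bp], [I_bp.T, I_pp]])

se_naive = np.sqrt(np.diag(np.linalg.inv(I_bb)))       # omits pi: too small
se_full = np.sqrt(np.diag(np.linalg.inv(I_full))[:2])  # includes pi

print(se_naive)
print(se_full)
```

The naive standard errors are systematically smaller, which is exactly the direction of bias the article establishes analytically for the item-parameters-only computation.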


Psych ◽  
2021 ◽  
Vol 3 (4) ◽  
pp. 812-835
Author(s):  
Qingzhou Shi ◽  
Wenchao Ma ◽  
Alexander Robitzsch ◽  
Miguel A. Sorrel ◽  
Kaiwen Man

Cognitive diagnosis models (CDMs) have increasingly been applied in education and other fields. This article provides an overview of a widely used CDM, namely, the G-DINA model, and demonstrates a hands-on example of using multiple R packages for a series of CDM analyses. This overview involves a step-by-step illustration and explanation of performing Q-matrix evaluation, CDM calibration, model fit evaluation, item diagnosticity investigation, classification reliability examination, and the result presentation and visualization. Some limitations of conducting CDM analysis in R are also discussed.
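The article's analyses are carried out with R packages, but the saturated G-DINA item response function itself is easy to state in any language: with the identity link, the success probability is an intercept plus main effects for each required attribute plus all of their interaction effects. The sketch below illustrates that decomposition; the delta values are made-up numbers, not estimates from the article.

```python
def gdina_prob(alpha, delta):
    """Saturated G-DINA success probability (identity link) for one item.

    `alpha` is the examinee's attribute pattern on the item's required
    attributes; `delta` maps attribute-index tuples to effects, e.g.
    {(): intercept, (0,): main effect of attribute 1, (0, 1): interaction}.
    An effect contributes only if all attributes in its tuple are mastered.
    """
    return sum(d for idx, d in delta.items()
               if all(alpha[k] == 1 for k in idx))

# Item requiring two attributes (illustrative effects):
delta = {(): 0.1, (0,): 0.3, (1,): 0.2, (0, 1): 0.2}
for alpha in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(alpha, gdina_prob(alpha, delta))
# P rises from 0.1 (neither attribute) to 0.8 (both attributes)
```

Constraining some of these effects to zero yields the reduced models (e.g., DINA retains only the intercept and the highest-order interaction), which is why the G-DINA framework subsumes them.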


Methodology ◽  
2014 ◽  
Vol 10 (3) ◽  
pp. 100-107 ◽  
Author(s):  
Jürgen Groß ◽  
Ann Cathrice George

When a psychometric test has been completed by a number of examinees, an afterward analysis of required skills or attributes may improve the extraction of diagnostic information. Relying upon the retrospectively specified item-by-attribute matrix, such an investigation may be carried out by classifying examinees into latent classes, consisting of subsets of required attributes. Specifically, various cognitive diagnosis models may be applied to serve this purpose. In this article it is shown that permitting all possible attribute combinations as latent classes can have an undesired effect on the classification process, and it is demonstrated how an appropriate elimination of specific classes may improve the classification results. As an easy example, the popular deterministic input, noisy "and" gate (DINA) model is applied to Tatsuoka's famous fraction subtraction data, and results are compared to current discussions in the literature.
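The DINA model referenced here has a simple response rule: an examinee answers an item correctly with probability 1 - slip if they master every attribute the Q-matrix requires for that item, and with the guessing probability otherwise. A minimal sketch (the Q-matrix and slip/guess values are illustrative, not Tatsuoka's data):

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA success probabilities for one examinee across all items.

    eta_j = 1 iff the examinee's pattern `alpha` covers every attribute
    item j requires (row j of Q); then P(correct) = 1 - slip_j,
    otherwise P(correct) = guess_j.
    """
    eta = np.all(alpha >= q, axis=1)  # per-item mastery indicator
    return np.where(eta, 1 - slip, guess)

q = np.array([[1, 0],    # item 1 requires attribute 1 only
              [1, 1]])   # item 2 requires both attributes
slip = np.array([0.1, 0.2])
guess = np.array([0.2, 0.1])

print(dina_prob(np.array([1, 0]), q, slip, guess))  # -> [0.9 0.1]
print(dina_prob(np.array([1, 1]), q, slip, guess))  # -> [0.9 0.8]
```

Because the conjunctive eta collapses many attribute patterns onto the same expected response vector, some latent classes are empirically indistinguishable for a given Q-matrix, which is one way to motivate the class elimination the article investigates.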

