Item Response Theory Models for the Fuzzy TOPSIS in the Analysis of Survey Data

Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 223
Author(s):  
Bartłomiej Jefmański ◽  
Adam Sagan

The fuzzy TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is an attractive tool for measuring complex phenomena based on uncertain data. The original version of the method assumes that the object assessments in terms of the adopted criteria are expressed as triangular fuzzy numbers. One of the crucial stages of the fuzzy TOPSIS is selecting the fuzzy conversion scale used to evaluate objects in terms of the adopted criteria. The choice of a fuzzy conversion scale may influence the results of the fuzzy TOPSIS, yet there is no uniform approach to constructing and selecting such scales; the choice is subjective and made by researchers. Therefore, the aim of the article is to present a new, objective approach to the construction of fuzzy conversion scales based on Item Response Theory (IRT) models. The following models were used in the construction of fuzzy conversion scales: the Polychoric Correlation Model (PM), the Polytomous Rasch Model (PRM), the Rating Scale Model (RSM), the Partial Credit Model (PCM), the Generalized Partial Credit Model (GPCM), the Graded Response Model (GRM), and the Nominal Response Model (NRM). The usefulness of the proposed approach is illustrated with an analysis of survey results on the quality of professional life of inhabitants of selected communes in Poland. The obtained results indicate that the choice of the fuzzy conversion scale has a large impact on the closeness coefficient values. A large difference was also observed in the spreads of triangular fuzzy numbers between scales based on IRT models and those used in the literature on the subject. The use of the fuzzy TOPSIS with fuzzy conversion scales built on the PRM, RSM, PCM, GPCM, and GRM gives results with a greater range of variability than fuzzy conversion scales used to date in empirical research.
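The closeness coefficient at the heart of the fuzzy TOPSIS can be sketched compactly. The following minimal Python illustration assumes benefit-type criteria already normalized to [0, 1] and the common vertex distance for triangular fuzzy numbers; function and variable names are ours for exposition, not the article's:

```python
import numpy as np

def fuzzy_topsis_closeness(ratings, weights):
    """Closeness coefficients for a fuzzy TOPSIS sketch.

    ratings: array (n_alternatives, n_criteria, 3) of triangular
             fuzzy numbers (l, m, u), already normalized to [0, 1].
    weights: array (n_criteria, 3) of triangular fuzzy weights.
    """
    # Weighted normalized fuzzy decision matrix (element-wise product).
    v = ratings * weights[np.newaxis, :, :]
    # Fuzzy positive/negative ideal solutions for benefit criteria.
    fpis = np.array([1.0, 1.0, 1.0])
    fnis = np.array([0.0, 0.0, 0.0])

    def d(a, b):
        # Vertex distance between two triangular fuzzy numbers.
        return np.sqrt(((a - b) ** 2).sum(axis=-1) / 3.0)

    d_plus = d(v, fpis).sum(axis=1)   # distance to the ideal
    d_minus = d(v, fnis).sum(axis=1)  # distance to the anti-ideal
    return d_minus / (d_plus + d_minus)  # closeness coefficient CC_i
```

An alternative rated close to (1, 1, 1) on all criteria receives a closeness coefficient near 1; one rated near (0, 0, 0) receives a value near 0.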

2017 ◽  
Vol 28 (67) ◽  
pp. 236
Author(s):  
Eduardo Vargas Ferreira ◽  
Caio Lucidius Naberezny Azevedo

<p><strong>Contributions to the Study of the Generalized Partial Credit Model</strong></p><p>This article covers the most important inferential aspects of the Generalized Partial Credit Model (GPCM) of Item Response Theory (IRT). It presents a study of one of the main difficulties encountered in the estimation and inference of IRT models: the lack of identifiability. In addition, it presents the interpretation of the model parameters and of the item and test information functions.</p><p><strong>Keywords:</strong> Item Response Theory; Polytomous Models; Generalized Partial Credit Model; Psychometrics.</p>
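The GPCM's category response probabilities take a softmax form over cumulative step terms. A minimal Python sketch, using the common parameterization with a discrimination parameter a and step parameters b_v (names chosen here for exposition, not taken from the article):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category response probabilities under the GPCM.

    theta: latent trait value.
    a: item discrimination.
    b: array of step parameters b_1..b_m.
    Returns probabilities for categories 0..m.
    """
    # Cumulative sums of a*(theta - b_v); category 0 contributes 0.
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expd = np.exp(steps - steps.max())  # numerically stabilized softmax
    return expd / expd.sum()
```

With symmetric step parameters around theta, the extreme categories receive equal probability and the middle category dominates.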


2014 ◽  
Vol 22 (2) ◽  
pp. 323-341 ◽  
Author(s):  
Dheeraj Raju ◽  
Xiaogang Su ◽  
Patricia A. Patrician

Background and Purpose: The purpose of this article is to introduce different types of item response theory models and to demonstrate their usefulness by evaluating the Practice Environment Scale. Methods: Item response theory models, including the constrained and unconstrained graded response models, the partial credit model, the Rasch model, and the one-parameter logistic model, are demonstrated. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) indices are used as model selection criteria. Results: The unconstrained graded response and partial credit models indicated the best fit for the data. Almost all items in the instrument performed well. Conclusions: Although most of the items strongly measure the construct, there are a few items that could be eliminated without substantially altering the instrument. The analysis revealed that the instrument may function differently when administered to different unit types.
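AIC and BIC, used above as model selection criteria, are simple functions of the maximized log-likelihood and the parameter count. A minimal Python sketch of the comparison step (the fit values below are hypothetical, not from the article):

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: penalizes each parameter by log(n)."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def select_model(fits, n_obs):
    """Pick the model with the lowest BIC from {name: (loglik, n_params)}."""
    return min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n_obs))
```

BIC's log(n) penalty favors sparser models than AIC as the sample grows, which is why the two criteria can disagree on instruments with many polytomous items.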


2001 ◽  
Vol 26 (4) ◽  
pp. 361-380 ◽  
Author(s):  
Rebecca Holman ◽  
Martijn P. F. Berger

This article examines calibration designs, which maximize the determinant of Fisher’s information matrix on the item parameters (D-optimal), for sets of polytomously scored items. These items were analyzed using a number of item response theory (IRT) models, which are members of the “divide-by-total” family, including the nominal categories model, the rating scale model, the unidimensional polytomous Rasch model, and the partial credit model. We extend the known results for dichotomous items, both singly and in tests, to polytomous items. The structure of Fisher’s information matrix is examined in order to gain insights into the structure of D-optimal calibration designs for IRT models. A theorem giving an upper bound for the number of support points for such models is proved. A lower bound is also given. Finally, we examine a set of items that have been analyzed using a number of different models. The locally D-optimal calibration design for each analysis is calculated using an exact numerical and a sequential procedure. The results are discussed both in general and in relation to each other.
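In the simplest dichotomous case underlying these designs, the Fisher information of a Rasch item reduces to P(1 − P). A minimal Python sketch of that building block (illustrative only; the article's results concern the polytomous, matrix-valued case):

```python
import numpy as np

def rasch_item_information(theta, b):
    """Fisher information of a dichotomous Rasch item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)  # I(theta) = P(1 - P)

# For a single dichotomous Rasch item, information about the difficulty b
# peaks when examinees sit at theta = b (where P = 0.5), which is where a
# locally D-optimal one-point calibration design places its support.
```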


2017 ◽  
Vol 78 (3) ◽  
pp. 384-408 ◽  
Author(s):  
Yong Luo ◽  
Hong Jiao

Stan is a new Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date, no source has systematically provided Stan code for various item response theory (IRT) models. This article provides Stan code for three representative IRT models: the three-parameter logistic IRT model, the graded response model, and the nominal response model. We demonstrate how IRT model comparison can be conducted with Stan and how the provided Stan code for simple IRT models can be easily extended to their multidimensional and multilevel cases.
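The article's Stan code is not reproduced here. As a point of reference, the three-parameter logistic model it covers has the following response function, sketched in Python rather than Stan (names illustrative):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model.

    theta: ability; a: discrimination; b: difficulty;
    c: lower asymptote (pseudo-guessing parameter).
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
```

At theta = b the probability is the midpoint c + (1 − c)/2, and as theta decreases the curve flattens onto the guessing floor c; this is the likelihood kernel a Stan model block would express.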


Diagnostica ◽  
2005 ◽  
Vol 51 (2) ◽  
pp. 88-100 ◽  
Author(s):  
Otto B. Walter ◽  
Janine Becker ◽  
Herbert Fliege ◽  
Jakob Bjorner ◽  
Mark Kosinski ◽  
...  

Abstract. Psychological characteristics are usually assessed with instruments developed on the basis of classical test theory. Since the 1960s, Item Response Theory (IRT) has offered an alternative that promises several advantages. Among other things, it allows the development of computer-adaptive tests (CATs), which adapt the selection of presented items to the patient's response behavior and should thereby achieve higher measurement precision with a reduced number of items. We undertook several steps toward developing a CAT for measuring anxiety in order to examine whether the theoretical advantages of IRT hold up in practical application. This paper describes the development of the underlying item bank. Data were used from N = 2348 patients who, between 1995 and 2001, had completed a comprehensive set of established conventional questionnaires by computer as part of routine diagnostics at the Department of Psychosomatic Medicine of the Charité. These included 81 items that an expert rating deemed relevant to the anxiety construct. The properties of these items were examined by means of their residual correlations after confirmatory factor analysis (Mplus™), their response category functions (Testgraf™), and their discrimination (Parscale™). Fifty items remained that can be considered suitable for the application of a polytomous two-parameter model (Generalized Partial Credit Model). Targeting a reliability of ρ ≥ .90 and setting a standard error of ≤ .32 for the computer-adaptive test algorithm, simulation studies show that the anxiety trait level within ± 2 standard deviations around the sample mean can be determined with about 7 items.
Moreover, the simulation studies suggest that the CAT algorithm differentiates the trait better at the upper and lower levels than the conventionally computed sum scale of the STAI (State).


2019 ◽  
Vol 80 (4) ◽  
pp. 726-755 ◽  
Author(s):  
Jinho Kim ◽  
Mark Wilson

This study investigates polytomous item explanatory item response theory models under the multivariate generalized linear mixed modeling framework, using the linear logistic test model approach. Building on the original ideas of the many-facet Rasch model and the linear partial credit model, a polytomous Rasch model is extended to the item location explanatory many-facet Rasch model and the step difficulty explanatory linear partial credit model. To demonstrate the practical differences between the two polytomous item explanatory approaches, two empirical studies examine how item properties explain and predict the overall item difficulties or the step difficulties, respectively, in the Carbon Cycle assessment data and in the Verbal Aggression data. The results suggest that the two polytomous item explanatory models are methodologically and practically different in terms of (a) the target difficulty parameters of polytomous items, which are explained by item properties; (b) the types of predictors for the item properties incorporated into the design matrix; and (c) the types of item property effects. The potentials and methodological advantages of item explanatory modeling are discussed as well.
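The linear logistic test model approach mentioned above decomposes item (or step) difficulties into a linear combination of item property effects. A minimal sketch of that reconstruction, with an assumed design matrix Q and effect vector eta (both hypothetical, for illustration only):

```python
import numpy as np

def lltm_difficulties(Q, eta):
    """Item difficulties reconstructed from item properties (LLTM idea).

    Q: design matrix (n_items, n_properties) coding which properties
       each item carries.
    eta: vector of item property effects.
    Returns the model-implied difficulty for each item.
    """
    return np.asarray(Q) @ np.asarray(eta)
```

The two approaches contrasted in the study differ in whether Q targets the overall item locations or the individual step difficulties of polytomous items.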


2021 ◽  
Vol 117 ◽  
pp. 106849
Author(s):  
Danilo Carrozzino ◽  
Kaj Sparle Christensen ◽  
Giovanni Mansueto ◽  
Fiammetta Cosci

2021 ◽  
pp. 001316442199841
Author(s):  
Pere J. Ferrando ◽  
David Navarro-González

Item response theory “dual” models (DMs), in which both items and individuals are viewed as sources of differential measurement error, have so far been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for (approximately) continuous responses, and the M-DTGRM (dual Thurstonian graded response model), intended for ordered-categorical responses (including binary). A rationale for the extension to the multiple-content-dimensions case, based on the concept of the multidimensional location index, is first proposed and discussed. Then, the models are described using both the factor-analytic and the item response theory parameterizations. Procedures for (a) calibrating the items, (b) scoring individuals, (c) assessing model appropriateness, and (d) assessing measurement precision are finally discussed. The simulation results suggest that the proposal is quite feasible, and an illustrative example based on personality data is also provided. The proposals are expected to be of particular interest for multidimensional questionnaires in which the number of items per scale would not be enough to arrive at stable estimates if the existing unidimensional DMs were fitted on a separate-scale basis.


2021 ◽  
Vol 8 (3) ◽  
pp. 672-695
Author(s):  
Thomas DeVaney

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to common scaling models such as Rasch and Guttman scaling. The procedure can be used for the dichotomous and ordinal polytomous data commonly produced by questionnaires. The assumptions of MSA are discussed, as well as the characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio and a data set of over 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
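Scalability in MSA is driven by Loevinger's coefficient H, which compares observed Guttman errors with those expected under item independence. A minimal Python sketch for dichotomous items (the article itself works in R with the mokken package; this translation is ours, with a Guttman error defined as passing the harder item while failing the easier one):

```python
import numpy as np

def mokken_H(X):
    """Loevinger's scalability coefficient H for dichotomous items.

    X: binary response matrix (n_persons, n_items).
    H = 1 means no Guttman errors (a perfect Guttman scale);
    H = 0 means as many errors as expected under independence.
    """
    X = np.asarray(X)
    n, k = X.shape
    p = X.mean(axis=0)  # item popularities (proportion passing)
    F = E = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Observed Guttman errors: pass the hard item, fail the easy one.
            F += np.sum((X[:, hard] == 1) & (X[:, easy] == 0))
            # Expected errors under independence of the two items.
            E += n * p[hard] * (1.0 - p[easy])
    return 1.0 - F / E
```

A common rule of thumb treats H ≥ .3 as the minimum for a usable Mokken scale, with higher thresholds for strong scales.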


2021 ◽  
pp. 43-48
Author(s):  
Rosa Fabbricatore ◽  
Francesco Palumbo

Evaluating learners' competencies is a crucial concern in education, and home and classroom structured tests represent an effective assessment tool. Structured tests consist of sets of items that can refer to several abilities or more than one topic. Several statistical approaches allow evaluating students while considering the items in a multidimensional way, accounting for their structure. Depending on the evaluation's end goal, the assessment process assigns a final grade to each student or clusters students into homogeneous groups according to their level of mastery and ability. The latter is a helpful tool for developing tailored recommendations and remediations for each group. To this aim, latent class models serve as a reference. In the item response theory (IRT) paradigm, multidimensional latent class IRT models, which relax both the traditional constraints of unidimensionality and the continuous nature of the latent trait, allow detecting sub-populations of homogeneous students according to their proficiency level while also accounting for the multidimensional nature of their ability. Moreover, the semi-parametric formulation leads to several advantages in practice: it avoids normality assumptions that may not hold and reduces the computational demand. This study compares the results of multidimensional latent class IRT models with those obtained by a two-step procedure, which consists of first fitting a multidimensional IRT model to estimate students' ability and then applying a clustering algorithm to classify students accordingly. For the latter, parametric and non-parametric approaches were considered. Data refer to the admission test for the degree course in psychology administered in 2014 at the University of Naples Federico II. The students involved were N = 944, and their ability dimensions were defined according to the domains assessed by the entrance exam, namely Humanities, Reading and Comprehension, Mathematics, Science, and English.
In particular, a multidimensional two-parameter logistic IRT model for dichotomously-scored items was considered for students' ability estimation.
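The multidimensional two-parameter logistic model used here for ability estimation has a simple compensatory response function. A minimal Python sketch (parameterization and names are illustrative, not taken from the study):

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """Correct-response probability under a compensatory multidimensional 2PL.

    theta: ability vector (one entry per dimension, e.g., Humanities,
           Reading, Mathematics, Science, English).
    a: discrimination (slope) vector of the same length.
    d: scalar item intercept (easiness).
    """
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))
```

"Compensatory" means a high ability on one dimension can offset a low ability on another, since the dimensions enter through the single linear combination a·theta + d.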

