Mokken Scale Analysis: Discussion and Application

2021, Vol. 8(3), pp. 672-695
Author(s): Thomas DeVaney

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to common IRT models such as Rasch and Guttman scaling. The procedure can be used for dichotomous and ordinal polytomous data commonly collected with questionnaires. The assumptions of MSA are discussed, as are the characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio and a data set that included over 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
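The scalability that MSA assesses is usually quantified with Loevinger's H coefficient. The sketch below is illustrative only (the article itself uses the R mokken package; the function name `pairwise_h` is my own): for a dichotomous item pair, H is one minus the ratio of observed Guttman errors (passing the harder item while failing the easier one) to the errors expected under independence.

```python
import numpy as np

def pairwise_h(x, y):
    """Loevinger's H for a dichotomous item pair: 1 - observed/expected
    Guttman errors. A Guttman error is passing the harder item while
    failing the easier one."""
    x, y = np.asarray(x), np.asarray(y)
    if x.mean() < y.mean():        # make x the easier (more popular) item
        x, y = y, x
    n = len(x)
    observed = np.sum((x == 0) & (y == 1))       # Guttman error count
    expected = n * (1 - x.mean()) * y.mean()     # errors expected under independence
    return 1.0 - observed / expected

# A perfect Guttman pattern yields H = 1; Guttman errors push H toward 0
easy = [1, 1, 1, 1, 0, 0]
hard = [1, 1, 0, 0, 0, 0]
print(pairwise_h(easy, hard))   # 1.0
```

In Mokken scaling, items are conventionally retained when their scalability coefficients exceed a lower bound such as 0.3; the mokken package computes these coefficients (and their standard errors) for whole item sets, which this two-item sketch does not attempt.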

2003, Vol. 11(2), pp. 139-163
Author(s): Wijbrandt H. van Schuur

This article introduces a model of ordinal unidimensional measurement known as Mokken scale analysis. Mokken scaling is based on principles of Item Response Theory (IRT) that originated with the Guttman scale. I compare the Mokken model with both Classical Test Theory (reliability and factor analysis) and parametric IRT models (especially the one-parameter logistic model known as the Rasch model). Two nonparametric probabilistic versions of the Mokken model are described: the model of Monotone Homogeneity and the model of Double Monotonicity. I give procedures for dealing with both dichotomous and polytomous data, along with two scale analyses of data from the World Values Study that demonstrate the usefulness of the Mokken model.


2001 ◽  
Vol 26 (4) ◽  
pp. 361-380 ◽  
Author(s):  
Rebecca Holman ◽  
Martijn P. F. Berger

This article examines calibration designs, which maximize the determinant of Fisher’s information matrix on the item parameters (D-optimal), for sets of polytomously scored items. These items were analyzed using a number of item response theory (IRT) models, which are members of the “divide-by-total” family, including the nominal categories model, the rating scale model, the unidimensional polytomous Rasch model and the partial credit model. We extend the known results for dichotomous items, both singly and in tests to polytomous items. The structure of Fisher’s information matrix is examined in order to gain insights into the structure of D-optimal calibration designs for IRT models. A theorem giving an upper bound for the number of support points for such models is proved. A lower bound is also given. Finally, we examine a set of items, which have been analyzed using a number of different models. The locally D-optimal calibration design for each analysis is calculated using an exact numerical and a sequential procedure. The results are discussed both in general and in relation to each other.


2020 ◽  
Vol 80 (4) ◽  
pp. 665-694
Author(s):  
Ken A. Fujimoto ◽  
Sabina R. Neugebauer

Although item response theory (IRT) models such as the bifactor, two-tier, and between-item-dimensionality IRT models have been devised to confirm complex dimensional structures in educational and psychological data, they can be challenging to use in practice. The reason is that these models are multidimensional IRT (MIRT) models and thus are highly parameterized, making them only suitable for data provided by large samples. Unfortunately, many educational and psychological studies are conducted on a small scale, leaving the researchers without the necessary MIRT models to confirm the hypothesized structures in their data. To address the lack of modeling options for these researchers, we present a general Bayesian MIRT model based on adaptive informative priors. Simulations demonstrated that our MIRT model could be used to confirm a two-tier structure (with two general and six specific dimensions), a bifactor structure (with one general and six specific dimensions), and a between-item six-dimensional structure in rating scale data representing sample sizes as small as 100. Although our goal was to provide a general MIRT model suitable for smaller samples, the simulations further revealed that our model was applicable to larger samples. We also analyzed real data from 121 individuals to illustrate that the findings of our simulations are relevant to real situations.


2021 ◽  
Vol 117 ◽  
pp. 106849
Author(s):  
Danilo Carrozzino ◽  
Kaj Sparle Christensen ◽  
Giovanni Mansueto ◽  
Fiammetta Cosci

2017 ◽  
Vol 6 (4) ◽  
pp. 113
Author(s):  
Esin Yilmaz Kogar ◽  
Hülya Kelecioglu

The purpose of this research is to first estimate the item and ability parameters and the standard error values related to those parameters obtained from Unidimensional Item Response Theory (UIRT), bifactor (BIF) and Testlet Response Theory models (TRT) in the tests including testlets, when the number of testlets, number of independent items, and sample size change, and then to compare the obtained results. Mathematic test in PISA 2012 was employed as the data collection tool, and 36 items were used to constitute six different data sets containing different numbers of testlets and independent items. Subsequently, from these constituted data sets, three different sample sizes of 250, 500 and 1000 persons were selected randomly. When the findings of the research were examined, it was determined that, generally the lowest mean error values were those obtained from UIRT, and TRT yielded a mean of error estimation lower than that of BIF. It was found that, under all conditions, models which take into consideration the local dependency have provided a better model-data compatibility than UIRT, generally there is no meaningful difference between BIF and TRT, and both models can be used for those data sets. It can be said that when there is a meaningful difference between those two models, generally BIF yields a better result. In addition, it has been determined that, in each sample size and data set, item and ability parameters and correlations of errors of the parameters are generally high.


Sign in / Sign up

Export Citation Format

Share Document