Mokken Scale Analysis
Recently Published Documents

TOTAL DOCUMENTS: 71 (five years: 25)
H-INDEX: 17 (five years: 1)

2021, Vol. 12
Author(s): Mahdi Rezapour, Cristopher Veenstra, Kelly Cuccolo, F. Richard Ferraro

This study assessed the validity of an instrument covering various negative psychological and physical reactions of commuters to public transport delays. Such instruments have mostly been evaluated with the parametric methods of item response theory (IRT). However, IRT rests on restrictive assumptions about the data and focuses on detailed model-fit evaluation. Mokken scale analysis (MSA), by contrast, is a nonparametric scaling procedure that does not require adherence to any particular distribution. The results show that, in most respects, the instrument meets the minimum requirements highlighted by MSA. However, it did not meet the minimum scalability requirement for two items, “stomach pain” and “increased heart rate”, so modifications were proposed to address these violations. Although the MSA technique has been used frequently in other fields, this is one of the earliest studies to apply it in the context of transport psychology.
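
Loevinger's scalability coefficient H, the quantity behind MSA's scalability requirement, compares observed Guttman errors with the errors expected if items were unrelated. A minimal from-scratch sketch in Python (not the mokken R package; the function name and the dichotomous 0/1 input format are illustrative assumptions):

```python
import numpy as np

def loevinger_H(X):
    """Scale-level Loevinger H for a persons-by-items matrix of
    0/1 scores: H = 1 - (observed Guttman errors) / (expected
    errors under marginal independence), pooled over item pairs."""
    X = np.asarray(X)
    n, k = X.shape
    p = X.mean(axis=0)                       # item popularities
    observed = expected = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Guttman error: passing the harder item, failing the easier
            observed += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
            expected += n * (1 - p[easy]) * p[hard]
    return 1.0 - observed / expected
```

A perfect Guttman pattern yields H = 1; by common MSA conventions, coefficients below 0.3 mark items or scales as too weak, which is the kind of threshold scalability checks apply.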


2021, Vol. 64 (10), pp. 3983–3994
Author(s): Yu-Yu Hsiao, Cathy Huaqing Qi, Robert Hoy, Philip S. Dale, Glenda S. Stump, ...

Purpose This study examined the psychometric properties of the Preschool Language Scales–Fifth Edition (PLS-5 English) among preschool children from low–socioeconomic status (SES) families. Method The PLS-5 was administered individually to 169 3- to 4-year-old children enrolled in Head Start programs. We carried out a Mokken scale analysis (MSA), which is a nonparametric item response theory analysis, to examine the hierarchy among items and the reliability of test scores of the PLS-5 Auditory Comprehension (AC) and Expressive Communication (EC) scales. Results The PLS-5 EC items retained a moderate Mokken scale with the inclusion of all the items. On the other hand, the PLS-5 AC items formed a moderate Mokken scale only with the exclusion of five unscalable items. The latent class reliability coefficients for the AC and the EC scale scores were both above .90. Several items that violated the invariant item ordering assumption were found for both scales. Conclusions MSA can be used to examine the relationship between the latent language ability and the probability of passing an item with ordinal responses. Results indicate that for preschool children from low-SES families, it is appropriate to use the PLS-5 EC scale scores for comparing individuals' expressive language abilities; however, researchers and speech-language pathologists should be cautious when using the PLS-5 AC scale scores to evaluate individuals' receptive language abilities. Other implications of the MSA results are further discussed.
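
The invariant item ordering assumption mentioned in the Results requires that the items' difficulty ordering stays the same at every level of the latent trait. A rough manifest check for dichotomous items, sketched from scratch in Python (real analyses, such as check.iio in the mokken package, additionally use minimum group sizes and significance tests; this simplified count is an illustrative assumption):

```python
import numpy as np

def iio_violations(X):
    """Count manifest invariant-item-ordering violations in a
    persons-by-items 0/1 matrix: for each item pair, the item that
    is easier overall should stay at least as popular within every
    rest-score group (rest score = sum over the remaining items)."""
    X = np.asarray(X)
    p = X.mean(axis=0)
    k = X.shape[1]
    violations = 0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            rest = X.sum(axis=1) - X[:, i] - X[:, j]
            for r in np.unique(rest):
                group = rest == r
                if X[group, hard].mean() > X[group, easy].mean():
                    violations += 1
    return violations
```

A data set whose item ordering flips between low and high rest scores accumulates violations, which is the pattern the PLS-5 analysis flagged for several items.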


2021, pp. 001316442110453
Author(s): Stefanie A. Wind

Researchers frequently use Mokken scale analysis (MSA), a nonparametric approach to item response theory, when they have relatively small samples of examinees. Previous studies have provided some guidance on the minimum sample size for applications of MSA under various conditions. However, these studies have not focused on item-level measurement problems, such as violations of monotonicity or invariant item ordering (IIO), and they have considered only problems that occur across the complete sample of examinees. The current study uses simulations to examine the sensitivity of MSA item-analysis procedures to problematic item characteristics that occur within limited ranges of the latent variable. Results generally support the use of MSA with small samples (N around 100 examinees) as long as multiple indicators of item quality are considered.
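
One of the item-level problems examined here, a monotonicity violation, occurs when an item's probability of a positive response decreases as the rest score (the total on the remaining items) increases. A bare-bones Python sketch of the manifest check (the mokken package's check.monotonicity also applies minimum group sizes and standard errors; this simplified count is an assumption for illustration):

```python
import numpy as np

def monotonicity_violations(X):
    """Count manifest monotonicity violations in a persons-by-items
    0/1 matrix: for each item, the proportion of positive responses
    should be non-decreasing across increasing rest-score groups."""
    X = np.asarray(X)
    violations = 0
    for i in range(X.shape[1]):
        rest = X.sum(axis=1) - X[:, i]
        props = [X[rest == r, i].mean() for r in np.unique(rest)]
        # count adjacent decreases in the empirical item response function
        violations += sum(1 for a, b in zip(props, props[1:]) if b < a)
    return violations
```
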


Author(s): Daniela R. Crișan, Jorge N. Tendeiro, Rob R. Meijer

Purpose In Mokken scaling, the Crit index was proposed and is sometimes used as evidence (or lack thereof) of violations of common model assumptions. The main goal of our study was twofold: to make the formulation of the Crit index explicit and accessible, and to investigate its distribution under various measurement conditions. Methods We conducted two simulation studies in the context of dichotomously scored item responses. We manipulated the type of assumption violation, the proportion of violating items, sample size, and quality. False positive rates and power to detect assumption violations were our main outcome variables. Furthermore, we applied the Crit coefficient in a Mokken scale analysis of responses to the General Health Questionnaire (GHQ-12), a self-administered questionnaire for assessing current mental health. Results We found that the false positive rates of Crit were close to the nominal rate in most conditions, and that power to detect misfit depended on sample size, type of violation, and the number of assumption-violating items. Overall, in small samples Crit lacked the power to detect misfit, and in larger samples power differed considerably depending on the type of violation and proportion of misfitting items. Our empirical example likewise showed that even in large samples the Crit index may fail to detect assumption violations. Discussion Even in large samples, the Crit coefficient showed limited usefulness for detecting moderate and severe violations of monotonicity. Our findings are relevant to researchers and practitioners who use Mokken scaling for scale and questionnaire construction and revision.
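
Simulation studies of this kind typically generate dichotomous item responses from a model that satisfies the Mokken assumptions, then inject violations and record flagging rates. A minimal Python sketch of the null-condition generator under a Rasch model (the generating model and parameter values here are illustrative assumptions, not the authors' exact design):

```python
import numpy as np

def simulate_rasch(n_persons, difficulties, rng):
    """Simulate dichotomous responses from a Rasch model,
    P(X=1 | theta) = 1 / (1 + exp(-(theta - b))), with standard
    normal person parameters theta and item difficulties b."""
    theta = rng.standard_normal((n_persons, 1))
    b = np.asarray(difficulties, dtype=float)[None, :]
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return (rng.random(p.shape) < p).astype(int)
```

Because Rasch data satisfy the Mokken assumptions, flagging rates observed on such data estimate false positives; injecting a violating item (e.g. a non-monotone one) into the same design then estimates power.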


2021
Author(s): Ina Saliasi, Prescilla Martinon, Emily Darlington, Colette Smentek, Delphine Tardivo, ...

BACKGROUND In recent decades, the number of apps promoting health behaviors and health-related strategies and interventions has increased alongside the number of smartphone users. Nevertheless, the validation process for measuring and reporting app quality remains unsatisfactory for health professionals and end users and represents a public health concern. The Mobile Application Rating Scale (MARS) is a validated tool, widely used in the scientific literature, for evaluating and comparing mHealth app functionalities. However, MARS is adapted neither to French culture nor to the French language. OBJECTIVE This study aims to translate, adapt, and validate an equivalent French version of MARS (MARS-F). METHODS The original MARS was first translated into French by two independent bilingual scientists, and their common version was blind back-translated twice by two native English speakers; its comprehensibility was then evaluated by 6 individuals (3 researchers and 3 nonacademics), and the final MARS-F version was created. Two bilingual raters independently evaluated 63 apps using MARS and MARS-F. Interrater reliability was assessed using intraclass correlation coefficients, the internal consistency and validity of both scales were assessed, and Mokken scale analysis was used to investigate the scalability of both MARS and MARS-F. RESULTS MARS-F aligned well with the original MARS, with comparable properties between the two scales. The correlation coefficients (r) between corresponding dimensions of MARS and MARS-F ranged from 0.97 to 0.99. The internal consistencies of the MARS-F dimensions engagement (ω=0.79), functionality (ω=0.79), esthetics (ω=0.78), and information quality (ω=0.61) were acceptable, and that of the overall MARS score (ω=0.86) was good. Mokken scale analysis indicated acceptable scalability for both MARS (Loevinger's H=0.37) and MARS-F (H=0.35). CONCLUSIONS MARS-F is a valid tool that should serve as a crucial aid for researchers, health care professionals, public health authorities, and interested third parties in assessing the quality of mHealth apps in French-speaking countries.
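
Interrater reliability between two raters can be quantified with an intraclass correlation coefficient. A from-scratch Python sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form (the abstract does not specify which ICC variant was used, so this choice is an assumption):

```python
import numpy as np

def icc2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement,
    single-rater intraclass correlation for an n-subjects-by-
    k-raters score matrix, computed from the ANOVA mean squares."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    row = X.mean(axis=1, keepdims=True)   # subject means
    col = X.mean(axis=0, keepdims=True)   # rater means
    MSR = k * np.sum((row - grand) ** 2) / (n - 1)            # subjects
    MSC = n * np.sum((col - grand) ** 2) / (k - 1)            # raters
    MSE = np.sum((X - row - col + grand) ** 2) / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
```

Identical ratings from both raters give an ICC of 1; disagreement pulls the coefficient down toward (and potentially below) zero.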


2021, Vol. 8 (3), pp. 672–695
Author(s): Thomas DeVaney

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to related scaling approaches such as Rasch models and Guttman scaling. The procedure can be used for the dichotomous and ordinal polytomous data common in questionnaires. The assumptions of MSA are discussed, as are the characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio with a data set of over 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
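
Scale construction in such illustrations typically relies on Mokken's automated item selection procedure (aisp in the mokken package): start from the strongest item pair, then greedily add items while the scale's Loevinger H stays above a lower bound c (0.3 by default). A simplified from-scratch Python sketch for dichotomous items, omitting the significance tests and multiple-scale partitioning of the real procedure:

```python
import numpy as np
from itertools import combinations

def _pair_errors(X, i, j):
    """Observed and expected Guttman errors for one item pair."""
    n = len(X)
    p = X.mean(axis=0)
    easy, hard = (i, j) if p[i] >= p[j] else (j, i)
    observed = np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
    expected = n * (1 - p[easy]) * p[hard]
    return observed, expected

def scale_H(X, items):
    """Loevinger H pooled over all pairs of the given items."""
    obs = exp = 0.0
    for i, j in combinations(items, 2):
        o, e = _pair_errors(X, i, j)
        obs += o
        exp += e
    return 1.0 - obs / exp

def aisp(X, c=0.3):
    """Greedy automated item selection: start from the pair with
    the highest scale H, then keep adding the item that leaves H
    highest, stopping when H would drop below the bound c."""
    X = np.asarray(X)
    k = X.shape[1]
    scale = list(max(combinations(range(k), 2),
                     key=lambda pair: scale_H(X, pair)))
    while True:
        candidates = [(scale_H(X, scale + [i]), i)
                      for i in range(k) if i not in scale]
        if not candidates:
            break
        h, best = max(candidates)
        if h < c:
            break
        scale.append(best)
    return sorted(scale)
```

On clean Guttman data the procedure keeps every item; an item that scales poorly with the rest is left out because adding it would drag H under the bound.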


2021, Vol. 39, pp. 100793
Author(s): Marion Tillema, Samantha Bouwmeester, Peter Verkoeijen, Anita Heijltjes
