Response Category Functioning on the Health Care Engagement Measure Using the Nominal Response Model

Assessment ◽  
2021 ◽  
pp. 107319112110526
Author(s):  
Steven P. Reise ◽  
Anne S. Hubbard ◽  
Emily F. Wong ◽  
Benjamin D. Schalet ◽  
Mark G. Haviland ◽  
...  

As part of a scale development project, we fit a nominal response item response theory model to responses to the Health Care Engagement Measure (HEM). Under the original 5-point response format, categories were not ordered as intended for six of the 23 items. For the remaining items, the category boundary discrimination between Categories 0 (not at all true) and 1 (a little bit true) was weak, suggesting an uninformative category distinction. When the lowest two categories were collapsed, psychometric properties improved greatly. Category boundary discriminations within items, however, varied substantially: higher response category distinctions, such as responding 3 (very true) versus 2 (mostly true), were considerably more discriminating than lower ones. Implications for HEM scoring and for improving measurement precision at lower levels of the construct are presented, as is the unique role of the nominal response model in category analysis.
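The category analysis described above rests on the nominal response model's category-specific slopes, where the boundary discrimination between adjacent categories is the difference of their slopes. The sketch below illustrates the idea with entirely hypothetical parameter values (these are not the HEM estimates), assuming the standard softmax form of Bock's nominal response model:

```python
import numpy as np

def nrm_category_probs(theta, slopes, intercepts):
    """P(X = k | theta) under the nominal response model:
    a softmax over category-specific slopes a_k and intercepts c_k."""
    z = np.outer(theta, slopes) + intercepts   # shape (n_persons, n_categories)
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical 4-category item (e.g., after collapsing the two lowest
# categories); the values below are illustrative, not estimates.
slopes = np.array([0.0, 0.6, 1.4, 2.5])
intercepts = np.array([0.0, 0.5, 0.3, -0.8])

theta = np.linspace(-3.0, 3.0, 7)
probs = nrm_category_probs(theta, slopes, intercepts)

# The boundary discrimination between adjacent categories is the difference
# of their slopes; ordered categories require these differences to be
# positive, and larger gaps mean sharper distinctions (here the highest
# boundary, 2.5 - 1.4 = 1.1, is the most discriminating).
boundary_discrims = np.diff(slopes)
```

Collapsing two weakly separated categories amounts to merging adjacent slopes that are nearly equal, which is why the boundary-discrimination pattern diagnoses uninformative category distinctions.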

2018 ◽  
Vol 23 (2) ◽  
pp. 342-366 ◽  
Author(s):  
Jiyun Zu ◽  
Patrick C. Kyllonen

We evaluated the use of the nominal response model (NRM) to score multiple-choice (also known as "select the best option") situational judgment tests (SJTs). Using data from two large studies, we compared the reliability and correlations of NRM scores with those from various classical and item response theory (IRT) scoring methods. The SJTs measured emotional management (Study 1) and teamwork and collaboration (Study 2). In Study 1, the NRM scoring method proved superior to three classical test theory–based and four other IRT-based methods, both in reliability and in correlations with external measures. In Study 2, only slight differences between scoring methods were observed. An explanation for the discrepancy in findings is that in cases where item keys are ambiguous (as in Study 1), the NRM accommodates that ambiguity, but in cases where item keys are clear (as in Study 2), different methods provide interchangeable scores. We characterize ambiguous and clear keys using category response curves based on parameter estimates of the NRM and discuss the relationships between our findings and those from the wisdom-of-the-crowd literature.
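NRM scoring of an SJT treats every response option as informative rather than keying a single correct answer. A minimal sketch of one common approach, expected a posteriori (EAP) scoring with a standard normal prior, is shown below; the item parameters and quadrature grid are illustrative assumptions, not values from either study:

```python
import numpy as np

def nrm_probs(theta, slopes, intercepts):
    """Category probabilities under the nominal response model (softmax)."""
    z = np.outer(theta, slopes) + intercepts
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def eap_score(responses, item_params, grid=None):
    """EAP ability estimate: posterior mean on a quadrature grid, with a
    standard normal prior and the NRM likelihood of the chosen options."""
    if grid is None:
        grid = np.linspace(-4.0, 4.0, 81)
    posterior = np.exp(-0.5 * grid**2)         # N(0, 1) prior (unnormalized)
    for resp, (slopes, intercepts) in zip(responses, item_params):
        posterior *= nrm_probs(grid, slopes, intercepts)[:, resp]
    return float(np.sum(grid * posterior) / np.sum(posterior))

# Hypothetical 3-option item administered three times, for illustration:
# option slopes 0 < 1 < 2 mean higher-slope options signal higher ability.
item = (np.array([0.0, 1.0, 2.0]), np.zeros(3))
score_high = eap_score([2, 2, 2], [item] * 3)   # always picks the top option
score_low = eap_score([0, 0, 0], [item] * 3)    # always picks the bottom one
```

Because each option carries its own slope, an "ambiguous key" simply shows up as two options with similar category response curves, and the score automatically gives them similar credit.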


2020 ◽  
pp. 014662162096574
Author(s):  
Zhonghua Zhang

Researchers have developed a characteristic curve procedure to estimate the parameter scale transformation coefficients in test equating under the nominal response model. In this study, the delta method was applied to derive analytical expressions for the standard errors of the estimated parameter scale transformation coefficients. This brief report presents the results of a simulation study that examined the accuracy of the derived formulas and compared the performance of this analytical method with that of the multiple imputation method. The results indicated that the standard errors produced by the delta method were very close to the criterion standard errors, as well as to those yielded by the multiple imputation method, under all simulation conditions.
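The delta method underlying these derivations approximates the variance of a transformed estimate by a first-order Taylor expansion: Var(g(x̂)) ≈ ∇g(x̂)ᵀ Σ ∇g(x̂). The sketch below is a generic illustration of that principle (with a numerical gradient and a toy ratio function), not the paper's specific expressions for the equating coefficients:

```python
import numpy as np

def delta_method_se(g, est, cov, eps=1e-6):
    """Standard error of g(est) via the delta method:
    sqrt(grad^T Cov grad), gradient taken by central differences."""
    est = np.asarray(est, dtype=float)
    cov = np.asarray(cov, dtype=float)
    grad = np.zeros_like(est)
    for i in range(est.size):
        step = np.zeros_like(est)
        step[i] = eps
        grad[i] = (g(est + step) - g(est - step)) / (2.0 * eps)
    return float(np.sqrt(grad @ cov @ grad))

# Toy example: SE of a ratio of two estimates with a known (hypothetical)
# sampling covariance; grad = [1/y, -x/y^2] = [0.25, -0.125] here.
se = delta_method_se(lambda p: p[0] / p[1],
                     est=[2.0, 4.0],
                     cov=np.diag([0.01, 0.04]))
```

In the equating application, `g` would be the mapping from item parameter estimates to the scale transformation coefficients, and `cov` the estimated covariance matrix of those item parameters.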


2015 ◽  
Vol 75 (6) ◽  
pp. 901-930 ◽  
Author(s):  
Kathleen Suzanne Johnson Preston ◽  
Skye N. Parral ◽  
Allen W. Gottfried ◽  
Pamella H. Oliver ◽  
Adele Eskeles Gottfried ◽  
...  

2007 ◽  
Vol 31 (3) ◽  
pp. 213-232 ◽  
Author(s):  
Valeria Lima Passos ◽  
Martijn P. F. Berger ◽  
Frans E. Tan
