The Effect of Response Model Misspecification and Uncertainty on the Psychometric Properties of Estimates

Author(s):  
Kristian E. Markon ◽  
Michael Chmielewski
2020 ◽  
Vol 47 (2) ◽  
pp. 104
Author(s):  
Bambang Suryadi ◽  
Muhammad Dwirifqi Kharisma Putra

The use of social media, especially Instagram, has become an increasingly prominent part of daily life. Social media also affects romantic relationships, as people in relationships can use it to conduct surveillance on their partner's behavior. This study analyzed the psychometric properties of the Indonesian version of the Partner Surveillance Scale, which contains 15 items rated on a 4-point Likert scale. The study recruited 214 female university students aged 17-23 years who used Instagram. The Graded Response Model (GRM) was applied. The Indonesian version of the Partner Surveillance Scale showed good psychometric properties and fit the GRM well. All assumptions of the GRM were met and the scale had high reliability, although some items did not fit the model well. The results also illustrate the GRM as an alternative to Confirmatory Factor Analysis (CFA) for analyzing polytomous data. The study concluded that the psychometric properties of the Partner Surveillance Scale were good.
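As a rough illustration of the model used in this study, the sketch below computes Samejima's graded-response-model category probabilities for a single 4-point Likert item. The parameter values are hypothetical and are not taken from the Partner Surveillance Scale itself.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Samejima's graded response model: probability of each of K+1
    ordered categories for one item, given latent trait theta,
    discrimination a, and K ordered thresholds b (b[0] < ... < b[K-1])."""
    b = np.asarray(b, dtype=float)
    # Boundary curves P*(X >= k): a 2PL logistic at each threshold,
    # padded with P*(X >= 0) = 1 and P*(X >= K+1) = 0.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    # Each category probability is the difference of adjacent boundaries.
    return p_star[:-1] - p_star[1:]

# Example: a 4-point item with hypothetical parameters.
probs = grm_category_probs(theta=0.5, a=1.8, b=[-1.2, 0.0, 1.1])
print(probs, probs.sum())  # four category probabilities summing to 1
```

Because the category probabilities are differences of adjacent boundary curves, they are nonnegative whenever the thresholds are ordered, and they always sum to one.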



10.2196/15720 ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. e15720 ◽  
Author(s):  
Zhihao Ma ◽  
Mei Wu

Background The eHealth Literacy Scale (eHEALS) is the most widely used instrument in health studies to measure individuals' electronic health literacy. Nonetheless, despite the rapid development of the online medical industry and increased rural-urban disparities in China, very few studies have examined the characteristics of the eHEALS among Chinese rural people by using modern psychometric methods. This study evaluated the psychometric properties of the eHEALS in a Chinese rural population by using both classical test theory and item response theory methods. Objective This study aimed to develop a simplified Chinese version of the eHEALS (C-eHEALS) and evaluate its psychometric properties in a rural population. Methods A cross-sectional survey was conducted with 543 rural internet users in West China. Internal reliability was assessed using the Cronbach alpha coefficient. A one-factor structure of the C-eHEALS was obtained via principal component analysis, and fit indices for this structure were calculated using confirmatory factor analysis. Subsequently, the item discrimination, difficulty, and test information were estimated via the graded response model. Additionally, the criterion validity was confirmed through hypothesis testing. Results The C-eHEALS has good reliability. Both principal component analysis and confirmatory factor analysis showed that the scale has a one-factor structure. The graded response model revealed that all items of the C-eHEALS have response options that allow for differentiation between latent trait levels and the capture of substantial information regarding participants' ability. Conclusions The findings indicate the high reliability and validity of the C-eHEALS and thus recommend its use for measuring eHealth literacy among the Chinese rural population.
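The internal-reliability step mentioned above can be sketched directly from the standard formula for Cronbach's alpha. The score matrix below is hypothetical toy data, not the study's survey responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 respondents to 4 Likert items.
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(scores), 3))
```

Highly correlated items, as in this toy matrix, push alpha toward 1; uncorrelated items push it toward 0.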


2020 ◽  
pp. 001316442095806
Author(s):  
Shiyang Su ◽  
Chun Wang ◽  
David J. Weiss

S-X² is a popular item fit index that is available in commercial software packages such as flexMIRT. However, no research has systematically examined the performance of S-X² for detecting item misfit within the context of the multidimensional graded response model (MGRM). The primary goal of this study was to evaluate the performance of S-X² under two practical misfit scenarios: first, all items are misfitting due to model misspecification, and second, a small subset of items violate the underlying assumptions of the MGRM. Simulation studies showed that caution should be exercised when reporting item fit results of polytomous items using S-X² within the context of the MGRM, because of its inflated false positive rates (FPRs), especially with a small sample size and a long test. S-X² performed well when detecting overall model misfit as well as item misfit for a small subset of items when the ordinality assumption was violated. However, under a number of conditions of model misspecification or items violating the homogeneous discrimination assumption, even though true positive rates (TPRs) of S-X² were high when a small sample size was coupled with a long test, the inflated FPRs were generally directly related to the increasing TPRs. There was also a suggestion that the performance of S-X² was affected by the magnitude of misfit within an item. There was no evidence that FPRs for fitting items were exacerbated by the presence of a small percentage of misfitting items among them.
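Item fit statistics of this kind compare observed response frequencies against the frequencies the fitted model predicts. As a simplified stand-in (the actual statistic studied here groups examinees by summed score and handles sparse cells, which this sketch does not), the code below simulates responses from one graded-response-model item and runs a Pearson-type observed-versus-expected check across deciles of the latent trait. All parameters and the seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def grm_probs(theta, a, b):
    """Category probabilities under the graded response model,
    one row per examinee."""
    p_star = 1.0 / (1.0 + np.exp(-a * np.subtract.outer(theta, b)))
    p_star = np.hstack([np.ones((len(theta), 1)), p_star,
                        np.zeros((len(theta), 1))])
    return p_star[:, :-1] - p_star[:, 1:]

# Hypothetical 4-category item and a simulated sample.
a, b = 1.5, np.array([-1.0, 0.0, 1.0])
theta = rng.standard_normal(2000)
probs = grm_probs(theta, a, b)
resp = np.array([rng.choice(4, p=p) for p in probs])

# Group examinees into theta deciles; compare observed category counts
# with the model-expected counts in each group.
edges = np.quantile(theta, np.linspace(0, 1, 11))
group = np.clip(np.searchsorted(edges, theta, side="right") - 1, 0, 9)
x2, df = 0.0, 0
for g in range(10):
    mask = group == g
    observed = np.bincount(resp[mask], minlength=4)
    expected = probs[mask].sum(axis=0)
    x2 += ((observed - expected) ** 2 / expected).sum()
    df += 3  # categories - 1 per group
print(f"Pearson X2 = {x2:.1f} on ~{df} df")
```

Since the data here are generated from the same model being checked, the statistic should land near its degrees of freedom; a misspecified model would inflate it.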


Author(s):  
Rabeeah M. Alsaqri ◽  
Mohsen N. Al Salmi

The study aimed to calibrate Omani data from the PIRLS test using the graded response model, to examine its psychometric properties, and to identify fitting and misfitting items. The PIRLS 2011 test booklets were used, consisting of 146 test items (74 dichotomous and 72 polytomous). Items were divided into 13 booklets, each with two blocks (one literary and one informational). The booklets were administered to 13 groups of fourth-grade students in the Sultanate of Oman, with a total sample of 10,394 students. The assumptions of IRT (unidimensionality and local independence) were examined and supported. Item fit was examined using Samejima's graded response model, and the data were analyzed with the Multilog 7.03 program to estimate both item and ability parameters. IRT analysis revealed that 8 items (only about 5% of the test items) showed misfit. This result indicates that the test has good psychometric properties under IRT.
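A common screen for the unidimensionality assumption mentioned above is to inspect the eigenvalues of the inter-item correlation matrix: a first eigenvalue that dwarfs the second supports a single dominant factor. The sketch below runs this check on simulated single-factor data standing in for real item responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate item scores driven by one latent factor (hypothetical data).
n, k = 500, 10
factor = rng.standard_normal(n)
items = 0.8 * factor[:, None] + 0.6 * rng.standard_normal((n, k))

# Eigenvalues of the inter-item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
ratio = eigvals[0] / eigvals[1]
print(f"first/second eigenvalue ratio = {ratio:.2f}")
```

A commonly cited rule of thumb treats a first-to-second eigenvalue ratio above roughly 3 or 4 as consistent with essential unidimensionality; for truly one-factor data like this simulation, the ratio is far larger.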


Author(s):  
Amal K. Al-zaabi ◽  
Abdulhameed Hassan ◽  
Rashid S. Al-mehrzi

The study aimed to calibrate Omani data from the PIRLS test using the graded response model, to examine its psychometric properties, and to identify fitting and misfitting items. The PIRLS 2011 test booklets were used, consisting of 146 test items (74 dichotomous and 72 polytomous). Items were divided into 13 booklets, each with two blocks (one literary and one informational). The booklets were administered to 13 groups of fourth-grade students in the Sultanate of Oman, with a total sample of 10,394 students. The assumptions of IRT (unidimensionality and local independence) were examined and supported. Item fit was examined using Samejima's graded response model, and the data were analyzed with the Multilog 7.03 program to estimate both item and ability parameters. IRT analysis revealed that 8 items (only about 5% of the test items) showed misfit. This result indicates that the test has good psychometric properties under IRT.


2017 ◽  
Vol 14 (1) ◽  
pp. 113-117 ◽  
Author(s):  
Daniel S.J. Costa ◽  
Ali Asghari ◽  
Michael K. Nicholas

Background and aims The Pain Self-Efficacy Questionnaire (PSEQ) is a 10-item instrument designed to assess the extent to which a person in pain believes s/he is able to accomplish various activities despite their pain. There is strong evidence for the validity and reliability of both the full-length PSEQ and a 2-item version. The purpose of this study is to further examine the properties of the PSEQ using an item response theory (IRT) approach. Methods We used the two-parameter graded response model to examine the category probability curves, and location and discrimination parameters of the 10 PSEQ items. In item response theory, responses to a set of items are assumed to be probabilistically determined by a latent (unobserved) variable. In the graded response model specifically, item response thresholds (the values of the latent variable at which adjacent response categories are equally likely) and discrimination parameters are estimated for each item. Participants were 1511 patients with mixed chronic pain attending for initial assessment at a tertiary pain management centre. Results All items except item 7 (‘I can cope with my pain without medication’) performed well in the IRT analysis, and the category probability curves suggested that participants used the 7-point response scale consistently. Items 6 (‘I can still do many of the things I enjoy doing, such as hobbies or leisure activity, despite pain’), 8 (‘I can still accomplish most of my goals in life, despite the pain’) and 9 (‘I can live a normal lifestyle, despite the pain’) captured higher levels of the latent variable with greater precision. Conclusions The results from this IRT analysis add to the body of evidence based on classical test theory illustrating the strong psychometric properties of the PSEQ. Despite the relatively poor performance of item 7, its clinical utility warrants its retention in the questionnaire. Implications The strong psychometric properties of the PSEQ support its use as an effective tool for assessing self-efficacy in people with pain.
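The claim that certain items "capture higher levels of the latent variable with greater precision" can be made concrete through the item information function: under the graded response model, an item is most informative near its thresholds. The sketch below computes Fisher information for a single hypothetical 7-point item (the parameters are illustrative, not the PSEQ estimates).

```python
import numpy as np

def grm_item_information(theta, a, b):
    """Fisher information of one graded-response-model item at theta:
    I(theta) = sum_k (dP_k/dtheta)^2 / P_k, where category probabilities
    P_k are differences of adjacent 2PL boundary curves."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    p = p_star[:-1] - p_star[1:]             # category probabilities
    d_star = a * p_star * (1.0 - p_star)     # boundary-curve derivatives
    dp = d_star[:-1] - d_star[1:]            # category derivatives
    return np.sum(dp ** 2 / p)

# Hypothetical 7-point item: six ordered thresholds spanning the trait range.
a, b = 1.6, [-2.0, -1.2, -0.4, 0.4, 1.2, 2.0]
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(grm_item_information(theta, a, b), 3))
```

Items whose thresholds sit higher on the trait continuum concentrate their information there, which is the precision pattern the abstract describes for items 6, 8 and 9.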

