Scale Development via Network Analysis: A Comprehensive and Concise Measure of Openness to Experience

2018
Author(s):
Alexander P. Christensen
Katherine Cotter
Paul Silvia
Mathias Benedek

Psychometric network analysis is an emerging tool for investigating the structure of psychological and psychopathological constructs. To date, most of the psychometric network literature has emphasized the measurement of constructs (e.g., dimensional structure); however, this represents only one aspect of psychometrics. In the present study, we explored whether network analysis could be used as a tool for scale development. To do so, we used a previously published dataset (N = 794) of four Openness to Experience inventories to clarify the facet structure of the construct and identify the conceptual coverage of each inventory. In short, 10 facets and 3 aspects (i.e., meso-facets) were identified, but no single inventory adequately covered all facets or aspects. Therefore, we used network analysis, including two novel network measures (community closeness centrality and network coverage), to develop a short measure that comprehensively covers all facets and aspects of the construct. We then compared the network-derived short form to short forms developed using classical test theory (CTT) and item response theory (IRT). The network-derived short form demonstrated reliability comparable to the CTT- and IRT-based short forms but had better coverage of the conceptual space (defined by the four inventories). Finally, we validated the network-derived short form by comparing its correlations with outcome measures (personality and political conservatism) to those of the four-inventory item pool. We conclude that the network approach is a promising psychometric tool for scale development, and we discuss its implications for future applications.
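The community closeness centrality and network coverage measures are defined in the article itself; as a rough illustration of the general workflow, the sketch below (a simplification, not the authors' implementation) builds an item correlation network, detects communities that stand in for facets, and ranks items by closeness within their own community so that the most central items per facet can be retained for a short form. The correlation threshold and the use of plain Pearson correlations instead of a regularized partial-correlation network are simplifying assumptions.

    # Minimal sketch of community-based item selection (not the published method).
    import numpy as np
    import networkx as nx

    def community_closeness(items: np.ndarray, threshold: float = 0.2):
        """items: an (n_persons, n_items) response matrix."""
        corr = np.corrcoef(items, rowvar=False)          # item-by-item correlations
        n_items = corr.shape[0]
        G = nx.Graph()
        G.add_nodes_from(range(n_items))
        for i in range(n_items):
            for j in range(i + 1, n_items):
                if abs(corr[i, j]) >= threshold:         # keep only stronger edges
                    G.add_edge(i, j, weight=abs(corr[i, j]))
        # Detect communities (candidate facets) by modularity maximization
        communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
        closeness = {}
        for comm in communities:
            sub = G.subgraph(comm)                       # restrict to one facet
            # Closeness computed within the community: items central to their facet
            closeness.update(nx.closeness_centrality(sub))
        return communities, closeness

    # Usage: keep the most central item(s) per community for the short form, e.g.
    # responses = np.random.randint(1, 6, size=(794, 40))
    # comms, closeness = community_closeness(responses)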

Pedagogika
2017
Vol 127 (3)
pp. 104-118
Author(s):
Gediminas Merkys
Daiva Bubelienė

The article introduces a newly created questionnaire intended for older schoolchildren, "Evaluate the Teacher and His Lessons." It describes the theoretical and practical context of the instrument, which is based on 87 primary questions, and presents the dimensional structure and psychometric quality of the integrated scales and sub-scales that were formed. The scales and sub-scales were constructed following classical test theory, combining logical and factorial validation. A secondary factorization of the sub-scales indicated that it is expedient to distinguish two integrated lesson dimensions (scales). The first integrated scale reflects the quality of social relations and teacher-centered orientation; the second reflects the management and didactics of the educational process. The high correlation between the two integrated scales (r = 0.86) indicates that a generalized overall index of the evaluation of the teacher and the lesson can be derived by aggregating as many as 81 primary variables covering the most varied aspects of the lesson. The article also describes the statistical norming base currently available for the questionnaire (N schoolchildren = 4,024 and N teachers = 200), which encompasses schools of different types from various regions of the country. The wide content coverage of the questionnaire and the good quality of its scales open up strong opportunities for its application both in school evaluation practice and in research. The primary, methodological purpose of the article is to introduce a new standardized survey instrument. Secondly, the article raises the question of why indicators such as "abundance of homework" and "level of the requirements set by the teacher" correlate with practically none of the remaining scales, even though the latter intercorrelate strongly. The paper elaborates the question (and hypotheses) of whether these variables may indeed affect the didactic quality of the lesson counterproductively.
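As a rough sketch of the scale-level computations described above (assuming the item-to-scale assignments are already known, and using illustrative placeholder data rather than the actual survey responses), the following computes Cronbach's alpha for each integrated scale and the correlation between the aggregated scale scores, the quantity reported as r = 0.86.

    # Minimal CTT sketch: internal consistency and inter-scale correlation.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_persons, n_items) matrix of one scale's items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    def scale_correlation(scale_a: np.ndarray, scale_b: np.ndarray) -> float:
        """Correlation between the summed (integrated) scale scores."""
        return np.corrcoef(scale_a.sum(axis=1), scale_b.sum(axis=1))[0, 1]

    # Usage with illustrative item assignments (hypothetical index arrays):
    # relations = responses[:, relations_item_idx]   # social-relations scale items
    # didactics = responses[:, didactics_item_idx]   # management/didactics items
    # print(cronbach_alpha(relations), cronbach_alpha(didactics))
    # print(scale_correlation(relations, didactics))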


Author(s):  
Michael J. Zickar

Psychological measurement is at the heart of organizational research. I review recent practices in measurement development and evaluation, detailing best-practice recommendations in both areas. Throughout the article, I stress that theory and discovery should guide scale development and that statistical tools, although they play a crucial role, should be chosen to best evaluate the theoretical underpinnings of scales as well as to best promote discovery. I review all stages of scale development and evaluation, ranging from construct specification and item writing to scale revision. Different statistical frameworks are considered, including classical test theory, exploratory factor analysis, confirmatory factor analysis, and item response theory, and I encourage readers to consider how best to use each of these tools to capitalize on each approach's particular strengths.


2019
Vol 43 (4)
pp. 222-229
Author(s):
Chung-Ying Lin
Anders Broström
Mark D. Griffiths
Amir H. Pakpour

The purpose of the present study was to examine the psychometric properties of the eHealth Literacy Scale (eHEALS) using classical test theory and modern test theory among elderly Iranian individuals with heart failure (HF). Individuals with objectively verified HF (n = 388, 234 males, mean age = 68.9 ± 3.4 years) completed (i) the eHEALS, (ii) the Hospital Anxiety and Depression Scale, (iii) the Short Form 12, (iv) the 9-item European Heart Failure Self-Care Behavior Scale, and (v) the 5-item Medication Adherence Report Scale. Two types of analyses were carried out to evaluate the factorial structure of the eHEALS: (i) confirmatory factor analysis (CFA) within the classical test theory framework and (ii) Rasch analysis within the modern test theory framework. A regression model was constructed to examine the associations between the eHEALS and the other instruments. CFA supported the one-factor structure of the eHEALS, with significant factor loadings for all items. Rasch analysis also supported the unidimensionality of the eHEALS, with item fit statistics ranging between 0.5 and 1.5. The eHEALS was significantly associated with all the external criteria. The eHEALS is therefore suitable for health-care providers to assess eHealth literacy among individuals with HF.
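The item-fit criterion mentioned above (fit statistics between 0.5 and 1.5) refers to Rasch infit/outfit mean squares. The sketch below illustrates that logic for a simplified dichotomous Rasch model with already-estimated person abilities and item difficulties; the eHEALS itself uses polytomous items, so a rating-scale or partial-credit model would be applied in practice.

    # Minimal sketch of Rasch infit/outfit mean squares (dichotomous simplification).
    import numpy as np

    def rasch_item_fit(X: np.ndarray, theta: np.ndarray, beta: np.ndarray):
        """X: (n_persons, n_items) 0/1 responses; theta: (n_persons,); beta: (n_items,)."""
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))  # expected scores
        W = P * (1.0 - P)                                            # response variances
        Z2 = (X - P) ** 2 / W                                        # squared standardized residuals
        outfit = Z2.mean(axis=0)                                     # unweighted mean square
        infit = ((X - P) ** 2).sum(axis=0) / W.sum(axis=0)           # information-weighted mean square
        return infit, outfit

    # Items with infit/outfit roughly between 0.5 and 1.5 are usually treated as
    # fitting the model adequately, which is the criterion the abstract reports.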


2021
pp. 089443932199461
Author(s):
Erica R. Fissel
Amanda Graham
Leah C. Butler
Bonnie S. Fisher

As technology advances, new opportunities for partners to gain power and control in their romantic relationships are readily available. These new cyber-based behaviors have slowly garnered scholarly attention, but measurement-related issues have not. We take the logical next steps to (1) develop and validate a comprehensive measure of intimate partner cyber abuse (IPCA) for adults using classical test theory and item response theory and (2) estimate IPCA prevalence rates across a range of relationship types. A sample of 1,500 adults who were currently in an intimate partner relationship, 18 years or older, and living in the United States completed an online questionnaire about their IPCA experiences during the prior 6 months. Two-parameter logistic modeling and confirmatory factor analyses revealed a five-dimensional structure: cyber direct aggression, cyber sexual coercion, cyber financial control, cyber control, and cyber monitoring, with 14.85% of the sample having experienced at least one dimension. These IPCA dimensions were examined for differential functioning across gender identity, race, student status, and relationship type. Collectively, the findings have implications for IPCA measurement and related research, including theoretically derived hypotheses whose findings can inform prevention.
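For readers unfamiliar with the two-parameter logistic (2PL) model used above, the sketch below shows the item response and item information functions it implies; the discrimination and difficulty values are illustrative, not estimates from the IPCA data.

    # Minimal sketch of the 2PL item response and information functions.
    import numpy as np

    def p_2pl(theta: np.ndarray, a: float, b: float) -> np.ndarray:
        """Probability of endorsing an item with discrimination a and difficulty b."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def info_2pl(theta: np.ndarray, a: float, b: float) -> np.ndarray:
        """Item information: highest near b, scaled by the squared discrimination."""
        p = p_2pl(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    theta = np.linspace(-3, 3, 61)
    # e.g. a highly discriminating, rarely endorsed item located at theta = 1.5
    print(info_2pl(theta, a=2.0, b=1.5).max())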


2014
Vol 35 (4)
pp. 201-211
Author(s):
André Beauducel
Anja Leue

It is shown that a minimal assumption should be added to the assumptions of Classical Test Theory (CTT) in order to have positive inter-item correlations, which are regarded as a basis for the aggregation of items. Moreover, it is shown that the assumption of zero correlations between the error score estimates is substantially violated in the population of individuals when the number of items is small. Instead, a negative correlation between error score estimates occurs. The reason for the negative correlation is that the error score estimates for different items of a scale are based on insufficient true score estimates when the number of items is small. A test of the assumption of uncorrelated error score estimates by means of structural equation modeling (SEM) is proposed that takes this effect into account. The SEM-based procedure is demonstrated by means of empirical examples based on the Edinburgh Handedness Inventory and the Eysenck Personality Questionnaire-Revised.
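A small simulation (not the authors' SEM procedure) makes the negative-correlation result concrete: when the true score is estimated as the mean of only a few parallel items, the resulting error score estimates must sum to zero for each person, which induces a correlation of roughly -1/(k - 1) between them.

    # Minimal simulation of negatively correlated error score estimates for small k.
    import numpy as np

    rng = np.random.default_rng(0)
    n_persons, k_items, error_sd = 100_000, 3, 1.0

    true = rng.normal(size=(n_persons, 1))                        # common true score
    errors = rng.normal(scale=error_sd, size=(n_persons, k_items))
    items = true + errors                                         # parallel items
    true_hat = items.mean(axis=1, keepdims=True)                  # true score estimate
    error_hat = items - true_hat                                  # error score estimates

    r = np.corrcoef(error_hat[:, 0], error_hat[:, 1])[0, 1]
    print(round(r, 3), "expected about", round(-1 / (k_items - 1), 3))  # ~ -0.5 for k = 3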


2019
Vol 35 (1)
pp. 55-62
Author(s):
Noboru Iwata
Akizumi Tsutsumi
Takafumi Wakita
Ryuichi Kumagai
Hiroyuki Noguchi
...  

To investigate the effect of response alternatives/scoring procedures on the measurement properties of the Center for Epidemiologic Studies Depression Scale (CES-D), which has four response alternatives, a polytomous item response theory (IRT) model was applied to the responses of 2,061 workers and university students (1,640 males, 421 females). Test information functions derived from the polytomous IRT analyses of the CES-D data under various scoring procedures indicated that: (1) the CES-D with its standard (0-1-2-3) scoring procedure should be useful for screening to detect subjects at high risk of depression, provided the θ point showing the highest information corresponds to the cut-off point, because of its markedly higher information at that point; (2) the CES-D with a 0-1-1-2 scoring procedure could cover a wider range of depressive severity, suggesting that this scoring procedure might be useful where more exhaustive discrimination of symptomatology is of interest; and (3) a revised version of the CES-D, in which the original positively worded items are replaced with negatively worded ones, outperformed the original version. These findings could not have been demonstrated by classical test theory analyses, and the utility of this kind of psychometric testing for standard measures of psychological assessment therefore warrants further investigation.
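The test information functions compared above come from a polytomous IRT model. As an illustration only, the sketch below computes item and test information under Samejima's graded response model with made-up discrimination and boundary parameters (not the CES-D estimates); collapsing response categories, as the 0-1-1-2 scoring does, simply removes one boundary per item.

    # Minimal sketch of graded-response-model item/test information.
    import numpy as np

    def grm_item_information(theta, a, bs):
        """Samejima GRM item information at each theta, for ascending boundaries bs."""
        # Boundary ("at or above category k") probabilities, padded with 1 and 0
        p_star = [np.ones_like(theta)] + \
                 [1.0 / (1.0 + np.exp(-a * (theta - b))) for b in bs] + \
                 [np.zeros_like(theta)]
        info = np.zeros_like(theta)
        for k in range(len(p_star) - 1):
            p_cat = p_star[k] - p_star[k + 1]                    # category probability
            d = a * (p_star[k] * (1 - p_star[k]) - p_star[k + 1] * (1 - p_star[k + 1]))
            info += d ** 2 / np.maximum(p_cat, 1e-12)
        return info

    theta = np.linspace(-3, 4, 71)
    # Test information = sum of item informations; two illustrative items:
    test_info = grm_item_information(theta, 1.8, [-0.5, 0.5, 1.5]) + \
                grm_item_information(theta, 1.2, [0.0, 1.0, 2.0])
    print(theta[test_info.argmax()])   # theta where the test is most informative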

