Item-level factor analysis.

Author(s): Brian D. Stucky, Nisha C. Gottfredson, A. T. Panter
2021, pp. 144-169
Author(s): Anatoly N. Krichevets, Alexey A. Korneev, K. V. Sugonyaev

Relevance. Researchers nowadays commonly rely on a limited set of standard procedures and statistical coefficients when developing psychometric instruments and investigating their structure. Routinely applying such procedures without taking the specific features of psychometric scales into account can lead to incomplete or even inadequate results. In this context, a detailed examination of the structure of psychometric instruments is important and may require non-standard methods of statistical analysis. Objectives. To conduct a detailed item-level analysis of the results of two intelligence subtests and to assess whether standard methods for estimating their reliability and structural validity are sufficient and adequate. Methods. We analyzed data collected in intelligence testing of a large sample of respondents (11,335 young adults) who completed the KR-3 battery. We examined in detail the structure of the subtests “Syllogisms” and “Analogies”. Specifically, we estimated the reliability of the scales with Cronbach’s alpha and their structure at the item level with confirmatory factor analysis. Results and conclusions. Estimating scale reliability with Cronbach’s alpha showed the importance of accounting for the time limit commonly imposed in intelligence tests. In addition, a detailed analysis of the items of each subtest revealed an additional factor that was not proposed in the original factor structure: a factor of higher-order abstract-analysis abilities, although the subtest was originally aimed at estimating special abilities. Confirmatory factor analysis showed improved fit when this factor was added.
The results suggest that researchers who do not examine testing procedures and subtest structure in detail at the item level may miss important properties of their scales and thus draw incomplete or inadequate conclusions about their psychometric properties.
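The reliability index used above, Cronbach’s alpha, can be computed directly from an item-score matrix. A minimal sketch in Python/NumPy (not the study’s code; the respondent scores below are made up for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative example: 5 respondents, 3 dichotomous items
scores = np.array([[1, 1, 1],
                   [1, 1, 0],
                   [0, 1, 0],
                   [0, 0, 0],
                   [1, 0, 1]], dtype=float)
alpha = cronbach_alpha(scores)  # 6/13 ≈ 0.462 for these scores
```

As the abstract notes, for speeded intelligence subtests an alpha computed this way can be distorted by the time limit, since unreached items enter the matrix as uniform failures.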


2001, Vol. 23 (2), pp. 202
Author(s): Gordon Robson, Hideko Midorikawa

This study examines the internal reliability of the Strategy Inventory for Language Learning (Oxford, 1990), using the ESL/EFL version in Japanese translation. Cronbach’s alpha analysis indicates a high degree of reliability for the overall questionnaire, but less so for the six subsections. Moreover, the test-retest correlations between the two administrations are extremely low, with an average shared variance of 19.5 percent at the item level and 25.5 percent at the subsection level. In addition, the construct validity of the SILL was examined using exploratory factor analysis. While the SILL claims to measure six types of strategies, the two factor analyses yield as many as 15 factors, and an attempt to fit the two administrations into a six-factor solution results in a disorganized scattering of the questionnaire items. Finally, interviews with participating students raised questions about the participants’ ability to understand the metalanguage used in the questionnaire, as well as the appropriateness of some items for a Japanese EFL setting. The authors conclude that despite the popularity of the SILL, the use and interpretation of its results are problematic.


1997, Vol. 13 (1), pp. 43-49
Author(s): Michael E. Robinson, Joseph L. Riley, Cynthia D. Myers, Ian J. Sadler, Steven A. Kvaal, ...

1998, Vol. 28 (5), pp. 1179-1188
Author(s): C. K. W. Schotte, D. de Doncker, C. Vankerckhoven, H. Vertommen, P. Cosyns

Background. Self-report instruments assessing the DSM personality disorders are characterized by overdiagnosis due to their emphasis on measuring personality traits rather than the impairment and distress associated with the criteria. Methods. The ADP-IV, a Dutch questionnaire, introduces an alternative assessment method: each test item assesses the ‘Trait’ as well as the ‘Distress/impairment’ characteristics of a DSM-IV criterion. This item format allows dimensional as well as categorical diagnostic evaluations. The present study explores the validity of the ADP-IV in a sample of 659 subjects from the Flemish population. Results. The dimensional personality disorder subscales, measuring Trait characteristics, are internally consistent and display good concurrent validity with the Wisconsin Personality Disorders Inventory. Factor analysis at the item level resulted in 11 orthogonal factors describing personality dimensions such as psychopathy, social anxiety and avoidance, negative affect, and self-image. Factor analysis at the subscale level identified two basic dimensions, reflecting hostile (DSM-IV Cluster B) and anxious (DSM-IV Cluster C) interpersonal attitudes. Categorical ADP-IV diagnoses are obtained using scoring algorithms that emphasize the Trait or the Distress concepts in the diagnostic evaluation. Prevalences of ADP-IV diagnoses of any personality disorder according to these algorithms vary between 2.28% and 20.64%. Conclusions. Although further research in clinical samples is required, the present results support the validity of the ADP-IV and the potential of measuring trait and distress characteristics as a method for assessing personality pathology.
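The Trait-plus-Distress item format lends itself to conjunctive scoring rules of the kind the ADP-IV uses. A hypothetical sketch; the cutoffs, rating ranges, and criterion count below are illustrative, not the instrument’s published algorithm:

```python
# Hypothetical conjunctive scoring rule: a DSM criterion counts only if the
# trait is endorsed strongly AND it causes distress/impairment.
# All thresholds here are illustrative, not the ADP-IV's published cutoffs.
def criterion_met(trait: int, distress: int,
                  trait_cutoff: int = 5, distress_cutoff: int = 4) -> bool:
    return trait >= trait_cutoff and distress >= distress_cutoff

def categorical_diagnosis(items, n_required: int) -> bool:
    """Categorical diagnosis if at least n_required criteria are met."""
    met = sum(criterion_met(t, d) for t, d in items)
    return met >= n_required

# Example: four criteria rated as (trait, distress) pairs
ratings = [(6, 5), (5, 4), (7, 2), (3, 5)]
diagnosed = categorical_diagnosis(ratings, n_required=3)  # only 2 criteria met
```

Shifting weight between the trait and the distress cutoffs is what lets such algorithms trade off the prevalence figures reported above.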


2020
Author(s): E. Damiano D'Urso, Kim De Roover, Jeroen K. Vermunt, Jesper Tijmstra

In social sciences, the study of group differences concerning latent constructs is ubiquitous. These constructs are generally measured by means of scales composed of ordinal items. To compare such constructs across groups, one crucial requirement is that they are measured equivalently or, in technical jargon, that measurement invariance (MI) holds across the groups. This study compared the performance of multiple-group categorical confirmatory factor analysis (MG-CCFA) and multiple-group item response theory (MG-IRT) in testing measurement invariance with ordinal data. A simulation study compared the true positive rate (TPR) and false positive rate (FPR), both at the scale and at the item level, for the two approaches under an invariance and a non-invariance scenario. The results showed that the performance, in terms of the TPR, of the MG-CCFA- and MG-IRT-based approaches depends mostly on scale length: for long scales, the likelihood ratio test (LRT) approach for MG-IRT outperformed the other approaches, while for short scales MG-CCFA was generally preferable. In addition, the performance of MG-CCFA's fit measures, such as RMSEA and CFI, depended largely on scale length, especially when MI was tested at the item level. General caution is recommended when using these measures, especially when MI is tested for each item individually. A decision flowchart, based on the simulation results, summarizes which approach performed best in which setting.
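The TPR and FPR that such a simulation tracks can be computed from per-item flagging decisions against the known simulation truth. An illustrative sketch, not the authors’ code:

```python
import numpy as np

def rates(flagged, truly_noninvariant):
    """TPR = share of truly non-invariant items that were flagged;
    FPR = share of truly invariant items that were (wrongly) flagged."""
    flagged = np.asarray(flagged, dtype=bool)
    truth = np.asarray(truly_noninvariant, dtype=bool)
    tpr = flagged[truth].mean() if truth.any() else float("nan")
    fpr = flagged[~truth].mean() if (~truth).any() else float("nan")
    return tpr, fpr

# Illustrative decisions over 8 items (not the study's results):
flagged = [1, 1, 0, 1, 0, 0, 1, 0]   # items the invariance test rejected
truth   = [1, 1, 1, 0, 0, 0, 0, 0]   # items simulated as non-invariant
tpr, fpr = rates(flagged, truth)     # TPR = 2/3, FPR = 0.4
```

In a full simulation these rates would be averaged over replications per condition (scale length, scenario), which is what the decision flowchart summarizes.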


2018, Vol. 22 (1), pp. 21-38
Author(s): Amir Hossein Sarkeshikian, Abdol-Majid Tabatabaee, Maryam Taleb Doaee

This study investigated the psychometric properties of the self-regulating capacity in vocabulary learning scale (SRCvoc; Tseng, Dornyei, & Schmitt, 2006) in the Iranian EFL context. For this purpose, a sample of 1167 high school students completed the Persian SRCvoc in the main phase. The internal consistency reliability of the scale, examined using Cronbach’s alpha, was acceptable in both the piloting and main phases. Exploratory factor analysis (EFA) showed that the SRCvoc is composed of three factors. However, confirmatory factor analysis (CFA) with item-level indicators showed that neither this three-factor model nor Tseng et al.’s (2006) five-factor model fit the data. These findings imply that the item parcels used by Tseng et al. (2006) may have masked the true factor structure of the self-regulating capacity in vocabulary learning scale, which should therefore be re-theorized.
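Item parceling, which the abstract suggests masked the scale’s factor structure, replaces item-level indicators with averages over subsets of items before fitting the CFA. A minimal sketch of how parcels are formed; the item-to-parcel assignment and data below are illustrative:

```python
import numpy as np

def make_parcels(items: np.ndarray, assignment) -> np.ndarray:
    """Average the item columns in each group into one parcel indicator.

    items: (n_respondents, n_items) matrix; assignment: list of column-index
    lists, one per parcel.
    """
    return np.column_stack([items[:, idx].mean(axis=1) for idx in assignment])

# Illustrative: 6 Likert items (1-5) collapsed into 3 two-item parcels
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(100, 6)).astype(float)
parcels = make_parcels(items, [[0, 1], [2, 3], [4, 5]])  # shape (100, 3)
```

Because each parcel averages away item-specific misfit, a model can fit well on parcels yet fail on the raw items, which is the contrast the study exploits.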


2000, Vol. 74 (3), pp. 400-422
Author(s): Bernard T. Leonelli, Chih-Hung Chang, R. Darrell Bock, Stephen G. Schilling

1984, Vol. 47 (1), pp. 105-114
Author(s): James H. Johnson, Cynthia Null, James N. Butcher, Kathy N. Johnson
