The Fear of COVID-19 Scale: A Reliability Generalization Meta-Analysis

Assessment
2021
pp. 107319112199416
Author(s):
Desirée Blázquez-Rincón
Juan I. Durán
Juan Botella

A reliability generalization meta-analysis was carried out to estimate the average reliability of the seven-item, 5-point Likert-type Fear of COVID-19 Scale (FCV-19S), one of the most widely used scales developed during the COVID-19 pandemic. Different reliability coefficients from classical test theory and the Rasch measurement model were meta-analyzed, heterogeneity among the most frequently reported reliability estimates was examined by searching for moderators, and a predictive model for estimating expected reliability was proposed. At least one reliability estimate was available for 44 independent samples from 42 studies, with Cronbach’s alpha the most frequently reported coefficient. Pooled estimates of the coefficients ranged from .85 to .90. The moderator analyses led to a predictive model in which the standard deviation of scores explained 36.7% of the total variability among alpha coefficients. The FCV-19S proved consistently reliable regardless of the moderator variables examined.
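The pooling the abstract describes follows the standard reliability generalization recipe: transform each Cronbach's alpha with Bonett's ln(1 − α) transformation, pool under a random-effects model, and back-transform. A minimal sketch in Python; the alpha values and sample sizes below are hypothetical, with k = 7 for the seven FCV-19S items:

import numpy as np

# Hypothetical per-study data: Cronbach's alpha and sample size n (k = 7 items).
alphas = np.array([0.86, 0.88, 0.82, 0.91, 0.87])
ns = np.array([320, 510, 210, 845, 402])
k = 7

# Bonett (2002) transformation: T = ln(1 - alpha), Var(T) ~= 2k / ((k - 1)(n - 2)).
T = np.log(1 - alphas)
v = 2 * k / ((k - 1) * (ns - 2))

# DerSimonian-Laird random-effects pooling.
w = 1 / v
Q = np.sum(w * (T - np.sum(w * T) / np.sum(w)) ** 2)
df = len(T) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1 / (v + tau2)
T_pooled = np.sum(w_star * T) / np.sum(w_star)

# Back-transform to the alpha metric.
alpha_pooled = 1 - np.exp(T_pooled)
print(round(alpha_pooled, 3))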

2021
pp. 109442812110115
Author(s):
Ze Zhu
Alan J. Tomassetti
Reeshad S. Dalal
Shannon W. Schrader
Kevin Loo
...

Policy capturing is a widely used technique, but the temporal stability of policy-capturing judgments has long been a cause for concern. This article emphasizes the importance of reporting reliability estimates, in particular test-retest reliability estimates, in policy-capturing studies. We found that only 164 of 955 policy-capturing studies (17.17%) reported a test-retest reliability estimate. We then conducted a reliability generalization meta-analysis on the policy-capturing studies that did report test-retest reliability estimates and obtained an average reliability estimate of .78. We additionally examined 16 potential methodological and substantive antecedents to test-retest reliability (equivalent to moderators in validity generalization studies). Test-retest reliability was robust to variation in 14 of the 16 factors examined, but reliability was higher in paper-and-pencil studies than in web-based studies and higher for behavioral intention judgments than for other (e.g., attitudinal and perceptual) judgments. We provide an agenda for future research and close with several best-practice recommendations for researchers (and journal reviewers) with regard to (a) reporting test-retest reliability, (b) designing policy-capturing studies for appropriate reporting, and (c) properly interpreting test-retest reliability in policy-capturing studies.
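In a policy-capturing study, test-retest reliability is typically obtained by re-presenting a subset of scenarios and correlating each judge's two sets of ratings. A minimal sketch of that computation, with simulated judgments standing in for real data, averaging per-person correlations through Fisher's z:

import numpy as np

# Hypothetical judgments: rows = participants, columns = repeated scenarios,
# rated once at Time 1 and again at Time 2.
rng = np.random.default_rng(0)
t1 = rng.normal(size=(50, 10))
t2 = 0.8 * t1 + 0.6 * rng.normal(size=(50, 10))  # correlated retest judgments

# Per-participant test-retest correlation across the repeated scenarios.
r = np.array([np.corrcoef(a, b)[0, 1] for a, b in zip(t1, t2)])

# Average via Fisher's z to avoid the skew of the r metric.
z_mean = np.mean(np.arctanh(r))
print(round(np.tanh(z_mean), 3))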


2016
Vol 40 (5)
pp. 243-244
Author(s):
Skye P. Barbic
Stefan J. Cano

Summary: This commentary argues for the importance of robust, meaningful assessment of clinical and functional outcomes in psychiatry. Outcome assessments should be fit for the purpose of measuring relevant concepts of interest in specific clinical settings. In addition, the measurement model selected to develop and test assessments can be critical for guiding care. Three types of measurement models are presented: classical test theory, item response theory, and Rasch measurement theory. To optimise current diagnostic and treatment practices in psychiatry, careful consideration of these models is warranted.


2021
pp. 153944922110608
Author(s):
Lorrie George-Paschal
Nancy E. Krusen
Chia-Wei Fan

This study evaluated the psychometric properties of the Relative Mastery Scale (RMS). Valid and reliable client-centered instruments support practice in value-based health care and community-based settings. Participants were 368 community-dwelling adults aged 18 to 95 years. Researchers examined the validity and reliability of the RMS using classical test theory and the Rasch measurement model; a partial credit model allowed exploration of individual scale properties. Spearman’s correlation coefficients between items were statistically significant at the .01 level. Cronbach’s alpha was .94, indicating strong internal consistency. In exploratory factor analysis, Factor 1 accounted for 71% of the variance with an eigenvalue of 4.26. In Rasch analysis, the 5-point rating scale functioned adequately, and the analysis confirmed unidimensionality and adequate person/item separation. The RMS demonstrates sound psychometric characteristics; as a valid and reliable measure of internal occupational adaptation, it supports monitoring client progress across a variety of individuals.
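The internal-consistency and factor results reported here follow from standard formulas; a minimal sketch with simulated data (the eigenvalue of 4.26 explaining 71% of variance implies a six-item scale, since 4.26 / 6 = 0.71):

import numpy as np

# Hypothetical respondents-by-items matrix for a six-item scale,
# simulated from a single common factor so the items are correlated.
rng = np.random.default_rng(1)
f = rng.normal(size=(368, 1))
X = 1.5 * f + rng.normal(size=(368, 6))

# Cronbach's alpha: k/(k - 1) * (1 - sum of item variances / variance of total score).
k = X.shape[1]
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Eigenvalues of the inter-item correlation matrix; the largest, divided by k,
# is the proportion of variance attributable to Factor 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
print(round(alpha, 2), round(eigvals[-1] / k, 2))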


2019
Vol 29 (Supplement_4)
Author(s):
H S Finbråten
A L Kleppang
A M Steigen

Abstract

Background: Questionnaires are frequently used in public health research. To provide valid and reliable results that can generate recommendations for practice and policy, scales with sound psychometric properties are required. Approaches based on classical test theory, such as factor analysis, are most frequently used to assess the psychometric properties of scales, but classical test theory may have limitations in confirming the validity of scales. Only Rasch measurement theory meets the requirements of fundamental measurement, such as additivity, invariance, sufficiency, and specific objectivity. The objective is to exemplify how Rasch measurement theory can be used to evaluate the psychometric properties of a scale, using validation of the Hopkins Symptom Checklist-10 as an example.

Methods: This study is based on cross-sectional data from the Youth Data Survey. In total, 6777 adolescents responded to a web-based questionnaire. Data collection was carried out in lower and upper secondary schools in Norway during 2018. The data were analysed using the partial credit parameterization of the unidimensional Rasch model.

Results: Preliminary results indicated that the scale had acceptable reliability (person separation index: 0.82). However, one pair of items showed response dependence, and targeting could have been better (mean person location: -1.445). All items had ordered thresholds. Three items under-discriminated, and several items displayed differential item functioning with regard to gender and school level.

Conclusions: Applying Rasch measurement theory revealed measurement problems that would go undetected using classical test theory approaches. Scales used in public health research should be thoroughly validated with Rasch measurement theory before the data are used to support claims about public health and to provide recommendations for policy and practice.

Key messages: Public health practice and policy should be based on information from valid and reliable scales. Rasch measurement theory should be used to evaluate the psychometric properties of scales used in public health research.
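For reference, the partial credit parameterization referred to above gives the probability that person n with location θ_n responds in category x of polytomous item i with thresholds δ_ik (standard notation, not taken from the study itself):

\[
P(X_{ni} = x) = \frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_{ik})}{\sum_{j=0}^{m_i} \exp \sum_{k=0}^{j} (\theta_n - \delta_{ik})},
\qquad \sum_{k=0}^{0} (\theta_n - \delta_{ik}) \equiv 0,
\]

and the reported person separation index (0.82) is the Rasch analogue of reliability,

\[
\mathrm{PSI} = \frac{\hat\sigma^2_\theta - \overline{\mathrm{SE}^2}}{\hat\sigma^2_\theta},
\]

the proportion of observed person variance \(\hat\sigma^2_\theta\) not attributable to estimation error \(\overline{\mathrm{SE}^2}\).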


Author(s):
Phoom Praraksa
Wanida Simpol

Life and career skills are essential attributes for living in the 21st century because they are important to both learning and working in local and international workplaces. This study aimed to create a measurement model of life and career skills, to develop an online scale, and to investigate the psychometric properties of that scale. The participants were 646 primary students from the Northern, Central, Southern, and Northeastern regions of Thailand. Classical test theory, multidimensional item response theory, and confirmatory factor analysis (CFA) were used for data analysis. The analysis yielded item difficulty indices, discrimination power indices, and the reliability of the scale. In addition, the multidimensional analysis and the CFA showed item fit and supported the construct validity of the scale. This may lead to the development of a clear and correct measure of the structure of students’ life and career skills. Policy implications are discussed. Keywords: life and career skills, tentative model, online scale, construct validity.
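The classical test theory indices mentioned, difficulty and discrimination, reduce to simple computations on the scored response matrix. A minimal sketch with simulated dichotomous data:

import numpy as np

# Hypothetical dichotomously scored responses (rows = students, columns = items),
# simulated from a one-parameter logistic model so the indices behave realistically.
rng = np.random.default_rng(2)
ability = rng.normal(size=(646, 1))
item_loc = rng.normal(size=20)
p = 1 / (1 + np.exp(-(ability - item_loc)))
X = (rng.random((646, 20)) < p).astype(float)

# Difficulty index: proportion answering each item correctly (higher = easier).
difficulty = X.mean(axis=0)

# Discrimination power: corrected item-total correlation (each item against the
# total of the remaining items, so an item is not correlated with itself).
total = X.sum(axis=1)
discrimination = np.array(
    [np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])]
)
print(difficulty.round(2), discrimination.round(2))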


2016
Vol 76 (6)
pp. 976-985
Author(s):
Leanne M. Stanley
Michael C. Edwards

The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the reliability of scores and the fit of a corresponding measurement model to be either acceptable or unacceptable for a given situation, but these are not the only possible outcomes. This article focuses on situations in which model fit is deemed acceptable, but reliability is not. Data were simulated based on the item characteristics of the PROMIS (Patient Reported Outcomes Measurement Information System) anxiety item bank and analyzed using methods from classical test theory, factor analysis, and item response theory. Analytic techniques from different psychometric traditions were used to illustrate that reliability and model fit are distinct, and that disagreement among indices of reliability and model fit may provide important information bearing on a particular validity argument, independent of the data analytic techniques chosen for a particular research application. We conclude by discussing the important information gleaned from the assessment of reliability and model fit.
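The scenario the authors isolate, acceptable fit with unacceptable reliability, is easy to reproduce: generate data exactly from a one-factor model (so the model is correctly specified) but with weak loadings (so scores carry little true-score variance). A minimal sketch with hypothetical loadings of .3:

import numpy as np

# Data generated exactly from a one-factor model, so a one-factor model is
# correctly specified (fit would be acceptable), yet the loadings are weak,
# so the resulting scores are unreliable. The loading value is hypothetical.
rng = np.random.default_rng(3)
n, k, lam = 5000, 8, 0.3
f = rng.normal(size=(n, 1))
X = lam * f + np.sqrt(1 - lam ** 2) * rng.normal(size=(n, k))

# Cronbach's alpha from the simulated scores.
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Coefficient omega implied by the model:
# (sum of loadings)^2 / ((sum of loadings)^2 + sum of unique variances).
omega = (k * lam) ** 2 / ((k * lam) ** 2 + k * (1 - lam ** 2))
print(round(alpha, 2), round(omega, 2))  # both ~0.44: poor reliability, correct structure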


2020
Author(s):
Stephen Ross Martin
Philippe Rast

Reliability is a crucial concept in psychometrics. Although it is typically estimated as a single fixed quantity, previous work suggests that reliability can vary across persons, groups, and covariates. We propose a novel method for estimating and modeling case-specific reliability without repeated measurements or parallel tests. The proposed method employs a “Reliability Factor” that models the error variance of each case across multiple indicators, thereby producing case-specific reliability estimates. Additionally, we use Gaussian process modeling to estimate a non-linear, non-monotonic function between the latent factor itself and the reliability of the measure, providing an analogue to test information functions in item response theory. The reliability factor model is a new tool for examining latent regions with poor conditional reliability, and correlates thereof, within a classical test theory framework.
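Although the authors estimate their reliability factor jointly within a latent variable model, the quantity it delivers can be sketched directly: a case-specific reliability is true-score variance over true-score variance plus that case's own error variance. The smooth error-variance function below is purely hypothetical, standing in for the relation their Gaussian process would estimate from data:

import numpy as np

# Case-specific reliability: true-score variance over (true-score variance
# plus that case's error variance). The log error variance is a hypothetical
# smooth function of the latent score theta.
rng = np.random.default_rng(4)
theta = rng.normal(size=1000)                 # latent factor scores
log_err_var = -0.5 + 0.8 * np.tanh(theta)     # hypothetical smooth function
err_var = np.exp(log_err_var)

rel = theta.var(ddof=1) / (theta.var(ddof=1) + err_var)
print(round(rel.min(), 2), round(rel.max(), 2))  # reliability varies by case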


1977
Vol 40 (2)
pp. 383-386
Author(s):
Donal E. Muir

Examination of the methodological literature of the behavioral and social sciences indicates that measurement terms are used differently than in the natural sciences. The rationale for these departures is usually ascribed to classical test theory, a measurement model claimed to be more applicable to psychological and social data than the traditional measurement model of the natural sciences, which requires the development of standard instruments defining, by consensus, parametric values. Classical test theory seemingly avoids this necessity, but only by inviting validation by fiat, resulting in instrument evaluations which are trivial, misleading, or invalid. The development of measurement in the behavioral and social sciences might be encouraged by the abandonment of classical test theory and a return to natural-science measurement theory.

