Evaluating the Internal Consistency of Subtraction-Based and Residualized Difference Scores: Considerations for Studies of Event-Related Potentials

2020 ◽  
Author(s):  
Peter E Clayson ◽  
Scott Baldwin ◽  
Michael J. Larson

In studies of event-related brain potentials (ERPs), difference scores between conditions in a task are frequently used to isolate neural activity for use as a dependent or independent variable. Adequate score reliability is a prerequisite for studies examining relationships between ERPs and external correlates, but there is a widely held view that difference scores are inherently unreliable and unsuitable for studies of individual differences. This view fails to consider the nuances of difference score reliability that are relevant to ERP research. In the present study, we provide formulas from classical test theory and generalizability theory for estimating the internal consistency of subtraction-based and residualized difference scores. These formulas are then applied to error-related negativity (ERN) and reward positivity (RewP) difference scores from the same sample of 117 participants. Analyses demonstrate that ERN difference scores can be reliable, which supports their use in studies of individual differences. However, RewP difference scores yielded poor reliability due to the high correlation between the constituent reward and non-reward ERPs. Findings emphasize that difference score reliability largely depends on the internal consistency of constituent scores and the correlation between those scores. Furthermore, generalizability theory estimates yielded higher internal consistency estimates for subtraction-based difference scores than classical test theory estimates did. Despite some beliefs that difference scores are inherently unreliable, ERP difference scores can show adequate reliability and be useful for isolating neural activity in studies of individual differences.
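The dependence the abstract describes — difference-score reliability driven by the constituent reliabilities and the correlation between the constituent scores — can be sketched with the standard classical-test-theory formula for the reliability of a difference D = X − Y. This is the textbook CTT expression, not the study's full set of formulas (which also include generalizability-theory estimators), and the example values are illustrative, not the study's ERN/RewP results:

```python
def diff_score_reliability(r_xx, r_yy, r_xy, sd_x, sd_y):
    """Classical-test-theory internal consistency of a subtraction-based
    difference score D = X - Y, given the reliabilities of the constituent
    scores (r_xx, r_yy), their correlation (r_xy), and their SDs."""
    num = sd_x**2 * r_xx + sd_y**2 * r_yy - 2 * r_xy * sd_x * sd_y
    den = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
    return num / den

# With equally reliable constituents (0.90) and equal SDs, a moderate
# correlation leaves the difference score reliable:
print(diff_score_reliability(0.90, 0.90, 0.50, 1.0, 1.0))  # 0.80

# A high constituent correlation (as the abstract reports for the RewP)
# sharply reduces difference-score reliability:
print(diff_score_reliability(0.90, 0.90, 0.85, 1.0, 1.0))  # ~0.33
```

The second call illustrates the abstract's RewP finding: even highly reliable constituent scores yield a poorly reliable difference when they are strongly correlated, because the subtraction removes most of the true-score variance.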

Author(s):  
Susanne Hempel

This chapter discusses reliability. It outlines the nature and purpose of reliability, classical test theory, and measures of reliability (measure-orientated reliability, parallel tests, and test-retest), as well as internal consistency, inter-item correlation, coefficient alpha, and categorical judgements.
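Of the measures the chapter lists, coefficient alpha is the most commonly computed in practice. A minimal sketch of the standard formula, α = k/(k−1) · (1 − Σ item variances / total-score variance), is below; the function name and data are illustrative, not taken from the chapter:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_persons x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Two perfectly parallel items give alpha = 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```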


2020 ◽  
Author(s):  
Pingguang Lei ◽  
Zheng Yang ◽  
Wei Li ◽  
Jingqing Ou ◽  
Yingli Cun ◽  
...  

Abstract Background Quality of life (QOL) is now a worldwide concern in clinical oncology, and the hepatobiliary-specific instrument FACT-Hep (Functional Assessment of Cancer Therapy - Hepatobiliary questionnaire) is widely used in English-speaking countries. However, specific instruments for hepatocellular carcinoma patients in China are scarce, and no formal validation of the Simplified Chinese version of the FACT-Hep had been carried out. This study aimed to validate the Chinese FACT-Hep using a combination of classical test theory and generalizability theory. Methods The Chinese version of the FACT-Hep and the QLICP-LI were administered three times, before and after treatment, to a sample of 114 in-patients with hepatocellular carcinoma. The scale was evaluated with validity and reliability indicators including Cronbach's α, Pearson's r, the intra-class correlation coefficient (ICC), and the standardized response mean (SRM). Generalizability theory (G theory) was also applied to assess the dependability of the measurements and to estimate multiple sources of variance. Results Internal consistency (Cronbach's α) coefficients were greater than 0.70 for all domains, and test-retest reliability coefficients for all domains and the overall scale were greater than 0.80 (except emotional well-being, 0.74), ranging from 0.81 to 0.96. G-coefficients and Φ-coefficients, with exact variance components, further confirmed the reliability of the scale. The PWB and FWB domains and the overall scale changed significantly after treatment, with SRMs ranging from 0.40 to 0.69. Conclusions The Chinese version of the FACT-Hep has good validity, reliability, and responsiveness, and can be used to measure QOL in patients with hepatocellular carcinoma in China.
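The responsiveness statistic reported above, the standardized response mean, is simply the mean change score divided by the standard deviation of the change scores. A minimal sketch follows; the pre/post values are invented for illustration and are not the study's data:

```python
import numpy as np

def standardized_response_mean(pre, post):
    """SRM: mean of the change scores divided by their SD."""
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return change.mean() / change.std(ddof=1)

# Hypothetical pre/post scores for three patients:
print(standardized_response_mean([1, 2, 3], [2, 4, 6]))  # 2.0
```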


2017 ◽  
Vol 2 (1) ◽  
pp. 34
Author(s):  
Rida Sarwiningsih

This research compares internal-consistency reliability coefficients under classical test theory. The estimation accuracy of internal-consistency reliability coefficients was examined using several reliability formulations: the split-half method, the Cronbach alpha formula, and the Kuder-Richardson formulas. Test reliability coefficients were computed with each formula and their estimation accuracies compared. This is a quantitative descriptive study. Data were responses to the national chemistry examination in Jambi province for the 2014/2015 academic year. Student answer sheets were sampled using a proportional stratified random sampling technique, yielding 200 students' responses from 162 schools (132 public schools and 30 private schools) in Jambi province. The data were dichotomous and were analyzed using the split-half method, with reliabilities estimated by the Cronbach alpha and Kuder-Richardson formulas. Five reliability criteria were used: 0.5, 0.6, 0.7, 0.8, and 0.9. The results indicated that (a) the classical-test-theory reliability coefficients developed by measurement experts (the split-half method, the Cronbach alpha formula, and the Kuder-Richardson formulas) vary in estimation accuracy; (b) the average reliability coefficients had estimation precision from about 0.78 up to 0.8; (c) the reliability coefficient was 0.78 with the Spearman-Brown formula, 0.78 with the Rulon formula, 0.77 with the Flanagan formula, 0.838 with the Cronbach alpha formula, 0.838 with the KR-20 formula, and 0.821 with the KR-21 formula.
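Two of the formulas compared above can be sketched directly: the Spearman-Brown correction, which projects full-test reliability from a split-half correlation, and KR-20 for dichotomous (0/1) data. The functions and the toy response matrix below are illustrative, not the Jambi examination data:

```python
import numpy as np

def spearman_brown(r_half):
    """Full-test reliability from the correlation between two half-tests."""
    return 2 * r_half / (1 + r_half)

def kr20(items):
    """Kuder-Richardson 20 for an (n_persons x k_items) 0/1 score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - (p * q).sum() / total_var)

print(spearman_brown(0.5))                        # ~0.667
print(kr20([[1, 1], [0, 0], [1, 0], [0, 1]]))     # 0.5
```

KR-20 is algebraically coefficient alpha with the item variances replaced by p·q, which is why the study reports identical values (0.838) for the two formulas.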


2001 ◽  
Vol 89 (2) ◽  
pp. 291-307 ◽  
Author(s):  
Gilbert Becker

Violation of either of two basic assumptions in classical test theory may lead to biased estimates of reliability. Violation of the assumption of essential tau-equivalence may produce underestimates, and the presence of correlated errors among measurement units may result in overestimates. The ubiquity of circumstances in which this problem may occur is not fully appreciated by many researchers. This article surveys a variety of settings in which biased reliability estimates may be found, in an effort to increase awareness of the prevalence of the problem.


2014 ◽  
Vol 35 (4) ◽  
pp. 250-261 ◽  
Author(s):  
Matthias Ziegler ◽  
Arthur Poropat ◽  
Julija Mell

Short personality questionnaires are increasingly used in research and practice, with some scales including as few as two to five items per personality domain. Despite the frequency of their use, these short scales are often criticized for their reduced internal consistencies and their purported failure to capture the breadth of broad constructs such as the Big Five personality factors. One reason for this may be the use of test-construction principles rooted in Classical Test Theory. In this study, Generalizability Theory is used to compare the psychometric properties of different scales based on the NEO-PI-R and BFI, two widely used personality questionnaire families. Applying both Classical Test Theory (CTT) and Generalizability Theory (GT) made it possible to identify the inner workings of test shortening. CTT-based analyses indicated that longer is generally better for reliability, whereas GT differentiated between reliability for relative and absolute decisions and revealed how different variance sources affect test-score reliability estimates. These variance sources differed with scale length, and only GT allowed a clear description of these internal consequences, permitting more effective identification of the advantages and disadvantages of shorter and longer scales. Most importantly, the findings highlight the potential error-proneness of focusing solely on reliability and scale length in test construction. Practical as well as theoretical consequences are discussed.
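The GT distinction drawn above between relative and absolute decisions can be sketched for the simplest case, a one-facet persons-by-items design: the generalizability coefficient (relative) ignores the item main effect, while the dependability coefficient Φ (absolute) includes it. The study's designs involve more facets; the variance components below are invented for illustration:

```python
def g_and_phi(var_p, var_i, var_pi, n_items):
    """Generalizability (relative) and dependability (absolute) coefficients
    for a one-facet p x i design, given estimated variance components for
    persons (var_p), items (var_i), and the person-by-item interaction
    confounded with error (var_pi), at a test length of n_items."""
    g = var_p / (var_p + var_pi / n_items)
    phi = var_p / (var_p + (var_i + var_pi) / n_items)
    return g, phi

# Hypothetical variance components at two test lengths, showing how
# shortening a scale lowers both coefficients:
print(g_and_phi(1.0, 0.5, 2.0, 10))  # (~0.833, 0.8)
print(g_and_phi(1.0, 0.5, 2.0, 2))   # (0.5, ~0.444)
```

Because the item variance component enters only Φ, the two coefficients diverge whenever items differ in difficulty, which is the kind of internal consequence of test shortening that CTT reliability alone cannot show.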

