A test-retest reliability analysis of diffusion measures of white matter tracts relevant for cognitive control

2016 ◽ Vol 54 (1) ◽ pp. 24-33 ◽ Author(s): W. Boekel, B. U. Forstmann, M. C. Keuken

2010 ◽ Vol 106 (3) ◽ pp. 870-874 ◽ Author(s): Paul B. Harris, John M. Houston

This study examined the reliability of the Revised Competitiveness Index by investigating the test-retest reliability, interitem reliability, and factor structure of the measure based on a sample of 280 undergraduates (200 women, 80 men) ranging in age from 18 to 28 years (M = 20.1, SD = 2.1). The findings indicate that the Revised Competitiveness Index has high test-retest reliability, high interitem reliability, and a stable factor structure. The results support the assertion that the Revised Competitiveness Index assesses competitiveness as a stable trait rather than a dynamic state.


2019 ◽ Vol 12 ◽ Author(s): Mariem Boukadi, Karine Marcotte, Christophe Bedetti, Jean-Christophe Houde, Alex Desautels, ...

PLoS ONE ◽ 2013 ◽ Vol 8 (12) ◽ pp. e81410 ◽ Author(s): Maren Strenziok, Pamela M. Greenwood, Sophia A. Santa Cruz, James C. Thompson, Raja Parasuraman

2020 ◽ Author(s): Peter E. Clayson, Kaylie Amanda Carbine, Scott Baldwin, Joseph A. Olsen, Michael J. Larson

The reliability of event-related brain potential (ERP) scores depends on study context and how those scores will be used, and reliability must be routinely evaluated. Many factors can influence ERP score reliability, and generalizability (G) theory provides a multifaceted approach to estimating the internal consistency and temporal stability of scores that is well suited for ERPs. G-theory’s approach possesses a number of advantages over classical test theory that make it ideal for pinpointing sources of error in scores. The current primer outlines the G-theory approach to estimating internal consistency (coefficients of equivalence) and test-retest reliability (coefficients of stability). This approach is used to evaluate the reliability of ERP measurements. The primer outlines how to estimate reliability coefficients that consider the impact of the number of trials, events, occasions, and groups. The uses of two different G-theory reliability coefficients (i.e., generalizability and dependability) in ERP research are elaborated, and a dataset from the companion manuscript, which examines N2 amplitudes to Go/NoGo stimuli, is used as an example of the application of these coefficients to ERPs. The developed algorithms are implemented in the ERP Reliability Analysis (ERA) Toolbox, which is open-source software designed for estimating score reliability using G theory. The toolbox facilitates the application of G theory in an effort to simplify the study-by-study evaluation of ERP score reliability. The formulas provided in this primer should enable researchers to pinpoint the sources of measurement error in ERP scores from multiple recording sessions and subsequently plan studies that optimize score reliability.
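The ERA Toolbox itself is MATLAB software; purely as an illustration of the two coefficients the abstract distinguishes, here is a minimal Python sketch for a single-facet (person × trial) design. The variance components are hypothetical, not taken from the companion dataset:

```python
def g_coefficients(var_person, var_trial, var_residual, n_trials):
    """Generalizability (relative-error) and dependability (absolute-error)
    coefficients for a single-facet person x trial G-study design."""
    # Relative error: only person-by-trial interaction/residual variance counts
    relative_error = var_residual / n_trials
    # Absolute error: the trial main effect also contributes
    absolute_error = (var_trial + var_residual) / n_trials
    g = var_person / (var_person + relative_error)   # generalizability coefficient
    d = var_person / (var_person + absolute_error)   # dependability coefficient
    return g, d

# Hypothetical variance components and trial count
g, d = g_coefficients(var_person=4.0, var_trial=1.0, var_residual=2.0, n_trials=8)
print(f"generalizability = {g:.3f}, dependability = {d:.3f}")
```

Because dependability charges the trial main effect to the error term while generalizability does not, dependability can never exceed generalizability; increasing the number of trials shrinks both error terms, which is why trial counts figure so prominently in ERP reliability planning.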


PLoS ONE ◽ 2015 ◽ Vol 10 (5) ◽ pp. e0126326 ◽ Author(s): Henry W. Chase, Jay C. Fournier, Tsafrir Greenberg, Jorge R. Almeida, Richelle Stiffler, ...
