A Confidence Interval Approach for Variance Component Estimates in the Context of Generalizability Theory

1982 ◽  
Vol 42 (2) ◽  
pp. 459-466 ◽  
Author(s):  
Philip L. Smith

1978 ◽  
Vol 3 (4) ◽  
pp. 319-346 ◽  
Author(s):  
Philip L. Smith

The paper describes the small sample stability of least squares estimates of variance components within the context of generalizability theory. Monte Carlo methods are used to generate data conforming to selected multifacet generalizability designs and to illustrate the sampling behavior of the variance component estimates. Based on the findings, recommendations are made for designing efficient small sample generalizability studies.
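A minimal sketch of this kind of Monte Carlo check, assuming a fully crossed one-facet person × item design with illustrative true variance components (the design sizes and values are not taken from the paper) and the standard ANOVA (least squares) estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 10, 5                        # persons, items (small-sample G study; assumed sizes)
var_p, var_i, var_res = 4.0, 1.0, 2.0   # assumed true variance components

def estimate_components(x):
    """ANOVA (least squares) variance component estimates for a crossed p x i design."""
    grand = x.mean()
    ms_p = n_i * ((x.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    ms_i = n_p * ((x.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
    return ((ms_p - ms_res) / n_i, (ms_i - ms_res) / n_p, ms_res)

# Monte Carlo: sampling behavior of the estimates across simulated G studies
est = []
for _ in range(2000):
    x = (rng.normal(0, var_p ** 0.5, (n_p, 1))
         + rng.normal(0, var_i ** 0.5, (1, n_i))
         + rng.normal(0, var_res ** 0.5, (n_p, n_i)))
    est.append(estimate_components(x))
est = np.array(est)
print("mean estimates:", est.mean(axis=0))   # roughly (4, 1, 2) on average
print("std deviations:", est.std(axis=0))    # instability of small-sample estimates
```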


2018 ◽  
Author(s):  
Joel Eduardo Martinez ◽  
Friederike Funk ◽  
Alexander Todorov

A fundamental psychological problem is identifying the idiosyncratic and shared contributions to stimulus evaluation. However, there is no established method for estimating these contributions, and the existing methods have led to divergent estimates. Moreover, in many studies participants rate the stimuli only once, although at least two measurements are required to estimate idiosyncratic contributions. Here, participants rated faces or novel objects on four dimensions (beautiful, approachable, likeable, dangerous) for a total of ten blocks to better estimate the preferences of individual raters. First, we show that both intra-rater and inter-rater agreement (measures related to idiosyncratic and shared contributions, respectively) increase with repeated measures. Second, to identify best practices, we compared estimates from correlation indices and variance component approaches with respect to stimulus generality, evaluation generality, data preprocessing steps, and sensitivity to measurement error (a largely ignored issue). The correlation indices changed monotonically and nonlinearly as more repeated measures were added. Variance component analyses showed large variability in estimates based on only two repeated measures, but the estimates stabilized with more measures. While there was general agreement among approaches, the correlation approach was problematic for certain stimulus types and evaluation dimensions. Our results suggest that variance component estimates are more reliable provided one collects more than two repeated measures, which is not the current norm in psychological research, and that they can be implemented using mixed models with crossed random effects. Recommendations for analysis and interpretation are provided.
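As a sketch of the mixed-model route the abstract mentions (not the authors' own analysis code), the example below simulates repeated ratings and fits crossed random effects for raters and stimuli with statsmodels' MixedLM, using a single dummy group plus a variance-components formula; all names, sizes, and variances are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate repeated ratings (hypothetical sizes and variances, not the paper's data)
rng = np.random.default_rng(0)
n_raters, n_stimuli, n_blocks = 12, 10, 3
rater_eff = rng.normal(0.0, 0.6, n_raters)   # idiosyncratic rater shifts
stim_eff = rng.normal(0.0, 1.0, n_stimuli)   # shared stimulus effects
rows = []
for r in range(n_raters):
    for s in range(n_stimuli):
        for b in range(n_blocks):
            rows.append(dict(rater=r, stim=s, block=b,
                             rating=rater_eff[r] + stim_eff[s] + rng.normal(0.0, 0.8)))
df = pd.DataFrame(rows)

# Crossed random effects: one dummy group, raters and stimuli as variance components
df["one"] = 1
model = smf.mixedlm("rating ~ 1", df, groups="one",
                    vc_formula={"rater": "0 + C(rater)", "stim": "0 + C(stim)"})
fit = model.fit()
print(fit.vcomp)   # estimated rater and stimulus variance components
print(fit.scale)   # residual (error) variance
```

With more repeated blocks per rater, the rater and stimulus components are estimated from more replicates, which is what drives the stabilization the abstract reports.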


2015 ◽  
Vol 93 (11) ◽  
pp. 5153-5163 ◽  
Author(s):  
A. M. Putz ◽  
F. Tiezzi ◽  
C. Maltecca ◽  
K. A. Gray ◽  
M. T. Knauer

1988 ◽  
Vol 10 (3) ◽  
pp. 144-146 ◽  
Author(s):  
K. F. Yee

A statistically significant difference in mean values between two laboratory quantitation methods is interpreted as a bias. Sometimes such a difference is so small that it is of no practical concern. An alternative approach is to test statistically whether the two methods are close enough, rather than testing for equality. This amounts to examining the confidence interval of the mean difference between methods and does not require any additional statistical tests.
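A minimal sketch of this confidence-interval check, assuming paired measurements and an equivalence margin that are both made up for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements of the same samples by two quantitation methods
method_a = np.array([10.2, 11.5, 9.8, 10.9, 11.1, 10.4, 10.7, 11.3])
method_b = np.array([10.4, 11.4, 10.1, 11.0, 11.3, 10.3, 10.9, 11.6])
delta = 0.5   # assumed margin of practical equivalence

d = method_a - method_b
mean_d = d.mean()
se = d.std(ddof=1) / np.sqrt(len(d))
ci = stats.t.interval(0.95, df=len(d) - 1, loc=mean_d, scale=se)

print(f"mean difference = {mean_d:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
# The methods are 'close enough' if the entire CI lies within (-delta, +delta)
print("equivalent within margin:", -delta < ci[0] and ci[1] < delta)
```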

