A Nonparametric Testing Procedure for Missing Data

Compstat, 1994, pp. 503-508
Author(s): Anna Giraldo, Andrea Pallini, Fortunato Pesarin

Biometrics, 2020, Vol. 76 (4), pp. 1133-1146
Author(s): Luz Adriana Pereira, Daniel Taylor‐Rodríguez, Luis Gutiérrez

2021
Author(s): Jessica C. Lee, Mike Le Pelley, Peter Lovibond

Learning of cue-outcome relationships in associative learning experiments is often assessed by presenting cues without feedback about the outcome and instructing participants not to expect any outcomes to occur. The rationale is that this “no-feedback” testing procedure prevents new learning during testing that might contaminate later test trials. We tested this assumption in three predictive learning experiments in which participants had to learn which foods (cues) were causing allergic reactions (the outcome) in a fictitious patient. We found that withholding feedback in a block of trials (Experiment 1) and in individual trials (Experiment 2) had no effect on causal ratings, but that it led to regression towards intermediate ratings under conditions that encouraged participants to suspect that the missing feedback was systematically biased (Experiment 3). We conclude that the procedure of testing without feedback, used widely in studies of human cognition, is an appropriate way of assessing learning, as long as the missing data are attributed to the experimenter rather than to a factor within the causal scenario.
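The rationale for withholding feedback can be made concrete with a toy associative-learning update. The sketch below is a minimal Rescorla–Wagner-style simulation written for illustration only (the learning rate, trained associative strength, and number of test trials are assumed values, not taken from the experiments): if a feedback-free test trial produces no update, the trained association is left intact, whereas treating the same trial as an explicit "no outcome" trial extinguishes the association over the test block.

    # Minimal Rescorla–Wagner sketch (illustrative assumptions only): compare a
    # no-feedback test block, where no update occurs, with a block in which the
    # absent outcome is coded as lambda = 0 and the association extinguishes.
    LEARNING_RATE = 0.3          # assumed alpha * beta
    N_TEST_TRIALS = 8            # assumed length of the test block

    def rw_update(v, lam, rate=LEARNING_RATE):
        """One Rescorla–Wagner step: V <- V + rate * (lambda - V)."""
        return v + rate * (lam - v)

    v_no_feedback = 0.8          # associative strength after training (assumed)
    v_outcome_absent = 0.8

    for _ in range(N_TEST_TRIALS):
        # No-feedback test: the learner makes no update, so v_no_feedback is unchanged.
        # Outcome-absent reading: lambda = 0 on every test trial, driving V toward 0.
        v_outcome_absent = rw_update(v_outcome_absent, lam=0.0)

    print(f"no-feedback test: V = {v_no_feedback:.2f}")
    print(f"outcome coded as absent: V = {v_outcome_absent:.2f}")

On these assumed values the untouched association stays at 0.80 while the extinguished one falls to roughly 0.05; that decay is exactly the kind of new learning during testing that the no-feedback procedure is intended to avoid.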


2002, Vol. 18 (1), pp. 78-84
Author(s): Eva Ullstadius, Jan-Eric Gustafsson, Berit Carlstedt

Summary: Vocabulary tests, part of most test batteries of general intellectual ability, measure both verbal and general ability. Newly developed techniques for confirmatory factor analysis of dichotomous variables make it possible to analyze the influence of different abilities on performance on each item. In the testing procedure of the Computerized Swedish Enlistment test battery, eight different subtests of a new vocabulary test were given randomly to subsamples of a representative sample of 18-year-old male conscripts (N = 9001). Three central dimensions of a hierarchical model of intellectual abilities, general (G), verbal (Gc'), and spatial (Gv') ability, were estimated under different assumptions about the nature of the data. In addition to an ordinary analysis of covariance matrices, which assumes linear relations, the item variables were treated as categorical variables in the Mplus program. All eight subtests fit the hierarchical model, and the items were found to load about equally on G and Gc'. The results also indicate that if nonlinearity is not taken into account, the G loadings of the easy items are underestimated. These items, moreover, appear to be better measures of G than the difficult ones. The practical utility of the outcome for item selection and the theoretical implications for the question of the origin of verbal ability are discussed.
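The attenuation described above can be reproduced with a small simulation. The sketch below is not the authors' Mplus analysis; it is an illustrative single-factor example with assumed values (true loading 0.6, three item difficulties) showing that a linear (Pearson) treatment of 0/1 item scores understates the loading of easy items, while a categorical treatment, here a biserial correction under a normal-threshold model, recovers it.

    # Illustrative simulation (assumed values, single factor standing in for G):
    # linear vs. categorical treatment of dichotomous item responses.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 200_000                 # simulated examinees
    loading = 0.6               # assumed true loading of the item propensity on the factor

    theta = rng.standard_normal(n)                   # latent ability
    for p_target in (0.50, 0.80, 0.95):              # medium, easy, very easy items
        propensity = loading * theta + np.sqrt(1 - loading**2) * rng.standard_normal(n)
        tau = norm.ppf(1 - p_target)                 # threshold giving the target difficulty
        item = (propensity > tau).astype(float)      # observed 0/1 response

        r_linear = np.corrcoef(item, theta)[0, 1]    # Pearson (point-biserial) "loading"
        p = item.mean()
        r_categorical = r_linear * np.sqrt(p * (1 - p)) / norm.pdf(norm.ppf(p))

        print(f"P(correct)={p_target:.2f}  linear={r_linear:.3f}  "
              f"categorical={r_categorical:.3f}  true={loading:.1f}")

Under these assumptions the linear estimate for the easiest item falls to roughly 0.28 while the threshold-corrected estimate stays near 0.6, mirroring the summary's point that ignoring nonlinearity underestimates the G loadings of easy items.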


1979, Vol. 24 (8), pp. 670-670
Author(s): Franz R. Epting, Alvin W. Landfield

1979, Vol. 24 (12), pp. 1058-1058
Author(s): Al Landfield, Franz Epting

2013
Author(s): Samantha Minski, Kristen Medina, Danielle Lespinasse, Stacey Maurer, Manal Alabduljabbar, ...
