Meta-Analytic Use of Balanced Identity Theory to Validate the Implicit Association Test

2020, pp. 014616722091663
Author(s): Dario Cvencek, Andrew N. Meltzoff, Craig D. Maddox, Brian A. Nosek, Laurie A. Rudman, ...

This meta-analysis tested theoretical predictions from balanced identity theory (BIT) and evaluated the validity of the zero points of the Implicit Association Test (IAT) and the self-report measures used to test these predictions. Twenty-one researchers contributed individual-subject data from 36 experiments (total N = 12,773) that used both explicit and implicit measures of the social–cognitive constructs. The meta-analysis confirmed predictions of BIT’s balance–congruity principle and simultaneously validated interpretation of the IAT’s zero point as indicating the absence of preference between two attitude objects. The statistical power afforded by the sample size enabled the first confirmations of balance–congruity predictions with self-report measures. Beyond these empirical results, the meta-analysis introduced a within-study statistical test of the balance–congruity principle and found it more efficient than the previous best method. The meta-analysis’s full data set has been publicly archived to enable further studies of interrelations among attitudes, stereotypes, and identities.
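The balance–congruity principle predicts that the strength of each association in a self–group–attribute triad tracks the product of the other two. A minimal sketch in the spirit of product-term regression tests of this prediction (the style of earlier tests, not the new within-study test the meta-analysis introduces), using simulated data and hypothetical variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized measures for n participants: group identity,
# group attitude, and a self-association criterion constructed so that
# balance-congruity holds (criterion tracks the product, plus noise).
identity = rng.standard_normal(n)
attitude = rng.standard_normal(n)
self_assoc = identity * attitude + 0.5 * rng.standard_normal(n)

# Step 1: regress the criterion on the product term alone.
X1 = np.column_stack([np.ones(n), identity * attitude])
b1, *_ = np.linalg.lstsq(X1, self_assoc, rcond=None)

# Step 2: add the two individual predictors; under pure balance-congruity
# they should contribute little beyond the product term.
X2 = np.column_stack([X1, identity, attitude])
b2, *_ = np.linalg.lstsq(X2, self_assoc, rcond=None)

# For data generated this way, the product coefficient b1[1] should be
# near 1 while the added individual coefficients stay near 0.
```

The test supports the principle when the product term carries the prediction and the individual terms add essentially nothing.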

2016
Author(s): Brian A. Nosek, Frederick L. Smyth

Recent theoretical and methodological innovations suggest a distinction between automatic and controlled evaluative processes. We report a construct-validation investigation of the Implicit Association Test (IAT) as a measure of attitudes. In Study 1, using a composite of 57 unique studies (total N = 13,165), correlated two-factor (implicit and explicit attitude) structural models fit the data better than single-factor (attitude) models for each of 57 different domains (e.g., cats–dogs). In Study 2, we distinguished attitude and method factors with a multitrait-multimethod design: N = 287 participants completed both self-report measures and IATs for up to seven attitude domains. With systematic method variance accounted for, a correlated two-factor-per-attitude-contrast model was again superior to a single-factor-per-attitude specification. We conclude that these implicit and explicit measures assess related but distinct attitude constructs.


2008, Vol 24 (4), pp. 226-236
Author(s): Brian A. Nosek, Jeffrey J. Hansen

In an effort to remove a presumed confound of extrapersonal associations, Olson and Fazio (2004) introduced procedural modifications to attitude versions of the Implicit Association Test (IAT). We hypothesized that the procedural changes increased the likelihood that participants would explicitly evaluate the target concepts (e.g., rating Black and White faces as liked or disliked). Results of a mega-study covering 58 topics and six additional studies (total N = 15,667) suggest that: (a) after personalizing, participants are more likely to explicitly evaluate target concepts instead of categorizing them according to the performance rules; (b) this effect appears to account for the personalized IAT’s enhanced correlations with self-report; (c) personalizing does not alter the relationship between the IAT and cultural knowledge; and (d) personalized and original procedures each capture unique attitude variation. These results provide an alternative interpretation of the impact of personalizing the IAT. Additional innovation may determine whether personalizing implicit cognition is viable.


2005, Vol 31 (10), pp. 1369-1385
Author(s): Wilhelm Hofmann, Bertram Gawronski, Tobias Gschwendner, Huy Le, Manfred Schmitt

2009, Vol 97 (1), pp. 17-41
Author(s): Anthony G. Greenwald, T. Andrew Poehlman, Eric Luis Uhlmann, Mahzarin R. Banaji

2020, pp. 174569161989796
Author(s): Michelangelo Vianello, Yoav Bar-Anan

In this commentary, we welcome Schimmack’s reanalysis of Bar-Anan and Vianello’s multitrait-multimethod (MTMM) data set, and we highlight some limitations of both the original and the secondary analyses. We note that when testing the fit of a confirmatory model to a data set, theoretical justification for which measures to include in the model and how to construct it improves the informational value of the results. We show that making different, theory-driven specification choices leads to different results and conclusions than those reported by Schimmack (this issue, p. ♦♦♦). Therefore, Schimmack’s reanalyses of our data are insufficient to cast doubt on the Implicit Association Test (IAT) as a measure of automatic judgment. We note other reasons why the validation of the IAT is still incomplete but conclude that, currently, the IAT is the best available candidate for measuring automatic judgment at the person level.


2016
Author(s): Anthony G. Greenwald, Brian A. Nosek, Mahzarin R. Banaji

In reporting Implicit Association Test (IAT) results, researchers have most often used scoring conventions described in the first publication of the IAT (A. G. Greenwald, D. E. McGhee, & J. L. K. Schwartz, 1998). Demonstration IATs available on the Internet have produced large data sets that were used here to evaluate alternative scoring procedures. Candidate new algorithms were examined in terms of their (a) correlations with parallel self- report measures, (b) resistance to an artifact associated with speed of responding, (c) internal consistency, (d) sensitivity to known influences on IAT measures, and (e) resistance to known procedural influences. The best-performing measure incorporates data from the IAT’s practice trials, uses a metric that is calibrated by each respondent’s latency variability, and includes a latency penalty for errors. This new algorithm strongly outperforms the earlier (conventional) procedure.
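Two of the improved algorithm's named features — a metric calibrated by each respondent's own latency variability, and a latency penalty for errors — can be sketched in a few lines. This is a simplified illustration with hypothetical trial data, not the full published scoring table (which additionally scores practice and test block pairs separately and averages the resulting values):

```python
from statistics import mean, stdev

def d_score(compatible, incompatible, penalty_ms=600, max_latency_ms=10_000):
    """Respondent-level score for one pair of IAT blocks.

    Each block is a list of (latency_ms, is_error) trials.
    Simplified sketch; not the complete published algorithm.
    """
    def clean(block):
        # Drop implausibly slow trials before scoring.
        return [(rt, err) for rt, err in block if rt <= max_latency_ms]

    def penalize(block):
        # Latency penalty for errors: replace each error trial's latency
        # with the block's correct-trial mean plus a fixed penalty.
        correct_mean = mean(rt for rt, err in block if not err)
        return [correct_mean + penalty_ms if err else rt for rt, err in block]

    comp = penalize(clean(compatible))
    incomp = penalize(clean(incompatible))
    # The divisor is this respondent's own latency variability across both
    # blocks, which calibrates the metric per person rather than in raw ms.
    return (mean(incomp) - mean(comp)) / stdev(comp + incomp)
```

Dividing by the respondent's pooled latency standard deviation is what makes scores comparable across people who differ in overall response speed.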


2021, Vol 7 (1)
Author(s): David J. Johnson, David Ampofo, Serra A Erbas, Alison Robey, Harry Calvert, ...

The Implicit Association Test (IAT) is widely used to measure evaluative associations towards groups or the self but is also influenced by other traits. Siegel, Dougherty, and Huber (2012, Journal of Experimental Social Psychology) found that manipulating cognitive control via false feedback (Study 3) changed the degree to which the IAT was related to cognitive control versus evaluative associations. We conducted two replications of this study and a mini meta-analysis. Null-hypothesis tests, the meta-analysis, and a small-telescope approach yielded weak to no support for the original hypotheses. We conclude that the original findings are unreliable and that neither the original study nor our replications provide evidence that manipulating cognitive control affects IAT scores.
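A mini meta-analysis of replication estimates typically pools them by inverse-variance weighting. A generic fixed-effect sketch with made-up effect sizes (not the study's data):

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooling of effect estimates.

    effects: point estimates; ses: their standard errors.
    Returns the pooled estimate and its standard error.
    """
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical: two replication estimates, both small.
est, se = fixed_effect_meta([0.05, 0.02], [0.06, 0.05])
# A 95% confidence interval that includes zero is the kind of result
# summarized as "weak to no support" for the original effect.
ci = (est - 1.96 * se, est + 1.96 * se)
```

More precise studies (smaller standard errors) get larger weights, and the pooled standard error is always smaller than that of any single study.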


2019
Author(s): Louis H. Irving, Colin Smith

The Implicit Association Test (IAT) is nearly synonymous with the implicit attitude construct. At the same time, correlations between the IAT and criterion measures are often remarkably low. Developed within research using explicit measures of attitudes, the correspondence principle posits that measures should better predict criteria when there is a match in terms of the level of generality or specificity at which both are conceptualized (Ajzen & Fishbein, 1977). As such, weak implicit-criterion correlations are to be expected when broad general implicit measures are used to predict highly specific criteria. Research using explicit measures of attitudes consistently supports the correspondence principle, but conceptual correspondence is rarely considered by researchers using implicit measures to predict behavior and other relevant criterion measures. In five experiments (total N = 4650), we provide the first direct evidence demonstrating the relevance of the correspondence principle to the predictive validity of the IAT and Single Target IAT. That said, it is not the case that the IAT always predicts criteria better when correspondence is high. Inconsistency across the pattern of results suggests there is much more that remains to be understood about the relevance of the correspondence principle to the implicit-criterion relationship. Taken together, however, our findings suggest that conceptual correspondence typically increases (and never decreases) the magnitude of implicit-behavior and implicit-explicit relationships. We provide a framework for future research necessary to establish when correspondence is more likely to increase the predictive validity of measures such as the IAT.


2019, Vol 74 (5), pp. 569-586
Author(s): Benedek Kurdi, Allison E. Seitchik, Jordan R. Axt, Timothy J. Carroll, Arpi Karapetyan, ...
