The Bias of Individuals (in Crowds): Why Implicit Bias Is Probably a Noisily Measured Individual-Level Construct

2020 ◽  
Vol 15 (6) ◽  
pp. 1329-1345 ◽  
Author(s):  
Paul Connor ◽  
Ellen R. K. Evers

Payne, Vuletich, and Lundberg’s bias-of-crowds model proposes that a number of empirical puzzles can be resolved by conceptualizing implicit bias as a feature of situations rather than a feature of individuals. In the present article we argue against this model and propose that, given the existing evidence, implicit bias is best understood as an individual-level construct measured with substantial error. First, using real and simulated data, we show how each of Payne and colleagues’ proposed puzzles can be explained as being the result of measurement error and its reduction via aggregation. Second, we discuss why the authors’ counterarguments against this explanation have been unconvincing. Finally, we test a hypothesis derived from the bias-of-crowds model about the effect of an individually targeted “implicit-bias-based expulsion program” within universities and show the model to lack empirical support. We conclude by considering the implications of conceptualizing implicit bias as a noisily measured individual-level construct for ongoing implicit-bias research. All data and code are available at https://osf.io/tj8u6/.
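
The measurement-error-plus-aggregation argument is easy to see in a simulation. Below is a minimal Python sketch (not the authors’ code, which is available at the OSF link above) with hypothetical reliability and noise values: individual IAT-style scores correlate weakly across two measurement waves, yet city-level averages of the same scores are highly stable.

    import numpy as np

    rng = np.random.default_rng(0)
    n_cities, n_per_city = 50, 200

    # Hypothetical true biases: modest variation across cities and individuals
    city_means = rng.normal(0.35, 0.10, n_cities)
    true_bias = np.repeat(city_means, n_per_city) + rng.normal(0.0, 0.05, n_cities * n_per_city)

    # Large measurement error on each occasion, as with individual implicit-bias scores
    noise_sd = 0.30
    wave1 = true_bias + rng.normal(0.0, noise_sd, true_bias.size)
    wave2 = true_bias + rng.normal(0.0, noise_sd, true_bias.size)

    # Individual-level test-retest correlation is weak (~0.12 here) ...
    print(np.corrcoef(wave1, wave2)[0, 1])

    # ... but city-level aggregates of the same noisy scores are highly stable (~0.95)
    city = np.repeat(np.arange(n_cities), n_per_city)
    m1 = np.array([wave1[city == c].mean() for c in range(n_cities)])
    m2 = np.array([wave2[city == c].mean() for c in range(n_cities)])
    print(np.corrcoef(m1, m2)[0, 1])

Nothing about the scores becomes contextual in this simulation; averaging simply cancels independent noise, which is the crux of Connor and Evers’ argument.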


2020 ◽  
Author(s):  
Keith Payne ◽  
Heidi A. Vuletich ◽  
Kristjen B. Lundberg

The Bias of Crowds model (Payne, Vuletich, & Lundberg, 2017) argues that implicit bias varies across individuals and across contexts. It is unreliable and weakly associated with behavior at the individual level, but when scores are aggregated to measure context-level effects, they become stable and predictive of group-level outcomes. We concluded that the statistical benefits of aggregation are so powerful that researchers should reconceptualize implicit bias as a feature of contexts and ask new questions about how implicit biases relate to systemic racism. Connor and Evers (2020) critiqued the model, but their critique simply restates its core claims. They agreed that implicit bias varies across individuals and across contexts; that it is unreliable and weakly associated with behavior at the individual level; and that aggregating scores to measure context-level effects makes them more stable and predictive of group-level outcomes. Connor and Evers concluded that implicit bias should be considered a noisily measured individual-level construct because the effects of aggregation are merely statistical. We respond to their specific arguments and then discuss what it means for a construct to really be a feature of persons versus situations, as well as multilevel measurement and theory in psychological science more broadly.


Author(s):  
Alice R. Carter ◽  
Eleanor Sanderson ◽  
Gemma Hammerton ◽  
Rebecca C. Richmond ◽  
George Davey Smith ◽  
...  

Mediation analysis seeks to explain the pathway(s) through which an exposure affects an outcome. Traditional, non-instrumental-variable methods for mediation analysis suffer from a number of methodological difficulties, including bias due to confounding between the exposure, mediator, and outcome, and bias due to measurement error. Mendelian randomisation (MR) can be used to improve causal inference for mediation analysis. We describe two approaches to mediation analysis with MR: multivariable MR (MVMR) and two-step MR. We outline the approaches and provide code demonstrating how they can be used in mediation analysis. We review issues that can affect analyses, including confounding, measurement error, weak instrument bias, interactions between exposures and mediators, and analysis of multiple mediators. Description of the methods is supplemented by simulated and real-data examples. Although MR relies on large sample sizes and strong assumptions, such as having strong instruments and no horizontally pleiotropic pathways, our simulations demonstrate that these methods are unaffected by confounders of the exposure or mediator and the outcome and by non-differential measurement error of the exposure or mediator. Both MVMR and two-step MR can be implemented with individual-level MR and with summary-data MR. MR mediation methods require different assumptions from non-instrumental-variable mediation methods. Where these assumptions are more plausible, MR can be used to improve causal inference in mediation analysis.
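
As a rough illustration of two-step MR with individual-level data, the following Python sketch uses simulated genotypes, hypothetical effect sizes, and simple Wald-ratio estimators (the paper’s code covers MVMR and more realistic settings). Despite an unmeasured confounder of exposure, mediator, and outcome, the product of the two step-specific estimates recovers the indirect effect.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    g_x = rng.binomial(2, 0.3, n)   # genetic instrument for the exposure
    g_m = rng.binomial(2, 0.3, n)   # genetic instrument for the mediator
    u = rng.normal(size=n)          # unmeasured confounder

    x = 0.5 * g_x + u + rng.normal(size=n)              # exposure
    m = 0.4 * x + 0.5 * g_m + u + rng.normal(size=n)    # mediator
    y = 0.2 * x + 0.3 * m + u + rng.normal(size=n)      # outcome; true indirect effect 0.4 * 0.3 = 0.12

    def wald_ratio(g, expo, out):
        # IV estimate: cov(instrument, outcome) / cov(instrument, exposure)
        return np.cov(g, out)[0, 1] / np.cov(g, expo)[0, 1]

    beta_xm = wald_ratio(g_x, x, m)   # step 1: exposure -> mediator
    beta_my = wald_ratio(g_m, m, y)   # step 2: mediator -> outcome
    total = wald_ratio(g_x, x, y)     # total effect of exposure on outcome

    print("indirect:", beta_xm * beta_my)         # ~0.12, despite confounding by u
    print("direct:", total - beta_xm * beta_my)   # ~0.20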


2021 ◽  
Author(s):  
Matthew J. Hasenjager ◽  
William Hoppitt ◽  
Ellouise Leadbeater

Honeybees famously use waggle dances to communicate foraging locations to nestmates in the hive, thereby recruiting them to those sites. The decision to dance is governed by rules that, when operating collectively, are assumed to direct foragers to the most profitable locations with little input from potential recruits, who are presumed to respond similarly to any dance regardless of its information content. Yet variation in receiver responses can qualitatively alter collective outcomes. Here, we use network-based diffusion analysis to compare the collective influence of dance information during recruitment to feeders at different distances. We further assess how any such effects might be achieved at the individual level by dance-followers persisting with known sites when novel targets are distant and/or seeking more accurate spatial information to guide long-distance searches. Contrary to predictions, we found no evidence that dance-followers’ responses depended on target distance. While dance information was always key to feeder discovery, its importance did not vary with feeder distance, and bees were in fact quicker to abandon previously rewarding sites for distant alternatives. These findings provide empirical support for the longstanding assumption that self-organized foraging by honeybee colonies relies heavily on signal performance rules with limited input from recipients.


2019 ◽  
Author(s):  
Alice R Carter ◽  
Eleanor Sanderson ◽  
Gemma Hammerton ◽  
Rebecca C Richmond ◽  
George Davey Smith ◽  
...  

Mediation analysis seeks to explain the pathway(s) through which an exposure affects an outcome. Mediation analysis suffers from a number of methodological difficulties, including bias due to confounding and measurement error. Mendelian randomisation (MR) can be used to improve causal inference for mediation analysis. We describe two approaches to mediation analysis with MR: multivariable Mendelian randomisation (MVMR) and two-step Mendelian randomisation. We outline the approaches and provide code demonstrating how they can be used in mediation analysis. We review issues that can affect analyses, including confounding, measurement error, weak instrument bias, and analysis of multiple mediators. Description of the methods is supplemented by simulated and real-data examples. Although Mendelian randomisation relies on large sample sizes and strong assumptions, such as having strong instruments and no horizontally pleiotropic pathways, our examples demonstrate that it is unlikely to be affected by confounders of the exposure or mediator and the outcome, by reverse causality, or by non-differential measurement error of the exposure or mediator. Both MVMR and two-step MR can be implemented with individual-level MR and with summary-data MR, and can improve causal inference in mediation analysis.


Author(s):  
Randy Borum ◽  
Mary Rowe

Bystanders—those who observe or come to know about potential wrongdoing—are often the best source of preattack intelligence, including indicators of intent and “warning” behaviors. They are the reason that some planned attacks are foiled before they occur. Numerous studies of targeted violence (e.g., mass shootings and school shootings) have demonstrated that peers and bystanders often have knowledge of an attacker’s intentions, concerning communication, and troubling behavior before the attack occurs. This chapter describes—with empirical support—why threat assessment professionals should consider bystanders; outlines a model for understanding bystander decision-making; reviews common barriers to bystander reporting; and suggests ways to mitigate those barriers, to engage bystanders at an individual level, and to improve reporting. The principal aim of threat assessment is to prevent (primarily) intentional acts of harm. When tragic incidents of planned violence occur, however, it is almost always uncovered “that someone knew something” about the attack before it happened. This happens because, as attack plans unfold, people in several different roles may know, or come to know, something about what is happening before harm occurs. The perpetrators know, and so might others, including targets, family members, friends, coworkers, or even casual observers.


2015 ◽  
Vol 27 (1) ◽  
pp. 3-39 ◽  
Author(s):  
Nadine M Schöneck

Advanced modernity is regarded as an era of time obsession, and people in modernized societies seem to live harried lives. Leading time sociologists like Hartmut Rosa adopt a modernization-critical stance and ascribe an accelerated pace of life and frequent time scarcity to socioeconomic and technological advancement. According to these protagonists of the “acceleration debate,” time becomes increasingly precious due to severely changed conditions of work and private life. Against this background it can be assumed that many people suffer from an unsatisfactory work–life balance. This study uses individual-level data from the fifth round of the European Social Survey (fielded in 2010/11), as well as suitable country-level data capturing key features of advanced modernity, to empirically test assumptions arising from the “acceleration debate.” Results from multilevel analyses of 23 European countries provide some confirmation of these assumptions. While most macro indicators for 2010 reflecting a country's stage of development are uninfluential, a country's degree of globalization matters; moreover, growth rates of crucial macro indicators, signaling a country's pace of development, affect people's work–life balance in the assumed direction: in countries with accelerating economic development, household internet coverage, and numbers of new cars, working people show a significantly greater inclination toward an unsatisfactory work–life balance. Alongside these country-level results, individual-level determinants and group-specific differences in work–life balance under different conditions of advanced modernity are presented. This study's two main findings—(1) paces of development matter more than stages of development and (2) assumptions arising from the “acceleration debate” receive some empirical support—are thoroughly reflected on and discussed.
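
A bare-bones version of such a multilevel design can be sketched in Python with statsmodels; the variables, effect sizes, and data below are hypothetical stand-ins, not the ESS data or the author's actual specification. The model regresses an individual-level imbalance score on an individual-level determinant (working hours) and a country-level pace-of-development indicator, with random intercepts for countries.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_countries, n_per = 23, 400
    country = np.repeat(np.arange(n_countries), n_per)

    growth = rng.normal(1.5, 1.0, n_countries)    # hypothetical country-level growth rate
    u = rng.normal(0.0, 0.3, n_countries)         # country random intercepts
    hours = rng.normal(40.0, 8.0, country.size)   # individual-level working hours

    # Hypothetical data-generating process: faster growth -> worse (higher) imbalance
    imbalance = (1.0 + 0.15 * growth[country] + u[country]
                 + 0.02 * (hours - 40.0) + rng.normal(0.0, 1.0, country.size))

    df = pd.DataFrame({"imbalance": imbalance, "growth": growth[country],
                       "hours": hours, "country": country})
    fit = smf.mixedlm("imbalance ~ growth + hours", df, groups=df["country"]).fit()
    print(fit.summary())   # fixed effects for growth and hours, plus country-level variance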


1988 ◽  
Vol 41 (1) ◽  
pp. 1-20 ◽  
Author(s):  
Bruce Bueno de Mesquita ◽  
David Lalman

Systemic theorists emphasize the interplay of the distribution of power, the number of poles, and their tightness in predicting the occurrence of major-power war. The authors link individual-level incentives to these systemic constraints as factors that might affect the likelihood of war. They believe that their model specification is more comprehensive than any prior effort to evaluate the impact of structural attributes on the risk of major-power war. Empirical results from the individual-level perspective are encouraging when one examines European crises from 1816 to 1965, but there is no evidence that decision makers were significantly constrained by variations in the structural attributes. Neither the distribution of power nor the number or tightness of poles appears to influence the risk of war.


2019 ◽  
Author(s):  
Cong Ma ◽  
Carl Kingsford

Mutual information is widely used to characterize dependence between biological signals, such as co-expression between genes or co-evolution between amino acids. However, measurement error of the biological signals is rarely considered in estimating mutual information. Measurement error is widespread and, in some cases, non-negligible. As a result, the distribution of the signals is blurred, and mutual information may be biased when estimated from the blurred measurements. We derive a corrected estimator for mutual information that accounts for the distribution of measurement error. Our corrected estimator is based on correcting the probability mass function (PMF) or the probability density function (PDF, based on kernel density estimation). We prove that the corrected estimator is asymptotically unbiased in the (semi-)discrete case when the distribution of measurement error is known. We show that it reduces the estimation bias in the continuous case under certain assumptions. On simulated data, our corrected estimator leads to a more accurate estimate of mutual information when the sample size is not the limiting factor for estimating the PMF or PDF accurately. We compare the uncorrected and corrected estimators on gene expression data from TCGA breast cancer samples and show a difference in both the values and the rankings of estimated mutual information between the two estimators.
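
A toy discrete version of the correction can be written in a few lines of Python. Assuming a known symmetric error channel that flips each binary signal with probability 0.2 (the paper's estimator handles general error distributions and the continuous case), inverting the channel on the estimated PMF removes most of the bias of the naive plug-in estimate.

    import numpy as np

    rng = np.random.default_rng(3)

    # True joint PMF of two dependent binary signals
    P = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
    # Known error channel: each signal is flipped with probability 0.2
    E = np.array([[0.8, 0.2],
                  [0.2, 0.8]])

    def mutual_info(pmf):
        px = pmf.sum(axis=1, keepdims=True)
        py = pmf.sum(axis=0, keepdims=True)
        nz = pmf > 0
        return float((pmf[nz] * np.log(pmf[nz] / (px @ py)[nz])).sum())

    # Draw noisy observations of (X, Y)
    n = 200_000
    xy = rng.choice(4, size=n, p=P.ravel())
    x, y = xy // 2, xy % 2
    x_obs = np.where(rng.random(n) < 0.2, 1 - x, x)
    y_obs = np.where(rng.random(n) < 0.2, 1 - y, y)
    Q_hat = np.bincount(2 * x_obs + y_obs, minlength=4).reshape(2, 2) / n

    # Correct the PMF by inverting the known error channel, then re-estimate MI
    P_hat = np.linalg.inv(E) @ Q_hat @ np.linalg.inv(E).T
    P_hat = np.clip(P_hat, 0.0, None)
    P_hat /= P_hat.sum()

    print("true MI:", mutual_info(P))          # ~0.19 nats
    print("naive MI:", mutual_info(Q_hat))     # biased low, ~0.02 nats
    print("corrected MI:", mutual_info(P_hat)) # close to the true value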


2016 ◽  
Vol 13 (1) ◽  
Author(s):  
Jose Pina-Sánchez

It is widely accepted that, due to memory failures, retrospective survey questions tend to be prone to measurement error. However, the proportion of studies using such data that attempt to adjust for the measurement problem is shockingly low. Arguably, this is largely due to both the complexity of the available methods and the need to access a subsample containing either a gold standard or replicated values. Here I suggest the implementation of a version of SIMEX capable of adjusting for the types of multiplicative measurement errors associated with memory failures in the retrospective reporting of durations of life-course events. SIMEX is relatively simple to implement and does not require replicated or validation data, so long as the error process can be adequately specified. To assess the effectiveness of the method I use simulated data. I create twelve scenarios based on the combinations of three outcome models (linear, logit, and Poisson) and four types of multiplicative error (non-systematic, systematic negative, systematic positive, and heteroscedastic) affecting one of the explanatory variables. I show that SIMEX can be satisfactorily implemented in each of these scenarios. Furthermore, the method can achieve partial adjustments even in scenarios where the actual distribution and prevalence of the measurement error differ substantially from what is assumed in the adjustment, which makes it an interesting sensitivity tool in cases where all that is known about the error process amounts to an educated guess.
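
The core SIMEX loop is short. Below is a minimal sketch for multiplicative (lognormal) error affecting a log-duration regressor in a linear outcome model, with hypothetical parameter values; the article covers logit and Poisson outcomes and systematic and heteroscedastic error types as well. Progressively larger amounts of error are simulated on top of the observed values, the coefficient is tracked as a function of the added-error multiplier lambda, and a quadratic fit is extrapolated back to lambda = -1, the error-free case.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 5_000
    x = rng.lognormal(2.0, 0.5, n)          # true durations (e.g., months)
    sigma = 0.4                             # assumed error scale on the log axis
    w = x * rng.lognormal(0.0, sigma, n)    # observed durations with multiplicative error
    y = 1.0 + 0.5 * np.log(x) + rng.normal(0.0, 1.0, n)   # true slope is 0.5

    def slope(wvals):
        # OLS slope of y on log(duration)
        return np.polyfit(np.log(wvals), y, 1)[0]

    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    B = 200   # simulations per lambda
    est = [slope(w) if lam == 0 else
           np.mean([slope(w * rng.lognormal(0.0, np.sqrt(lam) * sigma, n))
                    for _ in range(B)])
           for lam in lambdas]

    # Extrapolate the coefficient back to lambda = -1 (no measurement error)
    coefs = np.polyfit(lambdas, est, 2)
    print("naive estimate:", est[0])                    # attenuated toward zero (~0.30)
    print("SIMEX estimate:", np.polyval(coefs, -1.0))   # closer to the true 0.5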

