Balancing Biases and Preserving Privacy on Balanced Faces in the Wild

2021 ◽  
Author(s):  
Joseph Robinson ◽  
Yun Fu ◽  
Samson Timoner ◽  
Yann Henon ◽  
Can Qin

There are demographic biases in current models used for facial recognition (FR). Our Balanced Faces in the Wild (BFW) dataset serves as a proxy to measure bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup. We show that performance is non-optimal when a single score threshold is used to determine whether sample pairs are genuine or imposter. Across subgroups, performance differs from that reported for the entire dataset, so claims of specific error rates hold true only for populations matching those of the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme applied to facial features extracted with a state-of-the-art model. Not only does this technique balance performance across subgroups, but it also boosts overall performance. A benefit of the proposed method is that it preserves identity information in the facial features while removing demographic knowledge from the lower-dimensional features. Removing demographic knowledge prevents potential future biases from being injected into decision-making and addresses privacy concerns. We explore why this works qualitatively, and we show quantitatively that subgroup classifiers can no longer learn from the features mapped by the proposed method.
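
The single-threshold issue described above can be illustrated with a small sketch (not the authors' code): given verification scores for sample pairs, a threshold tuned on the pooled data yields uneven false match / false non-match rates across subgroups, whereas per-subgroup thresholds equalize the operating point. The scores, labels, subgroup codes, and the helper `threshold_at_fmr` below are all hypothetical stand-ins for BFW-style data.

```python
# Minimal sketch: a single global decision threshold vs per-subgroup thresholds.
# `scores` are cosine-similarity-like scores for pairs, `genuine` marks true
# matches, `subgroups` holds an ethnicity-gender tag per pair (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

def error_rates(scores, genuine, threshold):
    """False match rate (FMR) and false non-match rate (FNMR) at a threshold."""
    decisions = scores >= threshold
    fmr = np.mean(decisions[~genuine])   # imposter pairs accepted
    fnmr = np.mean(~decisions[genuine])  # genuine pairs rejected
    return fmr, fnmr

def threshold_at_fmr(scores, genuine, target_fmr=1e-3):
    """Pick the score threshold whose imposter acceptance rate is ~target_fmr."""
    imposter_scores = np.sort(scores[~genuine])
    k = int(np.ceil((1.0 - target_fmr) * len(imposter_scores))) - 1
    return imposter_scores[max(k, 0)]

# Toy data: score distributions that are slightly shifted per subgroup.
subgroups = np.repeat(["AF", "AM", "WF", "WM"], 2000)
genuine = np.tile(np.array([False, True]), 4000)
shift = np.select([subgroups == s for s in ("AF", "AM", "WF", "WM")],
                  [0.08, 0.05, 0.02, 0.0])
scores = np.where(genuine,
                  rng.normal(0.55, 0.12, 8000),
                  rng.normal(0.15, 0.12, 8000)) + shift

# One global threshold tuned on the pooled data ...
t_global = threshold_at_fmr(scores, genuine)
for s in ("AF", "AM", "WF", "WM"):
    m = subgroups == s
    fmr, fnmr = error_rates(scores[m], genuine[m], t_global)
    # ... versus a threshold tuned per subgroup for the same target FMR.
    t_local = threshold_at_fmr(scores[m], genuine[m])
    fmr_l, fnmr_l = error_rates(scores[m], genuine[m], t_local)
    print(f"{s}: global t={t_global:.3f} FMR={fmr:.4f} FNMR={fnmr:.4f} | "
          f"local t={t_local:.3f} FMR={fmr_l:.4f} FNMR={fnmr_l:.4f}")
```

With the global threshold, the subgroup with the shifted score distribution accepts more imposters than the nominal target, while others reject more genuine pairs; per-subgroup thresholds bring every group back to the intended operating point.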


2009 ◽  
Author(s):  
Erin Winterrowd ◽  
Silvia Canetto ◽  
April Biasiolli ◽  
Nazanin Mohajeri-Nelson ◽  
Aki Hosoi ◽  
...  

2020 ◽  
Author(s):  
EAR Losin ◽  
CW Woo ◽  
NA Medina ◽  
JR Andrews-Hanna ◽  
Hedwig Eisenbarth ◽  
...  

Understanding ethnic differences in pain is important for addressing disparities in pain care. A common belief is that African Americans are hyposensitive to pain compared to Whites, but African Americans show increased pain sensitivity in clinical and laboratory settings. The neurobiological mechanisms underlying these differences are unknown. We studied an ethnicity- and gender-balanced sample of African Americans, Hispanics and non-Hispanic Whites using functional magnetic resonance imaging during thermal pain. Higher pain report in African Americans was mediated by discrimination and increased frontostriatal circuit activations associated with pain rating, discrimination, experimenter trust and extranociceptive aspects of pain elsewhere. In contrast, the neurologic pain signature, a neuromarker sensitive and specific to nociceptive pain, mediated painful heat effects on pain report largely similarly in African American and other groups. Findings identify a brain basis for higher pain in African Americans related to interpersonal context and extranociceptive central pain mechanisms and suggest that nociceptive pain processing may be similar across ethnicities.
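
The central claim here rests on statistical mediation: group differences in pain report operating through an intermediate variable such as discrimination or frontostriatal activity. As a rough illustration only, the sketch below estimates a single-mediator indirect effect with a nonparametric bootstrap on synthetic data; it is not the study's multi-level fMRI mediation framework, and all variable names and numbers are hypothetical.

```python
# Minimal single-mediator sketch (synthetic data): indirect effect of a group
# indicator on pain report through a mediator, with a bootstrap 95% CI.
import numpy as np

rng = np.random.default_rng(1)
n = 300
group = rng.integers(0, 2, n).astype(float)                 # 0/1 group indicator
mediator = 0.6 * group + rng.normal(0, 1, n)                # path a
pain = 0.5 * mediator + 0.1 * group + rng.normal(0, 1, n)   # paths b and c'

def indirect_effect(x, m, y):
    """a*b from two least-squares fits: m ~ x, then y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]                       # slope of m on x
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]      # coefficient on m
    return a * b

# Bootstrap the indirect effect by resampling rows with replacement.
boot = np.array([
    indirect_effect(group[idx], mediator[idx], pain[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(group, mediator, pain):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```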


Author(s):  
Megan Bryson

This book follows the transformations of the goddess Baijie, a deity worshiped in the Dali region of southwest China's Yunnan Province, to understand how local identities developed in a Chinese frontier region from the twelfth century to the twenty-first. Dali, a region where the cultures of China, India, Tibet, and Southeast Asia converge, has long served as a nexus of religious interaction even as its status has changed. Once the center of independent kingdoms, it was absorbed into the Chinese imperial sphere with the Mongol conquest and has remained there ever since. Goddess on the Frontier examines how people in Dali developed regional religious identities through the lens of the local goddess Baijie, whose shifting identities over this span of time reflect shifting identities in Dali. She first appears as a Buddhist figure in the twelfth century, then becomes known as the mother of a regional ruler, next takes on the role of an eighth-century widow martyr, and finally is worshiped as a tutelary village deity. Each of her forms illustrates how people in Dali represented local identities through gendered religious symbols. Taken together, they demonstrate how regional religious identities in Dali developed as a gendered process as well as an ethno-cultural process. This book applies interdisciplinary methodology to a wide variety of newly discovered and unstudied materials to show how religion, ethnicity, and gender intersect in a frontier region.


Author(s):  
Timnit Gebru

This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.

