Neutralizing Gender Bias in Word Embeddings with Latent Disentanglement and Counterfactual Generation

Author(s):  
Seungjae Shin ◽  
Kyungwoo Song ◽  
JoonHo Jang ◽  
Hyemi Kim ◽  
Weonyoung Joo ◽  
...  
Keyword(s):  

Author(s):  
Pablo Badilla ◽  
Felipe Bravo-Marquez ◽  
Jorge Pérez

Word embeddings are known to exhibit stereotypical biases towards gender, race, and religion, among other criteria. Several fairness metrics have been proposed in order to automatically quantify these biases. Although all metrics have a similar objective, the relationship between them is by no means clear. Two issues that prevent a clean comparison are that they operate with different inputs, and that their outputs are incompatible with each other. In this paper we propose WEFE, the Word Embeddings Fairness Evaluation framework, to encapsulate, evaluate and compare fairness metrics. Our framework takes as input a list of pre-trained embeddings and a set of fairness criteria, and it is based on checking correlations between the fairness rankings induced by these criteria. We conduct a case study showing that rankings produced by existing fairness methods tend to correlate when measuring gender bias. This correlation is considerably weaker for other biases like race or religion. We also compare the fairness rankings with an embedding benchmark, showing that there is no clear correlation between fairness and good performance in downstream tasks.
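The framework's core comparison step can be sketched in a few lines: given the bias scores that two metrics assign to the same set of embedding models, compute the Spearman correlation between the rankings they induce. The metric names, model names, and scores below are hypothetical toy data, not WEFE's actual API.

```python
# Illustrative sketch of comparing fairness rankings via Spearman rank
# correlation (pure Python, no ties assumed). Scores are made up.

def spearman(xs, ys):
    """Spearman correlation computed as Pearson correlation on ranks."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical bias scores (lower = less biased) that two different
# fairness metrics assign to the same three pre-trained embedding models.
scores = {
    "metric_A": {"glove": 0.81, "word2vec": 0.65, "fasttext": 0.42},
    "metric_B": {"glove": 1.90, "word2vec": 1.20, "fasttext": 0.70},
}
models = ["glove", "word2vec", "fasttext"]
a = [scores["metric_A"][m] for m in models]
b = [scores["metric_B"][m] for m in models]
print(spearman(a, b))  # identical induced rankings -> 1.0
```

A correlation near 1.0 means the two metrics agree on which embeddings are more biased even though their raw score scales differ, which is exactly the kind of comparison the paper reports for gender bias.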


2020 ◽  
Author(s):  
Katja Geertruida Schmahl ◽  
Tom Julian Viering ◽  
Stavros Makrodimitris ◽  
Arman Naseri Jahfari ◽  
David Tax ◽  
...  
Keyword(s):  

2020 ◽  
Vol 34 (05) ◽  
pp. 9434-9441
Author(s):  
Zekun Yang ◽  
Juan Feng

Word embedding has become essential for natural language processing as it boosts the empirical performance of various tasks. However, recent research has discovered that gender bias is incorporated in neural word embeddings, and downstream tasks that rely on these biased word vectors also produce gender-biased results. While some word-embedding gender-debiasing methods have been developed, these methods mainly focus on reducing the gender bias associated with the gender direction and fail to reduce the gender bias present in word embedding relations. In this paper, we design a simple causal approach for mitigating gender bias in word vector relations by utilizing the statistical dependency between gender-definition word embeddings and gender-biased word embeddings. Our method attains state-of-the-art results on gender-debiasing tasks, lexical- and sentence-level evaluation tasks, and downstream coreference resolution tasks.
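The general idea described above can be sketched as a regression step: remove from each gender-biased embedding the component that is linearly predictable from the gender-definition embeddings. This is a hedged sketch of that idea under simplifying assumptions (ordinary least squares, random toy vectors), not the authors' exact algorithm.

```python
import numpy as np

# Sketch: subtract the gender-explainable part of each biased embedding,
# where "explainable" means linearly predictable from gender-definition
# word embeddings. All vectors are random toy data; names are illustrative.

rng = np.random.default_rng(0)
d, k, m = 16, 3, 5          # embedding dim, #definition words, #biased words
D = rng.normal(size=(k, d)) # gender-definition embeddings (one per row)
B = rng.normal(size=(m, d)) # gender-biased embeddings (one per row)

# Least-squares coefficients A minimizing ||B - A @ D||_F^2:
#   A = B D^T (D D^T)^{-1}
A = B @ D.T @ np.linalg.inv(D @ D.T)
B_debiased = B - A @ D      # subtract the predicted (gender) component

# The residual is orthogonal to every gender-definition vector.
print(bool(np.abs(B_debiased @ D.T).max() < 1e-8))  # True
```

With plain least squares the residuals are exactly orthogonal to the span of the definition vectors, which is the sense in which the statistical dependency between the two sets of embeddings has been removed.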


2019 ◽  
Author(s):  
Christine Basta ◽  
Marta R. Costa-jussà ◽  
Noe Casas
Keyword(s):  
