Bias reduction for endpoint estimation

Extremes ◽  
2010 ◽  
Vol 14 (4) ◽  
pp. 393-412 ◽  
Author(s):  
Deyuan Li ◽  
Liang Peng ◽  
Xinping Xu

An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings

ACM SIGKDD Explorations Newsletter ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 69-85 ◽  
Author(s):  
Hemank Lamba ◽  
Kit T. Rodolfa ◽  
Rayid Ghani

Applications of machine learning (ML) to high-stakes policy settings, such as education, criminal justice, healthcare, and social service delivery, have grown rapidly in recent years, sparking important conversations about how to ensure fair outcomes from these systems. The machine learning research community has responded to this challenge with a wide array of proposed fairness-enhancing strategies for ML models, but despite the large number of methods developed, little empirical work evaluates them in real-world settings. Here, we seek to fill this research gap by investigating the performance of several methods that operate at different points in the ML pipeline across four real-world public policy and social good problems. Across these problems, we find wide variability and inconsistency in the ability of many of these methods to improve model fairness, whereas postprocessing by choosing group-specific score thresholds consistently removes disparities. These findings have important implications both for the ML research community and for practitioners deploying machine learning to inform consequential policy decisions.
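The postprocessing approach the abstract singles out, picking a separate score threshold per group, is simple to sketch. The snippet below is a minimal illustration, not the authors' code: it chooses per-group thresholds on labeled validation data so that each group's recall (true positive rate) reaches a common target, one common way to equalize a fairness metric across groups. The function names, the recall target, and the choice of recall as the equalized metric are all assumptions for illustration.

```python
import numpy as np

def group_thresholds_for_recall(scores, labels, groups, target_recall=0.6):
    """Pick a per-group score threshold so each group's recall on
    labeled validation data is at least target_recall.
    Illustrative sketch only; not the paper's implementation."""
    thresholds = {}
    for g in np.unique(groups):
        # Scores of this group's actual positives, sorted high to low.
        pos = np.sort(scores[(groups == g) & (labels == 1)])[::-1]
        if pos.size == 0:
            thresholds[g] = np.inf  # no observed positives; flag no one
            continue
        # Thresholding at the k-th highest positive score recovers at
        # least k of the group's n positives, i.e. recall >= k / n.
        k = max(1, int(np.ceil(target_recall * pos.size)))
        thresholds[g] = pos[k - 1]
    return thresholds

def predict_with_group_thresholds(scores, groups, thresholds):
    """Apply each example's group-specific threshold."""
    cut = np.array([thresholds[g] for g in groups])
    return (scores >= cut).astype(int)

# Usage on synthetic data (assumed setup, for demonstration only):
rng = np.random.default_rng(0)
scores = rng.random(1000)
groups = rng.choice(["A", "B"], size=1000)
labels = (scores + rng.normal(0, 0.2, size=1000) > 0.7).astype(int)
th = group_thresholds_for_recall(scores, labels, groups)
preds = predict_with_group_thresholds(scores, groups, th)
```

Because each group's threshold is fit to that group's own score distribution, the same target recall is met for every group regardless of how miscalibrated the underlying model is across groups, which is the intuition behind why this family of postprocessing methods removes disparities so consistently.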


Nonresponse in the Dutch Time Use Survey: Strategies for Response Enhancement and Bias Reduction

Field Methods ◽  
2008 ◽  
Vol 21 (1) ◽  
pp. 69-90 ◽  
Author(s):  
Erik Van Ingen ◽  
Ineke Stoop ◽  
Koen Breedveld

Bias Reduction Using Mahalanobis Metric Matching

Biometrics ◽  
1980 ◽  
Vol 36 (2) ◽  
pp. 293-298 ◽  
Author(s):  
Donald B. Rubin
