Training individually fair ML models with Sensitive Subspace Robustness

2019-06-28 · ICLR 2020 · Code Available

Mikhail Yurochkin, Amanda Bower, Yuekai Sun


Abstract

We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases.
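The abstract describes the approach only at a high level. As a rough illustration, the distributionally robust training can be viewed as adversarial training in which the adversary's perturbations are restricted to a sensitive subspace of the inputs. Below is a minimal PyTorch sketch of that idea; the subspace, model, loss, and hyperparameters are illustrative assumptions, not the paper's exact SenSR algorithm or the authors' released code.

import torch
import torch.nn as nn

def sensitive_perturbation(model, loss_fn, x, y, basis, step=0.1, n_steps=10):
    # Gradient ascent on the loss, with movement restricted to the
    # sensitive subspace spanned by the columns of `basis` (an assumption
    # standing in for the paper's learned sensitive directions).
    delta = torch.zeros(x.size(0), basis.size(1), requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(model(x + delta @ basis.T), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step * grad  # ascend the loss along sensitive directions only
    return (x + delta @ basis.T).detach()

# Illustrative setup: pretend 2 of 10 input directions encode sensitive information.
torch.manual_seed(0)
basis, _ = torch.linalg.qr(torch.randn(10, 2))  # orthonormal sensitive subspace
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
for _ in range(5):
    x_adv = sensitive_perturbation(model, loss_fn, x, y, basis)
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()  # train on worst-case perturbed inputs
    opt.step()

In the paper itself, the sensitive directions are estimated from data (e.g., directions predictive of gender or race) rather than assumed known, and the adversary is constrained by a fair metric rather than a fixed number of ascent steps.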
