
Counterfactually Fair Representation

2023-11-09 · NeurIPS 2023 · Code Available

Zhiqun Zuo, Mohammad Mahdi Khalili, Xueru Zhang


Abstract

The use of machine learning models in high-stakes applications (e.g., healthcare, lending, college admission) has raised growing concerns due to potential biases against protected social groups. Various fairness notions and methods have been proposed to mitigate such biases. In this work, we focus on Counterfactual Fairness (CF), a fairness notion that depends on an underlying causal graph and was first proposed by Kusner et al. (2017); it requires that the outcome an individual receives is the same in the real world as it would be in a "counterfactual" world in which the individual belongs to another social group. Learning fair models that satisfy CF can be challenging. Kusner et al. (2017) showed that a sufficient condition for satisfying CF is to not use features that are descendants of sensitive attributes in the causal graph. This implies a simple method that learns CF models using only non-descendants of sensitive attributes while eliminating all descendants. Although several subsequent works proposed methods that use all features to train CF models, there is no theoretical guarantee that they satisfy CF. In contrast, this work proposes a new algorithm that trains models using all the available features. We show, both theoretically and empirically, that models trained with this method can satisfy CF. The code repository for this work can be found at https://github.com/osu-srml/CF_Representation_Learning.
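The simple baseline mentioned in the abstract (training only on non-descendants of the sensitive attribute) can be sketched as follows. This is an illustrative example, not the paper's proposed algorithm: the causal graph, node names (`A`, `X1`–`X3`, `Y`), and use of `networkx` to compute descendants are all assumptions made here for demonstration.

```python
import networkx as nx

# Hypothetical causal DAG: sensitive attribute A, features X1..X3, outcome Y.
# Edges point from cause to effect.
G = nx.DiGraph()
G.add_edges_from([
    ("A", "X1"),   # X1 is causally influenced by the sensitive attribute
    ("X2", "X1"),
    ("X2", "Y"),
    ("X3", "Y"),
    ("X1", "Y"),
])

sensitive = "A"
# Everything causally downstream of A (here: X1 and Y).
descendants = nx.descendants(G, sensitive)

all_features = {"X1", "X2", "X3"}
# The sufficient condition of Kusner et al.: keep only features that are
# neither the sensitive attribute nor one of its descendants.
fair_features = all_features - descendants - {sensitive}
print(sorted(fair_features))  # → ['X2', 'X3']
```

A model trained only on `fair_features` satisfies CF by the sufficient condition, but discards the information in `X1`; the paper's contribution is a method that uses all features while still guaranteeing CF.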
