
FAIRM: Learning invariant representations for algorithmic fairness and domain generalization with minimax optimality

2024-04-02 · Code Available

Sai Li, Linjun Zhang


Abstract

Machine learning methods often assume that the test data follow the same distribution as the training data. However, this assumption may fail due to multiple levels of heterogeneity in applications, raising issues in algorithmic fairness and domain generalization. In this work, we address the problem of fair and generalizable machine learning via invariance principles. We propose a training-environment-based oracle, FAIRM, which has desirable fairness and domain generalization properties under a diversity-type condition. We then provide an empirical FAIRM with finite-sample theoretical guarantees under weak distributional assumptions, and develop efficient algorithms to realize FAIRM in linear models, establishing nonasymptotic performance guarantees with minimax optimality. We evaluate our method in numerical experiments with synthetic data and MNIST data and show that it outperforms its counterparts.
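The linear-model setting in the abstract can be illustrated with a toy sketch: fit a regression within each training environment, keep only the features whose coefficients are stable across environments, and refit on those. This is a hypothetical stand-in in the spirit of invariance-based methods, not the authors' FAIRM algorithm; the stability threshold and simulated environments below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_env(n, shift):
    # Simulated environment: x0 has an invariant effect on y,
    # while x1's effect varies with the environment (via `shift`).
    x = rng.normal(size=(n, 2))
    y = 2.0 * x[:, 0] + shift * x[:, 1] + 0.1 * rng.normal(size=n)
    return x, y

envs = [make_env(500, s) for s in (0.0, 1.5)]  # two training environments

# Ordinary least squares within each environment.
coefs = np.array([np.linalg.lstsq(x, y, rcond=None)[0] for x, y in envs])

# Cross-environment instability per feature; small spread ~ invariant effect.
spread = coefs.max(axis=0) - coefs.min(axis=0)
invariant = spread < 0.5  # hypothetical threshold

# Refit on pooled data using only the (approximately) invariant features.
X = np.vstack([x for x, _ in envs])[:, invariant]
Y = np.concatenate([y for _, y in envs])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
print(invariant, beta)
```

Here the unstable feature is screened out and the pooled refit recovers the invariant coefficient; actual FAIRM replaces this ad hoc screening with an environment-based oracle and comes with minimax-optimal guarantees.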
