
Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

2021-09-29

Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Iman Vakilinia


Abstract

Supervised learning models are increasingly used in domains such as lending, college admission, natural language processing, and face recognition. These models may inherit pre-existing biases from their training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address such issues. In general, finding a fair predictor leads to a constrained optimization problem which, depending on the fairness notion, may be non-convex. In this work, we focus on Equalized Loss (EL), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint on the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that leverage off-the-shelf convex programming tools and efficiently find the global optimum of this non-convex problem. In particular, we first propose the ELminimizer algorithm, which finds the optimal EL-fair predictor by reducing the non-convex optimization problem to a sequence of convex constrained optimizations. We then propose a simpler algorithm that is computationally more efficient than ELminimizer and finds a sub-optimal EL-fair predictor using unconstrained convex programming tools. Experiments on real-world data show the effectiveness of our algorithms.
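The Equalized Loss notion described in the abstract can be made concrete with a small sketch. The helper names (`group_losses`, `el_gap`) and the toy data below are illustrative assumptions, not the paper's implementation: they simply measure how far a predictor is from satisfying EL, i.e. the spread between per-group average losses.

```python
import numpy as np

def group_losses(y_true, y_pred, groups):
    """Average squared loss within each demographic group."""
    return {g: float(np.mean((y_true[groups == g] - y_pred[groups == g]) ** 2))
            for g in np.unique(groups)}

def el_gap(y_true, y_pred, groups):
    """Violation of the Equalized Loss constraint: max minus min group loss.
    A predictor satisfying EL exactly has el_gap == 0."""
    losses = group_losses(y_true, y_pred, groups).values()
    return max(losses) - min(losses)

# Toy illustration (hypothetical data): group 1 suffers a higher loss than group 0,
# so this predictor violates the EL constraint.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.0, 2.0, 4.0])
groups = np.array([0, 0, 1, 1])
gap = el_gap(y_true, y_pred, groups)  # 0.5 here
```

The paper's algorithms search for the predictor minimizing overall loss subject to this gap being (approximately) zero; the sketch only evaluates the constraint for a fixed predictor.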
